Thursday, 18 October 2007

What makes X hard?

Mark Guzdial has written a nice article, What makes programming so hard?

But I say, "Programming: Easy. Dancing: Hard." What he is really talking about is the bimodal nature of skill acquisition. In many areas: Some people really get it; and others don't get it at all, and then there is the group in the middle who kinda get it.

Of course, this split is not accepted in certain fundamental skills -- talking, walking, self-feeding, toileting -- where only those with significant disabilities are excused from attaining a reasonable competence.

In fact, everyone will find many things hard and many things easy. For example, when I audited a drama subject as part of a teaching degree I found a bunch of (lovely) people who united in their terror of mathematics. In my mathematics subjects, I am sure that I would have found some who felt the same way about getting up on stage.

So what are the elements of (basic) programming?

* Composition
* Decomposition
* Visualization
* Precision and clarity
* Use of and acquisition of formal language
* Memory
* Planning
* Logic
* Numeracy

Deficiency in any of these is likely to lead to frustration. But I maintain that there are two qualities that are necessary for any kind of learning:

* Wanting to learn X
* Patience when an element of X does not come easily

Another factor is having learned something in the past which interferes with the new area of learning. Examples:

* Tennis requires a firm wrist on impact; squash a flexible wrist
* Riding a bike caused me to almost throw myself off an adult-sized trike
* My English grammar makes it hard for me to adapt to Hebrew
* "You can teach anyone Lisp in 1 day, but it takes 3 if they already know C" (the use of parentheses is radically different)

If someone really wants to learn something, and is patient and quietly determined, and has access to an instructor who can help in the basic areas, learning is possible, and may be surprisingly fruitful.

One test of a dance teacher is whether (s)he can teach these fundamentals to someone who lacks a sense of rhythm or is uncoordinated, not just so that they can be applied in learning dances, but so that they transfer to other areas.

Similarly for other teachers.

Monday, 17 September 2007

How many boys? How many girls?

I hadn’t heard this one before, but I like it a lot:

In a country in which people only want boys, every family continues to have children until they have a boy. If they have a girl, they have another child. If they have a boy, they stop.

What is the proportion of boys to girls in the country?

Apparently used as a Google interview question (not that I’m a great fan of puzzle-based interviewing).

My long solution first

Distribution of families as proportions of all families in the country:

1/2: B
1/4: GB
1/8: GGB
1/16: GGGB
etc.

Let F be the number of families.

How many boys?

Since every family stops after they get a boy the number of boys is F.

Alternatively, we can count the contributions of the different families:

Total boys = F x (1/2 + 1/4 + 1/8 + 1/16 + ...)

This demonstrates that the infinite series

1/2 + 1/4 + 1/8 + 1/16 + ...

sums to 1, which is also apparent if you stand 1 metre from a wall and repeatedly halve the remaining distance: 50 cm, then 25 cm, and so on.

How many girls?

Again we sum:

Total girls / F = 1/4 x 1 + 1/8 x 2 + 1/16 x 3 + ...

= 1/4 + 1/8 + 1/16 + ...
+ 1/8 + 1/16 + ...
+ 1/16 + ...
+ ...

= 1/2 x (1/2 + 1/4 + 1/8 + 1/16 + ...)
+ 1/4 x (1/2 + 1/4 + 1/8 + 1/16 + ...)
+ 1/8 x (1/2 + 1/4 + 1/8 + 1/16 + ...)
+ ...

= 1/2 x 1
+ 1/4 x 1
+ 1/8 x 1
+ ...

= 1/2 + 1/4 + 1/8 + 1/16 + ...

= 1

So, Total girls = F


Solution

Equal numbers!


Simplifying assumptions?

  • 50-50 birth rate
  • No multiple births
  • Large population
  • No account made of multiple generations

Even without these assumptions, I would guess that the solution roughly holds.

Simple solution

Each child born has a 50-50 chance of being a boy or a girl. Each birth is independent, and not influenced by decisions of this, or any other family, so naturally half are boys and half are girls.
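
For the skeptical, here is a tiny Monte Carlo sketch in Python (purely illustrative; the function name and family count are arbitrary) that simulates the stopping rule and confirms the 50-50 outcome.

import random

def simulate(families=100000):
    # Each family keeps having children until the first boy, then stops.
    boys = girls = 0
    for _ in range(families):
        while random.random() < 0.5:   # girl: have another child
            girls += 1
        boys += 1                      # boy: stop
    return boys, girls

boys, girls = simulate()
print(boys, girls, girls / boys)   # the ratio of girls to boys hovers around 1.0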



Sunday, 16 September 2007

What's important?

Computer scientist and mathematician Richard Hamming -- he of the Hamming Code -- used to ask his colleagues two questions. In order:
  1. What are the most important issues in your field?, and the follow-up
  2. Why aren't you working on those?
Often this induced a cold-shoulder response, but others were grateful for the nudge.

I have been meditating on what the equivalent question should be in a business, as opposed to Science. My tentative questions are:
  1. What are the most important issues facing our customers?
  2. Why aren't we working on those?
From this it follows that deeply understanding our customers' issues is of paramount importance.

Sunday, 26 August 2007

Academic Cross-Training

Cross-training is not a new idea.

In most professional sports cross-training is incorporated into the usual training regime. Although weight-training and swimming are popular, there are more exotic options around. Some Australian Rules footballers have even dabbled in ballet.

In cultures that emphasize the development of the individual as well as excellence, breadth and depth are valued. And for those who like the etymological definition of philosophy -- love of learning -- this is a bit of a no-brainer.

Now, if you are focussed on a narrow goal rather than a wide-ranging journey, the diversions of breadth may prove a waste of time. But if the converse holds, you will find that there is much to learn by exploring other disciplines. Here's what happened when physicist Richard Feynman ventured into biology.

If two academic disciplines are dealing with similar material at a deep level, chances are that each has something to offer an individual who crosses over from the other side.

Of course, forcing everyone to study X usually leads to resentment from a significant proportion of those so conscripted, so when I say should study, I really mean should be encouraged to study.

In terms of excellence, someone should make a list of people who have achieved excellence after switching fields.

* * *

In sports, it is well-known that gymnasts do well after switching to diving and ski-jumping. From this I infer that gymnastics teaches transferable skills.

What are the nominations for the gymnastics of academic disciplines?


Thursday, 23 August 2007

The Programming Djinni Grants Wishes

As a commercial product develops, the wish-list grows longer and longer. User suggestions and feedback are collated and prioritized. Pretty soon you have a very large list.

The tragedy of the C cases
Here's the problem: Lots of worthy but little items are continually pushed to the bottom of the pile by the latest big feature.

I call this "the tragedy of the C cases". These items of functionality may languish indefinitely, never bubbling to the top, precisely because the prioritization discipline is respected.

There are disciplined ways around this, such as Scrum and XP style planning, but these require buy-in from the non-programmers and may not be achievable within your organization.

The magic approach
Here's a simple and fun alternative for a small company or department: One day per month -- customarily on the 23rd (or the nearest weekday) -- one or more programmers turn into djinnis who attempt to grant reasonable and achievable wishes to one lucky person in the company.

This breaks up the usual routine (a good thing as per the Hawthorne effect), is engaging for the lucky person -- who sees some immediate results -- and it is also good for the programmers, who get the satisfaction of working directly with a tangible person for a change rather than through a mediated priority list. And the software gets nicely polished.

Try it: See how you go; let me know.

Tuesday, 21 August 2007

Re-write or re-factor?

On individual projects I have usually found that a re-write leads to smaller, cleaner, faster solutions.

I attribute this to learning acquired from the previous attempts/versions, which I can subsequently incorporate in the form of improved abstractions and better trade-offs.

Some re-use of "golden nuggets" from earlier iterations may also be possible, and is certainly desirable.

Some lessons learned can be incorporated incrementally through re-factorings, but there are times when incremental improvement takes you to a local maximum and traps you there.

On big commercial projects other considerations come into play. Until the new code-base is up you need to contend with the cost of parallel development, and this period will be longer the greater the legacy. Unless, of course, your new abstractions are brilliantly efficient, and/or you can cut away a lot of stuff that was not needed.

Once a project is sufficiently large, given finite resources, it may eventually be too late to ever re-write!

Here's some more useful discussion by Adam Turoff on these issues, prompted by a survey question by Ovid.

Is Law a Branch of Computer Science?

I studied Law as well (as Science) early in my University career before relinquishing it on account of near-terminal boredom during Contracts (the content of the first third of which ironically proved quite useful to me subsequently).

I remember a visiting lecture by a Government draftsperson, whose job it was to draft legislation. He also happened to be blind. Aha, I thought, this is why legislation is so appallingly structured. That was undoubtedly unfair.

Later I came to the view that lawyers should study programming in order to learn how to structure large descriptions about processes and contingencies.

Now, Dave has come along with this brilliant comparison of legal language and programming languages. It is funny because there are several truths in there.

So, perhaps in the 21st century it is time to include a compulsory "programming for lawyers" unit, along with "legal process" as an introductory subject?

This would extend the wider view of Computer Science as a Natural Science, to cultural endeavours such as Law.

Monday, 20 August 2007

Calculus? Which Calculus?

In high school and University I learned Calculus, by which I mean the Differential and Integral Calculus of Newton and Leibniz, and their extensions. I do not think that I really grasped the meaning of a continuous function until I studied metric topology.

I like calculus, I have used it professionally, and it is a glory of the modern age, but I do question the cost/benefit of teaching it to millions (billions?) of children on account of its difficulty.

What makes calculus hard?
I would say the relatively high level of abstraction, and in particular becoming comfortable with either limits or infinitesimals (hello: non-standard Analysis). These things strain our intuition.

Why is calculus taught in high-school? My guesses:
  • It is essential for physics and engineering, and other quantitative fields
  • It is very beautiful and powerful (but you will not see that at high school)
  • It is challenging
  • It has a filtering effect on students
  • Tradition
Since it has to be re-taught at College / University, I wonder whether it might be time to start teaching other Calculi in high school, just to mix things up a bit.

Perhaps a progressive school could hold a "calculus bake-off" and try to gauge the suitability and broad benefits of teaching and learning the various calculi?

Which other calculi could be taught?
Some might opt for the predicate calculus (also known as first-order logic), but my vote goes to the lambda calculus, which is fundamental to the theory of computing, and useful in practice. In fact there are computer languages such as Scheme (a Lisp dialect), ML, and Haskell that are essentially souped-up forms of the lambda calculus.
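
To give a concrete taste of what "souped-up lambda calculus" means, here is a small sketch of Church numerals using Python's lambda notation (an illustration only, not part of any curriculum proposal): numbers are represented purely as functions, and addition falls out of function application.

# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral back to an ordinary integer by counting applications.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # prints 5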

And if high-school sounds a bit late to get started with such an important subject, here is a game designed for eight year-olds that introduces the essential ideas.

Thursday, 16 August 2007

There is no such thing as Computer Science

Not my words, but from a rather enjoyable rant, What's wrong with CS research. Here's the bit containing the best quote (my emphasis):
So here's the first thing that's wrong with CS research: there's no such thing as CS research. First, there is no such thing as "computer science." Except for a few performance tests and the occasional usability study, nothing any CS researcher does has anything to do with the Scientific Method. Second, there is no such thing as "research." Any activity which is not obviously productive can be described as "research." The word is entirely meaningless. All just semantics, of course, but it's hardly a good sign that even the name is fraudulent.

When we look at what "CS researchers" actually do, we see three kinds of people. We can describe them roughly as creative programmers, mathematicians, and bureaucrats.
If only he had claimed that there was no such thing as Science, now that would have been a fun proposition to explore!

Sunday, 12 August 2007

Learning about (Computational) Monads

I am interested in acquiring programming idioms -- or if you like patterns -- that facilitate a clear and disciplined approach to organizing state. As applications grow large and certain operations transform state in a large-scale fashion, it becomes harder to understand and modify program behavior.

Perhaps monads offer a better way, but I don't really get them yet ...

What are Monads?


I think Leibniz invented the term monad to describe some kind of indivisible fundamental entity in his philosophy: Presumably the term atom was too Greek for him. Anyway: I do not mean philosophical monads, or biological monads, or mathematical -- category theory -- monads (although now we are getting warmer).

Right now I mean computational monads, the ones so beloved of Haskell programmers, but potentially of use elsewhere.
A monad is a family of types M t, based on a polymorphic type constructor M, with functions

return :: t -> M t
(>>=) :: M t -> (t -> M u) -> M u

satisfying

return a >>= f = f a
m >>= return = m
m >>= (\a -> (f a) >>= g) = (m >>= f) >>= g
This definition gave me flashbacks to Pure Mathematics Honours (4th year University) where I encountered some classes so abstract as to seem totally detached from reality: "Hello Banach Spaces and Algebras!"

How to get it

But having survived a mathematics education and having a ridiculously high sense of self-efficacy I know how to get through this kind of obstacle. There are two methods:
  1. Abstractions first: Treat this as a game where the rules are given, and play with them until they become familiar and seem natural.
  2. Empirical approach: Walk in the footsteps of the discoverers by building some practical experience first, so that the abstractions begin to make sense.
Better still, work from both ends. Anyway, both approaches require doing, as opposed to reading, and I am starting with the following tutorial:

You Could Have Invented Monads! (And Maybe You Already Have)

which seems to be helping, and belongs to the latter camp.
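
In that do-it-yourself spirit, here is a minimal Maybe-style sketch in Python rather than Haskell (unit, bind and the lookup tables are invented names, purely for illustration): unit plays the role of return, bind plays the role of >>=, and failure (None) propagates through a chain of computations without explicit checks at each step.

# Maybe-style monad sketch: None represents failure, any other value success.
def unit(x):
    # return :: t -> M t
    return x

def bind(m, f):
    # (>>=) :: M t -> (t -> M u) -> M u
    return None if m is None else f(m)

# Example: chain two dictionary lookups, either of which may fail.
addresses = {"dan": "melbourne"}
postcodes = {"melbourne": 3000}

print(bind(bind(unit("dan"), addresses.get), postcodes.get))   # 3000
print(bind(bind(unit("bob"), addresses.get), postcodes.get))   # None: the failure propagates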

More later ...


Thursday, 9 August 2007

Functional Programming Interest Group

An idea has been slowly germinating in my mind: Try to start a new special interest group (SIG) meeting in Melbourne once a month, but also with a virtual presence.

I have been working slowly through Abelson & Sussman's Structure and Interpretation of Computer Programs (SICP) and its exercises, the MIT classic "introductory" text, but would like to share the experience. The great thing about SICP is that it is free online, as are the accompanying video lectures -- the first was inspirational, but subsequently I have relied on the text. The tools are available online and free. The language used is Scheme (a teaching/research Lisp), but it's the ideas that come through.

There are other books out there which promise to capture exciting ideas using various f.p.-ish languages as vehicles:
  • Hudak, The Haskell School of Expression
  • Armstrong, Programming in Erlang
  • Siebel, Practical Common Lisp (text available online)
I believe that languages such as these are worth studying for the ideas that they develop:

Haskell: Lazy evaluation, monadic computation, mathematical modelling, and more ...
Erlang: Practical, reliable, massive parallelism, ...
Lisps: Code-data duality

I am also interested in F# (as the .NET representative of the ML / OCaml family), but am waiting on the publication and review of more books. [Forgive me for leaving Smalltalk, Forth, plus the various languages du jour off my list: There may well be too many already.]

Other books probably worth looking into:
  • Graham, On Lisp
  • Norvig et al, PAIP and AI: A modern approach
  • Kiczales, The Art of the Meta-Object Protocol
  • Sussman and Wisdom, Structure and Interpretation of Classical Mechanics (text available online)
  • Doets and van Eijck, The Haskell Road to Logic, Maths and Programming
  • Okasaki, Purely Functional Data Structures
Of course these are just some highly rated books, and not the literature, but my objective here is to survey some of the more mature parts of the field; the leading edge can wait.

It's all about the joy of new and ongoing learning, with the bonus of becoming more skillful and productive, and having a chance of successfully working with the coming generation of massively multi-core machines.

I would be interested in hearing expressions of interest, suggestions for format, plus any experiences and tips from anyone involved in such groups.

Sunday, 22 July 2007

XML and ASN.1

In 2002 -- 5 years ago! -- I spent a year working with Steven Legg on establishing a decent mapping between ASN.1 (Abstract Syntax Notation One, used in the directory standards) and an appropriate encoding in XML. Since then I have gone on to other things, but Steven has continued to work away, and last week this work reached IETF Request for Comments status:

        RFC 4910
        Title: Robust XML Encoding Rules (RXER) for Abstract Syntax Notation One (ASN.1)
        Authors: S. Legg, D. Prager
        Status: Experimental
        Date: July 2007


Congratulations, Steven!

Tuesday, 10 July 2007

Parsimony in Design

William Clinger writes in the introduction to the revisions to the Scheme programming language standards (my emphasis):
Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. Scheme demonstrates that a very small number of rules for forming expressions, with no restrictions on how they are composed, suffice to form a practical and efficient programming language that is flexible enough to support most of the major programming paradigms in use today.
It would be great to be able to similarly say:
Applications should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. Rationale demonstrates that a small number of rules for forming structures, with minimal restrictions on how they are employed, suffice to form a practical and efficient application that is flexible enough to support most of the major decision-making paradigms in use today.
Step 2: Make it so!

Monday, 2 July 2007

Taking stock

I have been working at Austhink now for a little under two years. I am not sure what my employee number is, but it is less than 10.

My job title says "Senior Software Developer", but Austhink is or was a start-up, and at a start-up everyone chips in wherever they can. What have I actually been up to?

Project Management / Team Leadership
  • I have been the senior hands-on guy in an Agile Team
    • Estimating and clarifying use-cases / user stories
    • Mentoring other team members (and learning from them too), largely through pair-work and mostly daily meetings
    • Working with Andy Bulka -- our Technical Director -- to create and sustain an effective and productive atmosphere
  • I have acted as second-level support
Software Design
I have contributed to the detailed design of Austhink's flagship product, Rationale, and the associated licensing system. Examples of my touch include my on-going crusade against modes, and searching for simplifications, such as the use of an integrated page-preview instead of a separate print-preview window. And let us not forget the humorous messages that appear on start-up.

I have made many more suggestions and performed many more experiments than have made it into the final product, but I believe that the objective is creativity and net output, rather than a high hit rate (with lower output).

Architecture
I have made several key contributions to the architecture of the software including:
  • Researched options and made technology recommendations. E.g.
    • Make: The development of a home-grown graph visualization layer (not using Windows Forms) rather than building on top of a third-party product.
    • Buy: The purchase of DotNetBar's Office 2007-style Ribbon Interface tool rather than using more traditional Windows menus, in early 2006
  • The use of a programming language syntax (Python) for our file-format -- instead of the more obvious choice of XML -- an idea borrowed from Lisp which has provided several dividends:
    • Our API became our file-format, rather than a separate interface
    • Identical format is used for copy / paste
    • Allowed an elegant solution (also mine) to the problem of forward-compatibility (i.e. opening up a file from a later version of the application in an older copy of the software leads to graceful degradation, including an error report)
  • Found a way to combine Windows drag-and-drop with our own system (almost, but not quite seamlessly)
  • Designed a functional-programming style animation sub-system that is being incrementally introduced into Rationale
  • The extension of the Command Pattern with nestable Begin and End "blocks", which solved the problem of placing compound, necessarily sequential multi-step actions on the undo/redo stack so that they are easily undo-able and re-doable (a minimal sketch appears after this list). This was a considerable improvement on our previous "solutions", which were slowing development and increasing complexity.
  • The use of digital-signing as part of our licensing-system
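
Here is a minimal sketch of the Begin/End idea in Python (invented names; not the actual Rationale code): commands executed between begin() and end() are collected into a compound, and the whole group then sits on the undo stack as a single, reversible step.

# Sketch of nestable Begin/End command blocks for undo/redo (illustrative only).
class Compound:
    def __init__(self):
        self.children = []
    def do(self):
        for command in self.children:
            command.do()
    def undo(self):
        for command in reversed(self.children):
            command.undo()

class UndoStack:
    def __init__(self):
        self.done = []   # completed top-level steps
        self.open = []   # currently open (possibly nested) compounds
    def begin(self):
        self.open.append(Compound())
    def end(self):
        self._record(self.open.pop())   # the closed block becomes one step
    def execute(self, command):         # command must provide do() and undo()
        command.do()
        self._record(command)
    def _record(self, command):
        if self.open:
            self.open[-1].children.append(command)
        else:
            self.done.append(command)
    def undo(self):
        if self.done:
            self.done.pop().undo()
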
Software Implementation
  • Have taught and coached the use of effective techniques such as Test-Driven Development, Design-by-Contract, and Refactoring to generate a fairly robust, efficient and featureful product in quick time.
  • Have written or co-written much of the algorithmic code myself, especially in the advanced aspects of the workspace area (drop-zones, overview window)
  • Have managed to stay out of some areas to allow junior colleagues some freedom to develop without "constant" interference
  • I have found and fixed or worked around several nasty bugs, including one doozy in Microsoft's .NET framework
Intellectual Property
  • I have liaised and collaborated with our PhD student Peter Sbarski on his algorithmic work
  • Invented a means for drawing "organic edges" used in Analysis Mode
  • Invented a repulsion algorithm (in conjunction with Ben Loft)
  • Invented a way to relax the Picture Rail Principle (which Peter has since generalized)
  • Helped draft a patent application
Business Stuff
  • Suggested simplifications to business processes, especially to licensing. E.g.
    • Observed that we did not need to define a separate product for upgrading ReasonAble users, but could use the Coupon system instead
  • Entered the winning suggestion for the name of the monthly Rationale newsletter, Ratatouille
  • Monitored the web for relevant trends (i.e. read reddit and Y-Combinator news :-)
People Stuff
  • Assisted the Technical Director in interviewing permanent staff and contractors
  • Initiated the tradition of bi-weekly lunches in Lygon Street
  • Contributed to the Rationale mailing list
  • Supplied festive food (e.g. cheesecake) on culturally significant occasions
  • Hosted the 2006 Austhink X-mas party

So that is some of what I have been doing for the last two years. On the personal side I became a father for the second time, received my second-degree black belt in Jiu-Jitsu (and first-degree in Judo), continued to run my martial arts club (now in its third year), started two blogs, and began to develop a very basic competence in spoken Hebrew.

Next stop is to plan a bit of what I would like to do in the next couple of years.

Goal #1: Completion of toilet training.

Sunday, 24 June 2007

The Medical Model of User Feedback

From a Business Week interview with Clayton Christensen, author of The Innovator's Dilemma:
In The Innovator's Dilemma you warn that the maxim "staying close to your customers" can lead you astray. Wouldn't a cursory reading of the book say "don't listen to your customers?"

You're exactly right. The cursory reading is "don't listen." The deep reading is you have to be careful which customers you listen to, and then you need to watch what they do, not listen to what they say.
The last part of this -- watch what they do / don't listen to what they say -- while perhaps superficially disrespectful, is a key part of what I call the Medical Model of User Feedback.

Symptoms and Signs
When a (medical) doctor examines a patient she will usually ask for symptoms -- what is the patient's experience? -- and look for signs -- her own observations.

Generally signs are regarded as the more significant, since the patient is typically neither particularly well-trained at interpretation nor unbiased. This is why watching people is invaluable when tuning software features.

By all means listen to User Feedback and requests, but consider it an early step towards revision and improvement, not the last word.

Thursday, 14 June 2007

Pareto and The Wedding Reception Principle

Lately I have been thinking about points of diminishing returns, and perfectionism.

The Pareto Principle or 80-20 Rule says -- among other things -- that roughly 80% of benefit is derived from 20% of the work. A related quip -- the ninety-ninety rule -- is, "the first 90% of a task takes 90% of the time, and the last 10% takes the other 90% of the time".

Now, for maximum productivity, one should always pull up short when the first 80% -- or thereabouts -- is covered, and move on to other tasks. But in practice there will be times when you need to go that last 20%. For example, if you are competing on quality, that last 20% is going to be important at least some of the time.

I have my own principle, The Wedding Reception Principle, which is even more useful than the Pareto Principle. It states that in any broad endeavour, if all aspects are up to a good standard then people will be struck by the excellence of one or two outstanding aspects. On the other hand, if any one thing is sub-standard, that is what people will remember, regardless of whether everything else is exceptionally good.

For example, at a wedding reception, if the food is bad, that will be what everyone talks about, not how great the speeches were or how much fun the dancing was. But if the food was merely ok, and the speeches outstanding, it will be the speeches that everyone remembers and talks about, and their general impression will be positive.

In other words: Make everything good, nothing bad, and a few things extraordinary. And pick those things that you intend to be extraordinary carefully, because you will be fighting Pareto and spending lots of time on them. It would be a shame if they turned out to be relatively unimportant.

Monday, 11 June 2007

Kids and Language

At time of writing my son is almost three-and-a-half, and my daughter is one-and-a-half.

Jake's English is exceptional, but he was slow to get started. Jake learned English vocabulary, structure, and some accent according to the Thomas the Tank Engine method.

He is now inventing words. My favorite so far is ignoying, as in "Daddy, stop ignoying me", to which I sometimes respond: "Jake you're ignoying me, too". I think it's a keeper. I must start using it around work :-)

Ella, on the other hand, is more precocious, already saying many words and even a few two-word sentences: "Herro Daddy". I have started trying to speak -- and latterly learn -- Hebrew in an effort to get Andrea to talk to the kids in her fluent Hebrew. It works this way: I speak in broken Hebrew and Andrea corrects me, and is reminded to use it a bit more, and apparently feels less self-conscious about her (perceived) lack of vocabulary.

[I am now trying to learn conversational Hebrew, and am enjoying making some progress, although I seem unable to reproduce the guttural "r" for love or money.]

Ella is already saying "Toh-dah!" (thank-you) with aplomb. Jake is more cool, sometimes imitating, but sans enthusiasm, and sometimes saying, "Don't speak to me like that!"

I think that if I keep talking Andrea will keep speaking and Ella will learn for sure. As for Jake, on this issue at least, I intend to continue ignoying him.

Sunday, 27 May 2007

Good User Feedback

Via Fiona (Austhink's Education Coordinator):
  1. Startup screen is great, but how do I get it back?
  2. Spell-check please!
  3. Some well-heeled schools are getting the powerful combo of tablet PCs for staff and all students in conjunction with a projector (in preference to smart-boards)
  4. Text pane open/close button goes missing
  5. Some lap-tops have neither mice nor track-pads, making dragging of maps painful
Observations:
  • Many (most?) secondary-school students kick off an argument map with a question
    • Often encouraged to do so by their teachers
  • Students asking for links between boxes -- other than hierarchical relationships -- within a map
  • Kids are becoming more visual and audial
  • Connection to the synthesizing mind and the disciplined mind in Howard "multiple intelligences" Gardner's latest opus, Five Minds for the Future.
Other positives:
  • .NET is becoming more pervasive (as more products require it)
  • Installation process is smooth

Abstract or Separate

As Rationale evolves we at Austhink -- the designers and programmers -- are again and again faced with a tension between on the one hand simplicity through separation, and on the other hand power through integration.

Each time we add a new feature we face the design decision of whether it should be deeply integrated into existing features, or tacked on somewhat separately. [Of course there is something of a continuum here.]

For example, the ability to add an image to any box generalizes the facility for images, which was previously available only for basis boxes (and was in that case compulsory). The generalization has the following consequences:
  1. Basis boxes may now have other images than the pre-defined ones
  2. Basis boxes are now less special than previously
I claim point 2 because previously basis boxes:
  1. Were visually distinct (on account of being the only boxes with images)
  2. Were the only terminal category of box
  3. Had a separate place in the epistemology of argument-mapping
Now that point 1 has been eroded, we are left with points 2 and 3. The argument in favor of these points is that they provide good "scaffolding" to ease the learning of the system, making them good for beginners, so they should be retained.

This argument is analogous to the following:
Bicycles are difficult to learn to ride on account of their instability, so all bicycles should have training wheels.
Of course, in the case of bicycles we allow the training wheels to be removed, and we provide tricycles for small children and even for adults with limitations to their balance or who failed to learn to ride a bicycle sans training wheels when young.

So, when examining simplicity vs. power trade-offs the bicycle metaphor may be a good source of inspiration.

Tuesday, 22 May 2007

The 10 Comapments

Today is Shavuot, which marks the giving of the 10 commandments to Moses (photo) and his homies.

To celebrate this occasion there will be cheesecake for morning tea. Why cheesecake?

And to demonstrate that Shavuot is not just about artery-hardening goodies, I have constructed a comparison chart of the original 10 commandments and a respected modern interpretation.



         | The Ten Commandments | The Ten Comapments
Given to | Moses | Dan
By | God | Some dude
On | Stone tablets | A slightly soiled napkin
Where | Atop Mt Sinai | Outside a small cafe in Carlton
1 | I am the Lord your God who brought you out of slavery in Egypt. | I am Rationale™ your mapping tool who freed you from the bonds of confusion.
2 | You shall have no other gods but me. | You shall have no other mapping software but me*.
3 | You shall not misuse the name of the Lord your God. | You shall not omit the little TM symbol.
4 | You shall remember and keep the Sabbath day holy. | You shall have a nice break between mapping exercises.
5 | Honor your father and mother. | Pay your subscription / buy the upgrades.
6 | You shall not murder. | You shall not own a Mac.
7 | You shall not commit adultery. | You shall follow the Holding Hands rule, but that's all!
8 | You shall not steal. | You shall use many sources, and give references.
9 | You shall not bear false witness against thy neighbour. | You shall not construct defamatory example maps about your colleagues.
10 | You shall not covet. | You shall not ask for too many new features at once.


*And bCisive.

Thursday, 17 May 2007

A suspect for murder

Joseph Laronge presents a useful comparison of two different styles of mapping applied to the problem of whether Bob is a suspect for murder.

The first is an argument map:

and the second uses Laronge's own "path-mapping" conventions which he calls "pyramid style":

I have taken the time to do my own argument map, in a way which gives, I think, the best of both worlds:
  • The structure is largely taken from Laronge's particular path map, but
  • The co-premises are pulled out, making it easy to show where the reasoning and evidence are open to challenge

It looks like argument mapping in a legal context is ripe for advancement. It will need people with skills in both mapping and the Law to work together to figure out how best to do it, both in terms of refining the method and conventions, and developing a sufficiently rich visual language.

In my example I would like to have been able to indicate through a strong visual device the following "legal concepts", which are somewhat implicit at present:
  1. A piece of cited law
  2. Which "side" is favored by each premise
This could be accomplished in a few ways, but I will not go into that point at this stage. I am more interested in finding out what else deserves to be reified in these kinds of maps.

Ideally, once the visual language is sorted out it should be possible to provide "road-maps" and templates that can be readily molded to reflect a particular case.

A big job indeed!

Wednesday, 16 May 2007

Clumping: The Missing Mode

Rationale -- the product that my working life revolves around -- currently has three mapping "modes": grouping; reasoning; and analysis.

Reasoning mode is for loose informal reasoning:


To tighten up the argument I use analysis mode, which allows me to break out the hidden premises:


Grouping mode allows me to create tree-like structures for non-reasoning purposes:


So while the reasoning and analysis modes provide specialized support for argument mapping, it is grouping mode that one turns to for all other tasks. I.e. Less sophisticated = more general.

And through the magic of our powerful "morphing" facilities it is possible to start working in one mode and then have Rationale convert the map into either of the other modes almost instantly. Very useful when you find yourself in the wrong mode!

So what's the missing mode?


In fact, it is possible to do reasoning in grouping mode, just without quite as much constraint as in reasoning mode, because fundamentally both of these modes manipulate "tree" structures.

What's missing is a mode which generalizes analysis in the same way that grouping generalizes reasoning. Let's call this new mode clumping or clustering.

I am not quite sure what the applications will be, but I am confident that they will emerge.

Monday, 7 May 2007

Spare Cycles and cyber-Citizenship

Chris Anderson points out that people seem to have an awful lot of time on their hands, or "spare cycles" to write blogs, author open-source and free software, write and edit articles for Wikipedia, etc.

In the 21st century it looks like this kind of volunteer-ism is the new Citizenship.

Naturally the amount of spare cycles available to the individual must vary. Amusingly, a commenter accuses Anderson of not having kids, but he replies that he has -- gulp! -- four young children!

My comment is that these are engaging, creative, altruistic efforts to which people are donating their spare cycles, and such endeavours give you a warm inner glow and beget more energy. Hence they benefit both the individual and society.

And -- up to a point -- they benefit employers too because the energy induced in the individual by this kind of engagement washes into the rest of the employee's day.

Of course the challenge for the deeper-thinking boss is how to get an even-higher level of interest and engagement in the official work than can be found in cyber-Citizenship. On the other side of the fence are the social-website entrepreneurs who are after those spare cycles for their own enterprises.

Thursday, 3 May 2007

Tipping Points and Diminishing Returns

I used to have a motto: "If something is worth doing, it's worth over-doing!"

While I have grown older and have somewhat resiled from that degree of extremism, I must concede that my younger self had a point. Here's why: Consensus is boring. If you want to be creative in your examination of issues and ideas, do not be content with looking at both sides of the coin. You will also need to turn it on its side, cut it in half, dye it blue, compare it with similar coins, learn its history, and ... you get the idea.

In the end you may come back to a fairly uncontroversial consensus, but you will have done so with a thorough understanding of the available options. You will have learned interesting stuff, and have acquired an understanding rather than a pre-digested second-hand overview. This will enable you to make decisions or recommendations from a solid base.

Application to Software Design
A feature request comes in or -- more likely -- becomes top priority. You decide to implement it in its simplest form. Is it actually useful? Or is it simply the case that it could be useful? In the latter case this is a token feature. Maybe it could become useful with further work, maybe it should be left in for a while (to find out for sure).

If a feature turns out to be fairly unused and is not the subject of ongoing requests then surely it is a candidate for removal. Of course this will annoy the 2% of the user-base who do use it, so the usual practice is to baulk at this degree of ruthlessness, and leave it in, or merely deprecate it (i.e. "we plan to remove it, say in 20 years time"). If removal would break backward compatibility, we tend to be very reluctant to remove a feature.

So maybe we did introduce the feature to serve a genuine need, but have not yet gone far enough. Perhaps there is a Tipping Point where with sufficient integration and polishing a feature or feature-set suddenly becomes compelling.

So again it becomes a case of tinkering, polishing, experimenting, and reflecting to get there: maybe incrementally; maybe with a sudden jump.

This approach risks that you overshoot and pass the point of diminishing returns, or end up throwing good money after bad. But your guts should tell you when to quit. I contend that, having committed to adding a feature, it is a greater danger to do a bit too little than to do too much.

In golfing terms: It is better to push a putt long rather than leave it short.

Practical exercise: Review your software and look for partially realised features. Mark them for removal or improvement.

Question: Given limited resources, is it better to have a smaller, thoroughly-realised feature-set, or a larger, partially-realised set?

Essays worth mapping

When I get some spare time -- ha! -- I plan to produce argument maps of some of the better essays (or excerpts thereof) that I can lay my hands on. Here is a list of a few that I came across recently:

Wednesday, 25 April 2007

Quantifying the Cost of Feature-Creep

A naive person might think that the cost of building a software product with 100 features would be roughly 100c, where c is the average cost of a new feature. Unfortunately the real cost is proportionate to at least the square of the number of features, and may be much worse. Here's why ...

A Holistic View
No feature is an island. It interacts with some of the other features.

An easy way to appreciate the consequence of this is to imagine that the software product is being constructed incrementally, feature by feature. [In fact, this is not a bad approximation of reality.]

Each feature may have an interaction with a pre-existing feature. Therefore not only do we need to design and implement the new feature, we need to determine whether it interacts with each of the pre-existing features, and possibly modify them to co-exist with the new feature.

The cheapest case
In the cheapest case -- I won't call it the best case for reasons which will be described below -- all features are independent, and we simply pay the cost of being careful, i.e. checking that the new feature is independent of all the other features.

So the interaction cost is 0 + 1 + 2 + ... + (n-1) for n features, which, as anyone who knows the famous story about Gauss can tell you, sums to n(n-1)/2, i.e. is proportionate to n².

So the overall cost is roughly n * average_isolated_cost + n² * average_consideration_cost
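
As a toy illustration of the cheapest case, here is a small sketch (the cost constants are made up) showing how the quadratic "checking" term eventually dominates the linear "building" term.

# Toy cost model for the cheapest case; all constants are illustrative.
def total_cost(n, isolated_cost=1.0, consideration_cost=0.05):
    interaction_checks = n * (n - 1) // 2   # 0 + 1 + 2 + ... + (n - 1)
    return n * isolated_cost + interaction_checks * consideration_cost

print(total_cost(10))    # 10 * 1.0 + 45 * 0.05 = 12.25
print(total_cost(100))   # 100 * 1.0 + 4950 * 0.05 = 347.5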

The priciest case
What if the features are not independent? Not only do we incur the cost of modifying code associated with pre-existing features, but this may trigger a cascade:

Adding the nth feature may require a revision of code associated with the (n-1)th feature, but this code may also have supported all the other previous features! For example, disregarding the design-level interaction between feature n and feature n-2, there may be an additional indirect coupling via feature n-1. And there may be more distant indirect interactions too.

So in the worst case, adding a feature involves adding the new feature, modifying the existing features to account for direct interactions, and modifying existing features to account for indirect-interactions.

Note to the reader: Please let me know if you figure out what a good upper bound is: My hunch is that it is proportionate to the factorial of the number of features.

Why the cheapest case is not the best case
In the cheapest case all features are independent. But in a cohesive software product you expect features to interact; so you would not want to design a product with this characteristic.

On the other hand you probably do not want all features to interact, because the end-result would seem overly dense, and consequently very difficult to learn to use.

Take home message: The degree of likely coupling of features is a consideration in both usability and cost.

What to do: Software Developers
The thought-experiments above do not reflect how developers actually determine how new features interact with existing features. However, determining these interactions is important. The following techniques may be of help:
  1. Reflection: A developer with a good theory of the product will be able to diagnose some interactions by thought, white-board, and poking around.
  2. Good Test Coverage: A good set of tests (e.g. built using TDD) will help by showing, through failures of existing tests, where a new addition interacts deleteriously, giving clues as to problems.
  3. Design by Contract: DBC-style assertions will give even better locality information than a test, by showing where in the code the violations of old assumptions occur (a small sketch follows below).
Of course, good abstraction and modularity of the code-base help too.
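
To give a flavour of point 3, here is a minimal Design-by-Contract-style sketch in Python (the function and the canvas bounds are invented for illustration): the pre- and post-conditions record old assumptions, so a new feature that violates one fails at the exact spot where the assumption lives.

# DbC-style assertions (illustrative): contracts point directly at the code
# where a new feature breaks an old assumption.
def move_box(box, dx, dy, canvas_width=1000, canvas_height=800):
    # Pre-condition: the box starts inside the canvas.
    assert 0 <= box["x"] <= canvas_width and 0 <= box["y"] <= canvas_height
    box["x"] += dx
    box["y"] += dy
    # Post-condition: the box is still inside the canvas.
    assert 0 <= box["x"] <= canvas_width and 0 <= box["y"] <= canvas_height
    return box

print(move_box({"x": 10, "y": 10}, 5, 5))   # fine
# move_box({"x": 10, "y": 10}, -50, 0)      # would fail the post-condition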

What to do: Software Designers
Of course the big message is to designers:

Choose your features carefully

The cost increases with the size of the product, so "just throwing things in" is a policy which will lead to great cost later. You can reduce this cost by aiming for smaller feature-sets that do more with less.

Good luck!

Do you want macros with that?

From an introduction to Scheme by Ken Hickey (my emphasis):
Just as functions are semantic abstractions over operations, macros are textual abstractions over syntax. Managing complex software systems frequently requires designing specialized "languages" to focus on areas of interest and hide superfluous details. Macros have the advantage of expanding the syntax of the base language without making the native compiler more complex or introducing runtime penalties.
Beautifully expressed. I say, "just give me the power, already", which most programming languages -- with honorable exceptions -- withhold.

Probably not everyone should be playing with this kind of power, but while I do not allow my 3-year-old son to light the candles, I hope that one day he will have the maturity and coordination to do the job. At that time I assume that matches will be available.

Thursday, 19 April 2007

Programming is Theory-Building

I was exposed to the notion that the activity of programming is intimately tied up with a hidden process of constructing a theory in an excellent Google Video talk by Peter Seibel, author of Practical Common Lisp. It appears that this notion has its origin with Peter Naur -- one half of the "Backus-Naur form" -- in his article Programming as Theory Building. Naur writes:
[P]rogramming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.
This insight has ramifications for the anthropology and life-cycle of large projects, where in order to produce modifications to an existing program without compromising its structure the programmers must understand the underlying theory. Since such an understanding is not easily acquired, we have, as a corollary, Fred Brooks' observation that adding more manpower to a late project makes it later: The people who understand the theory have to induct the newcomers, thus soaking up both groups' productivity.

Having had the experience myself of being brought late onto an existing project, I can appreciate first-hand how hard "learning the theory" can be. At other times I have developed sophisticated algorithms that I don't think anyone else who came after was game to touch!

Understanding and acknowledging this element of software development should be helpful in managing large-scale and long-term developments. Of course, since it accepts the human side of programming, no simple "solutions" follow, but its implications for quality assurance, recruiting, and -- as Naur points out -- the status of programming as a profession, are far-reaching.

Making a Splash

As we were counting down to the release of Rationale 1.3 -- later today, fingers crossed! -- a request came in to make the splash-screen more visually prominent and less static.


The prominence was increased by adding the green boundary, making it look like a Reason box, and this seems to work well, although it was a painful and frustrating process getting the rounded corners properly cropped.

Now, instead of using a graphic status bar, or having messages indicating where the program was up to in the still-unfortunately-quite-lengthy process of loading, I thought that it would be fun to inject this dead-time with a bit of mildly subversive humour. To my surprise, this proved popular at Austhink, and will be included in the release.

The messages that we show -- like the "Colouring in boxes" shown in the image -- are drawn from common logical fallacies, plus other suggestions from around the Austhink Office.

Motivation: Giving an estimate of time remaining -- however accurate -- or revealing details of the internal load-process is of very little utility or interest to the user. She wants it to load fast, and reminding her of how long it is taking, or what's going on is kind-of useless.

Alternative: We could have used it for advertising, but that's kind of lame. Instead, we list a whole lot of stupid things, mainly logical fallacies. This may be ignored (even some people at Austhink didn't get the joke at first!), mildly amusing (thereby improving the user's mood), or even inspirational (if she reverses some of the suggestions).

Here's the current list:
  • "Appealing to authority"
  • "Appealing to emotion"
  • "Appealing to common sense"
  • "Preparing silly questions"
  • "Searching for a biased sample"
  • "Reducing clarity"
  • "Reddening herrings"
  • "Lowering the bar"
  • "Making the same mistake twice"
  • "Reducing absurdities"
  • "Begging for questions"
  • "Launching ad hominem attacks"
  • "Biasing samples"
  • "Pretending to listen"
  • "Entering special pleadings"
  • "Finding middle ground"
  • "Ignoring whatever is most important"
  • "Colouring in boxes"
  • "Confusing fact for opinion"
  • "Creating contradictions"
  • "Generating inconsistencies"
  • "Contemplating inconsistencies"
  • "Building straw men"
  • "Burning straw men"
  • "Sliding down a slippery slope: Wheee-ee!"
  • "Searching for inferior alternatives"
  • "Reinforcing bad habits"
  • "Mistaking substance for appearance"
Technical note: Rather than shuffle, or show them in the same order, Rationale chooses a random starting place on the list each time it runs.
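
A minimal sketch of that scheme in Python (Rationale itself is not written in Python; this is just to show the idea): choose a random offset once per run, then cycle through the fixed list from there.

import itertools
import random

MESSAGES = ["Appealing to authority", "Reddening herrings",
            "Begging for questions", "Colouring in boxes"]   # ... plus the rest of the list above

def splash_messages():
    # Pick a random starting place once, then keep the original order.
    start = random.randrange(len(MESSAGES))
    return itertools.cycle(MESSAGES[start:] + MESSAGES[:start])

messages = splash_messages()
for _ in range(3):   # show a few while the application loads
    print(next(messages))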

Composing new messages is a bit like inventing haiku. I look forward to seeing how the Rationale user-base reacts to this "feature", and whether they start sending in their own suggestions.

Tuesday, 10 April 2007

Why I abstract

My boss jokes that programmers would rather build a tool to implement a feature than just implement the -- insert suitable expletive -- feature.

While this may seem like an occupational hazard, I prefer to look on it as an occupational pre-requisite. If you are not noticing the patterns in what you are doing, then the chances are you are doing the same thing over and over. And if you notice, but fail to act -- by building a tool, or a library, or a whatever to encapsulate the pattern -- you will start to get frustrated.

Here's what Richard Hamming -- see my last post -- had to say about the emotional aspect:
I was solving one problem after another after another; a fair number were successful and there were a few failures. I went home one Friday after finishing a problem, and curiously enough I wasn't happy; I was depressed. I could see life being a long sequence of one problem after another after another. After quite a while of thinking I decided, ``No, I should be in the mass production of a variable product. I should be concerned with all of next year's problems, not just the one in front of my face.'' By changing the question I still got the same kind of results or better, but I changed things and did important work. I attacked the major problem - How do I conquer machines and do all of next year's problems when I don't know what they are going to be? How do I prepare for it? How do I do this one so I'll be on top of it? How do I obey Newton's rule? He said, ``If I have seen further than others, it is because I've stood on the shoulders of giants.'' These days we stand on each other's feet!
Of course by taking the step back you learn more and do better work. Over-do it and you get analysis paralysis, or build a tool that does not solve the immediate problem. Under-do it and you are condemned to mediocrity. It's a Goldilocks thing.

When I have to think about a new design problem I like to explore the extreme possibilities. I know that the best one will lie somewhere in the realm of compromise, but it's exciting to explore the fringes. And often that is where the surprises and the learnings are.

The standard questions I ask -- in no particular order -- are:
  1. Is this a special case / generalization of something else?
  2. Is it similar to something else?
  3. If I can solve this can I apply it to another issue?
  4. What are the obvious approaches?
  5. What are the non-obvious approaches?
  6. Which is the simplest approach?
  7. Which is the most elegant approach?
  8. Does anyone else have any ideas about this (among my colleagues)?
  9. What does the literature / web say?
  10. Where does this lead?
  11. How does it interact with existing features?
  12. What's the underlying question / What's the real need?
  13. What are the corner cases?
  14. How would a functional programmer approach this?

Monday, 9 April 2007

Tolerance of Ambiguity

"If you think learning is hard, try unlearning."

In his wonderful speech -- You and Your Research -- the great computer scientist Richard Hamming has the following to say on the subject of ambiguity (my emphasis):
There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it. Darwin writes in his autobiography that he found it necessary to write down every piece of evidence which appeared to contradict his beliefs because otherwise they would disappear from his mind. When you find apparent flaws you've got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions.
It's almost as if the revolutionary thinkers have a much more fine-grained mental model of the world, one which allows them to cross the chasm of the unknown that lies between the current understanding and a "replacement theory".

Tuesday, 3 April 2007

Bootstrap your three-year-old

My boss is doing research into how young children acquire logic with his three-year-old daughter.

My similarly aged son is at the "Why?" stage. Whenever I explain anything he will ask "Why?". The follow-up explanation elicits a second "Why?", etc. It is an effective mode of interrogation.

Following this method we swiftly reach the depth of my knowledge or the limits of my patience and I say, "Why do you think, Jake?", and he replies "I don't know!", and I say "Neither do I!", and then I go back to whatever I was doing and he goes back to playing with his trains or doing something dangerous with his little sister.

Sometimes Jake does not restrict himself to a single initial "Why?" and instead says "Why? Why? Why? ... Why Why!?", thereby saving himself the trouble of timing the additional "Why?"s in our dialogue, and -- I like to think -- identifying himself as a passionate enquirer.

However, I am working on better, clearer, simpler answers. The challenge is to avoid abstract or unfamiliar concepts, in the hope that some of what he acquires is not just rote-learning.

Example: I stayed home to look after the kids on Monday while my partner went away for a two-day work-retreat. After we waved her good-bye Jake asked, "Where has Mummy gone?", and I explained that she had gone to "work camp", which was similar to the school camps that his older cousins sometimes attend.

Another example from breakfast that day:

Jake: Daddy, do you know that five and five is ten.
Daddy: Yes. And did you know that two and two is four?
Jake: Why?

Fortunately, at this instant I had just cut a passionfruit in half. I quickly halved a second one, lay the knife in between the two pairs.

Daddy: How many pieces are on this side of the knife?
Jake: One, ... two!
Daddy: And how many are there on this side?
Jake: Two!

Daddy whips away the knife.

Daddy: Now how many are there?
Jake: One, two, three, ... four!
Daddy: That's why.
Jake: [silence]


Thursday, 29 March 2007

Book ideas

Ideas for books to write:

"Make me think: The School for Genius":
  • A guide to figuring things out for yourself
  • Learning philosophy:
    • discovery learning
    • learning by doing
    • drawing connections
    • open-ended
    • verification (how do I know that's right?)
  • Guides the reader through the essence of several subjects through hints and puzzles
  • Unconventional approaches
  • Suitable for bright teens & life-long learners

Don't turn to the back of the book

Sometimes laziness has strategic advantages. When I was doing my PhD I did my literature review very late in the piece. This had the advantage of not contaminating me too much with existing ideas early on. And when I did get around to it I knew enough from my own trials and tribulations to be able to read the literature intelligently and critically.

I had a similar experience while teaching myself Operations Research in my first significant job in industry. I had done a one-semester course -- not long enough to learn too much -- and so I was able to approach the real-life problems with a degree of freshness, rather than trying to (mis-)apply the known "solutions".

Generally speaking, I advocate mastery of fundamental ideas and techniques. These are often the most portable and adaptable. By contrast, the advanced techniques can be quite specific, like an organism that has evolved to fill a very narrow niche.

Turning to the back of the book, listening to lectures etc. may seem faster, but taking the slow hard road is a richer path. And in saying this I am in excellent company ...

Clearly Feynman was of the "no, don't tell me the answer" school of learning:

The deal this time was that Feynman would teach Fredkin quantum mechanics and Fredkin would teach Feynman computer science [27]. Fredkin believes he got the better of the deal:

‘It was very hard to teach Feynman something because he didn’t want to let anyone teach him anything. What Feynman always wanted was to be told a few hints as to what the problem was and then to figure it out for himself. When you tried to save him time by just telling him what he needed to know, he got angry because you would be depriving him of the satisfaction of discovering it for himself.’
and
Feynman constantly emphasized the importance of working things out for yourself, trying things out and playing around before looking in the book to see how the experts have done things.

From Richard Feynman and Computation [pdf]

Tuesday, 27 March 2007

Books Wishlist

Here's a list of gift ideas for me.

Let me know if you recommend or hate anything on the list, or if you want to send me a copy!

Computing
Writing
  • Goldberg, Writing Down the Bones, "Wherein we discover that many of the 'rules' for good writing and good sex are the same"


More to come ...

Monday, 26 March 2007

Aesthetics, Programming and the Sex Link

How to code like a girl prompted this post.

My (male) friend’s piano teacher told him:

“If your hands do not look beautiful, your music cannot sound beautiful.”

The teacher was female.

Do aesthetics play a big part in programming? Absolutely.

Clearly the importance of aesthetics has not triggered a rush of women into the profession of programming.

Perhaps -- as in mathematics -- it is something to do with the abstract nature of the beauty involved?

I know that my partner -- whose hobby is making patchwork quilts (geometry) and whose addiction is doing Sudoku and Kakuro (NP-complete puzzles) -- is not interested in making the abstraction leap into hobby-programming in Prolog or Scheme. I, on the other hand, have made one quilt, solved one Sudoku and one Kakuro, and that was enough.

I did have some success getting my then seven-year-old niece interested in Logo Turtle programming, but this was compromised by her being "taught" computers at school. Maybe my daughter (aged 16 months) will be a better bet in a few years' time?

Interesting Solutions to SICP Chapter 1

I have started working through SICP chapter 1, using the Dr Scheme interactive environment. I will post my more interesting solutions and comments.

Ken Dyck gives a more complete solution set.


My solutions:

Exercise 1.3. Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.

(define (f a b c)
  (define (lt a b) (if (> a b) b a))
  (define (least a b c) (lt a (lt b c)))
  (define (square x) (* x x))

  (+ (square a)
     (square b)
     (square c)
     (- (square (least a b c)))))
Comment: Rather than finding the larger two explicitly, I find the smallest and use a simple mathematical identity.
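
A quick sanity check in DrScheme (the expected values below are my own working, not part of the original post):

(f 1 2 3)   ; => 13, i.e. 2^2 + 3^2
(f 5 5 1)   ; => 50 -- ties are fine, since only one copy of the least value is subtracted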

Exercises 1.17 & 1.18: Devise recursive and iterative algorithms for multiplication that take a logarithmic number of steps.
(define (mul-recursive a b)
  (cond ((= b 0) 0)
        ((even? b) (mul-recursive (double a) (halve b)))
        (else (+ a (mul-recursive a (- b 1))))))

(define (* a b)
  (define (mul-iterative r a b)
    (cond ((= b 0) r)
          ((even? b) (mul-iterative r (double a) (halve b)))
          (else (mul-iterative (+ r a) a (- b 1)))))

  (mul-iterative 0 a b))
Comments: Observe how similar the two solutions are. The iterative solution is especially easy to follow if you expand out an example.
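
Both versions assume double and halve procedures, which the exercise invites you to take as given. Here is a minimal sketch of them, together with an expansion of an example call (my own working, not from the book):

(define (double x) (+ x x))
(define (halve x) (/ x 2))   ; only ever called when x is even

; Expanding (mul-iterative 0 3 6) -- the invariant r + a*b = 18 holds throughout:
; (mul-iterative 0 3 6)
; (mul-iterative 0 6 3)    ; b even: double a, halve b
; (mul-iterative 6 6 2)    ; b odd:  fold a into r
; (mul-iterative 6 12 1)   ; b even
; (mul-iterative 18 12 0)  ; b odd
; => 18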

Good Traits in a Software Developer

Over at The Bleeding Edge Purumu has blogged about 5 traits of a good developer:

1. Curiosity
2. Good analysis skills
3. Patience
4. Abstract Thinking
5. Communications

He suggests that having these traits trumps knowledge of a particular technology and I heartily agree. Technologies change -- oy do they change! -- but these traits are key ingredients for valuable team-members in the software development biz.

I could add "initiative", "originality", "sense of humour", and "commitment to quality", but that might make the list a bit long!

The extent to which you can "put in what God left out" is a re-hash of the eternal nature/nurture debate, but I think it is clear that, given some of these traits and a willingness to learn, it is possible to train at least some of them up further.

Now, do these traits have a role to play in hiring, reviewing and professional development?

I think so. Joel Spolsky recommends hiring "smart people who get things done", which is also sound advice, but this breaks it down a bit further.

On any team it's good to have people who are particularly strong in the various traits, making the team very strong overall. But is it better to enhance one's weak points or to further develop one's strong points? Personally, I favor evenness of development, which goes towards making people into stronger generalists rather than specialists.

Why? It increases flexibility, so that the individual can tackle a greater variety of tasks and in particular complex tasks which require multiple abilities, which I believe will help keep job interest high. From a team perspective there is greater overall capability and flexibility in task assignment.

Now, it's not that I am against people having personal specialties; it's just that I think these will emerge naturally out of personal interest, while it can take a little bravery -- and encouragement -- to develop the areas in which one feels below par (and may have avoided working on for that reason).

Ultimately, development of traits such as these may shape roles and career paths. It all makes good food for thought.

Questions:
1. What kind of activities can enhance these traits?
2. Are these traits measurable or quantifiable?

Sunday, 25 March 2007

Let's Get Functional

Last week I watched Harold Abelson's introductory SICP lecture, courtesy of Google Video. SICP is short for Structure and Interpretation of Computer Programs, which is a much-copied course given at MIT, and the title of Abelson and Sussman's now classic introductory text to Computer Science. Besides the video lectures, the complete text is available online from the above link. Thank you!

Talk about insightful! This is what Computer Science is really about. It is probably more advanced than most undergraduates can cope with, but I am taking the time to look at these classic texts because I suspect that functional programming (FP) is about to become much more important.

Why? Besides the intrinsic pleasure in learning and extending oneself, multi-core CPUs are starting to go mainstream, and I believe that FP is our best hope to take advantage of a new age of parallel hardware.

So I plan to brush up by going through some / most of SICP (and doing the exercises), before moving on to one or more of Haskell (especially for monads), OCaml, and F#.

SICP draws many examples from mathematics, so with my background it looks very inviting. Others may prefer How To Design Programs, another freely available text.

* * *

Abelson says that when you look into a computer language, you should ask:
  • What are the primitives?
  • What are the means of combination?
  • What are the means of abstraction?
The primitives are the "building blocks"; the means of combination are the ways of putting the primitives together; and the means of abstraction are ways of building composites (out of primitives and other composites) when the primitives are insufficient. If the composites are then indistinguishable from the primitives, you have something truly powerful.
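
A toy Scheme illustration of the three questions (my own example, not from the lecture):

; Primitives: numbers and built-in procedures such as + and *.
; Means of combination: nesting expressions.
(* (+ 1 2) (+ 3 4))              ; => 21

; Means of abstraction: naming a composite so that it can be used
; exactly as if it were a primitive.
(define (average a b) (/ (+ a b) 2))
(average 3 7)                    ; => 5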

This prompted an interesting thought the following day:
Do these criteria also apply to computer applications?
More on this later ...

Wednesday, 21 March 2007

How programming and its theory can expand the mind

From the preface to Simply Scheme (my emphasis):

There are two schools of thought about teaching computer science. We might caricature the two views this way:

The conservative view: Computer programs have become too large and complex to encompass in a human mind. Therefore, the job of computer science education is to teach people how to discipline their work in such a way that 500 mediocre programmers can join together and produce a program that correctly meets its specification.

The radical view: Computer programs have become too large and complex to encompass in a human mind. Therefore, the job of computer science education is to teach people how to expand their minds so that the programs can fit, by learning to think in a vocabulary of larger, more powerful, more flexible ideas than the obvious ones. Each unit of programming thought must have a big payoff in the capabilities of the program.
I subscribe to the radical view, although my Computer Science education is more of a self-education, so I would say that it is a case of learning to expand one's mind appropriately.

Now, a further benefit of learning to program at increasingly high levels -- beyond the ability to deliver bigger and better programs -- is the accompanying mind-expansion. That this will have benefits in other areas of thinking and doing was a key thesis of Seymour Papert, and motivated his work with Logo etc., in the footsteps of Piaget.

Of course other disciplines may have similarly transferable benefits, but computer science / programming seem to be under-rated in the wider community in this respect.

For example, when I started a law degree -- later abandoned -- it seemed to me that a background in programming would be very helpful to whoever was drafting statutes. Thus far I believe that there is no such trend.

Contrast this with the preponderance of lawyers in parliament!

Tuesday, 20 March 2007

Inspirational Quotes

I wake up every morning determined both to change the world and have one hell of a good time. Sometimes this makes planning the day a little difficult.
E. B. White

Thursday, 15 March 2007

What Would a Functional Programmer Do?

As C# adopts features from functional programming languages it is becoming increasingly possible to program in an at-least-partially functional style as part of your day-job.

At the time of writing I am using C# 2.0, which has reasonable support for lambda functions in the form of delegates, but they are ugly because of strict typing and the absence of type inference. You need to think things out functionally and then translate into the clunky C# 2.0 notation.

Things will get better with C# 3.0 and a host of functional-inspired features, including limited type inference and cleaner notation to support LINQ.

Anyway, yesterday I was pairing with a colleague working on implementing some experimental enhancements to some core algorithms, and we were running into difficulties. Because the implementation was in C# we were thinking in an imperative way, and after running through a few options and hitting various dead-ends I asked Peter, "What would a functional programmer do?"

We were working on variations of an in-place algorithm, where a functional programmer would tend to write -- wait for it! -- a pure function. In our case a more functional approach was going to be a lot easier. Our context had blinkered us. And once you take a step back and put the functional hat on, other avenues open up.

So, here's a short list of things a functional programmer might do, and which are worth thinking about when you're trying to solve a programming problem:
  1. In-place modification or pure function?
  2. Higher-order functions: Does treating code as data lead to simplifications?
  3. Can monads help?
Of course, how much you can do may depend on the language support available. The last item is a reminder to get on top of the monads concept: "If you can't understand it, you won't recognize when you can use it!"
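
For the first two items, here is a minimal Scheme sketch of the shift in thinking (a generic illustration only -- not the C# algorithm we were actually working on):

; In-place thinking: walk a vector and overwrite each slot.
(define (scale-in-place! v k)
  (do ((i 0 (+ i 1)))
      ((= i (vector-length v)) v)
    (vector-set! v i (* k (vector-ref v i)))))

; Pure-function thinking: return a new list and leave the input untouched;
; the higher-order map does the walking for us.
(define (scale xs k)
  (map (lambda (x) (* k x)) xs))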

All of this is in aid of the ideals of better abstractions, clearer and more understandable code, leading to higher quality and greater satisfaction and success.

There's a paradox that one may need greater understanding to express things more simply, and that for someone with less understanding the higher, clearer expression may be incomprehensible. But that's really old news: look at the history of science and mathematics. Example: Want to know the area under a funny-looking curve? Easy if you know calculus. Don't know calculus? A simple explanation may be hard to come by!
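
The same point can be made in SICP's own terms: once you have a higher-order sum procedure, numerical integration is nearly a one-liner, yet that brevity only reads as "simple" if you already have the abstraction. (A rough sketch along the lines of SICP section 1.3, not code from the post.)

; Sum term(a) + term(next(a)) + ... up to b.
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a) (sum term (next a) next b))))

; Midpoint-rule approximation to the area under f between a and b.
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b) dx))

(integral (lambda (x) (* x x)) 0 1 0.001)   ; => approximately 1/3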

Now for the big question:

If Jesus were alive today, would he use a functional programming language? Who knows?

But Moses would, for sure.