Wednesday 25 April 2007

Quantifying the Cost of Feature-Creep

A naive person might think that the cost of building a software product with 100 features would be roughly 100c, where c is the average cost of a new feature. Unfortunately, the real cost is proportional to at least the square of the number of features, and may be much worse. Here's why ...

A Holistic View
No feature is an island. It interacts with some of the other features.

An easy way to appreciate the consequence of this is to imagine that the software product is being constructed incrementally, feature by feature. [In fact, this is not a bad approximation of reality.]

Each feature may have an interaction with a pre-existing feature. Therefore not only do we need to design and implement the new feature, we also need to determine whether it interacts with each of the pre-existing features, and possibly modify them to co-exist with the new feature.

The cheapest case
In the cheapest case -- I won't call it the best case for reasons which will be described below -- all features are independent, and we simply pay the cost of being careful, i.e. checking that the new feature is independent of all the other features.

So the interaction cost is 0 + 1 + 2 + ... + (n-1) for n features, which anyone who knows the famous story about Gauss can tell you is n(n-1)/2, i.e. proportional to n².

So the overall cost is n * average_isolated_cost + n² * average_consideration_cost
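To make the arithmetic concrete, here is a tiny sketch of the cheapest-case model. The cost figures and function names are invented for illustration, not measurements from any real project:

```python
# A toy model of the cheapest case: n independent features, added one at a time.
# The cost constants are invented for illustration.

def cheapest_case_cost(n, isolated_cost=1.0, consideration_cost=0.1):
    """Total cost of building n independent features incrementally."""
    total = 0.0
    for k in range(n):
        total += isolated_cost            # design and implement the new feature
        total += k * consideration_cost   # check it against the k existing features
    return total

# The checking term is 0 + 1 + ... + (n-1) = n(n-1)/2, so it grows with n².
for n in (10, 100, 1000):
    print(n, cheapest_case_cost(n))
```

Even with a per-check cost only a tenth of the per-feature cost, the quadratic checking term dominates once the feature count gets large.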

The priciest case
What if the features are not independent? Not only do we incur the cost of modifying code associated with pre-existing features, but this may trigger a cascade:

Adding the nth feature may require a revision of code associated with the (n-1)th feature, but this code may also have supported all the other previous features! For example, quite apart from the direct design-level interaction between feature n and feature n-2, there may be an additional indirect coupling via feature n-1. And there may be more distant indirect interactions too.

So in the worst case, adding a feature involves adding the new feature, modifying the existing features to account for direct interactions, and modifying existing features to account for indirect interactions.

Note to the reader: Please let me know if you figure out what a good upper bound is. My hunch is that it is proportional to the factorial of the number of features.
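To get a feel for how fast the cascade might blow up, here is one deliberately crude toy model. The assumption (that revising any feature forces a revision of every feature added before it) is mine, chosen only to illustrate the blow-up; it is not a derivation of the true bound, and other models of indirect coupling could be worse.

```python
# A deliberately pessimistic toy model of cascading revisions. The assumption
# that every revision ripples back through all earlier features is mine, chosen
# only to illustrate the blow-up; it says nothing about the true upper bound.

from functools import lru_cache

@lru_cache(maxsize=None)
def revision_cost(k):
    """Cost of revising feature k, including the ripple through features 1..k-1."""
    return 1 + sum(revision_cost(j) for j in range(1, k))

def add_feature_cost(n):
    """Cost of adding feature n: build it, then revise every existing feature."""
    return 1 + sum(revision_cost(k) for k in range(1, n))

def total_cost(n):
    return sum(add_feature_cost(k) for k in range(1, n + 1))

for n in range(1, 11):
    print(n, total_cost(n))   # under this model the total grows roughly like 2**n
```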

Why the cheapest case is not the best case
In the cheapest case all features are independent. But in a cohesive software product you expect features to interact, so you would not want to design a product with this characteristic.

On the other hand you probably do not want all features to interact, because the end-result would seem overly dense, and consequently very difficult to learn to use.

Take home message: The degree of likely coupling of features is a consideration in both usability and cost.

What to do: Software Developers
The thought-experiments above do not reflect how developers actually determine how new features interact with existing features. However, determining these interactions is important. The following techniques may be of help:
  1. Reflection: A developer with a good theory of the product will be able to diagnose some interactions by thought, white-board, and poking around.
  2. Good Test Coverage: A good set of tests (e.g. built using TDD) will help: failures of existing tests show where a new addition interacts deleteriously, giving clues to the problems.
  3. Design by Contract: DBC-style assertions will give even better locality information than a test, by showing where in the code the violations of old assumptions occur (see the sketch below).
Of course, good abstraction and modularity of the code-base help too.
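To make points 2 and 3 concrete, here is a minimal invented sketch; the Document class and its invariant are hypothetical, not code from Rationale or any real product. The DbC-style assertion fires at the exact point where a new feature violates an old assumption, which is usually earlier and more local than a failing test.

```python
# A minimal, invented illustration of points 2 and 3 above; the Document class
# and its invariant are hypothetical, not code from any real product.

class Document:
    def __init__(self):
        self.items = []

    def _check_invariant(self):
        # DbC-style assertion: an old assumption, stated where it matters.
        assert len(set(map(id, self.items))) == len(self.items), \
            "invariant violated: the same item appears twice in the document"

    def add_item(self, item):
        self.items.append(item)
        self._check_invariant()

    def duplicate(self, index):
        # New feature: it naively re-appends the same object, so the assertion
        # above pinpoints the broken assumption the moment it happens.
        self.items.append(self.items[index])
        self._check_invariant()

# The test-coverage view of the same interaction: the existing test fails,
# telling us *that* something broke; the assertion tells us *where*.
def test_items_stay_distinct():
    d = Document()
    d.add_item("claim")
    d.duplicate(0)   # raises AssertionError at the point of violation

if __name__ == "__main__":
    try:
        test_items_stay_distinct()
    except AssertionError as e:
        print("caught:", e)
```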

What to do: Software Designers
Of course the big message is to designers:

Choose your features carefully

The cost increases with the size of the product, so "just throwing things in" is a policy which will lead to great cost later. You can reduce this cost by aiming for smaller feature-sets that do more with less.

Good luck!

Do you want macros with that?

From an introduction to Scheme by Ken Hickey (my emphasis):
Just as functions are semantic abstractions over operations, macros are textual abstractions over syntax. Managing complex software systems frequently requires designing specialized "languages" to focus on areas of interest and hide superfluous details. Macros have the advantage of expanding the syntax of the base language without making the native compiler more complex or introducing runtime penalties.
Beautifully expressed. I say, "just give me the power, already", which most programming languages -- with honorable exceptions -- withhold.

Probably not everyone should be playing with this kind of power, but while I do not allow my 3-year-old son to light the candles, I hope that one day he will have the maturity and coordination to do the job. At that time I assume that matches will be available.

Thursday 19 April 2007

Programming is Theory-Building

I was exposed to the notion that the activity of programming is intimately tied up with a hidden process of constructing a theory in an excellent Google Video talk by Peter Seibel, author of Practical Common Lisp. It appears that this notion has its origin with Peter Naur -- one half of the "Backus-Naur form" -- in his article Programming as Theory Building. Naur writes:
[P]rogramming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.
This insight has ramifications for the anthropology and life-cycle of large projects, where, in order to modify an existing program without compromising its structure, the programmers must understand the underlying theory. Since such an understanding is not easily acquired, we get, as a corollary, Fred Brooks' observation that adding more manpower to a late project makes it later: the people who understand the theory have to induct the newcomers, thus soaking up both groups' productivity.

Having had the experience myself of being brought late onto an existing project, I can appreciate first-hand how hard "learning the theory" can be. At other times I have developed sophisticated algorithms that I don't think anyone else who came after was game to touch!

Understanding and acknowledging this element of software development should be helpful in managing large-scale and long-term developments. Of course, since it accepts the human side of programming, no simple "solutions" follow, but its implications for quality assurance, recruiting, and -- as Naur points out -- the status of programming as a profession, are far-reaching.

Making a Splash

As we were counting down to the release of Rationale 1.3 -- later today, fingers crossed! -- a request came in to make the splash-screen more visually prominent and less static.


The prominence was increased by adding the green boundary, making it look like a Reason box; this seems to work well, although it was a painful and frustrating process getting the rounded corners properly cropped.

Now, instead of using a graphic status bar, or having messages indicating where the program was up to in the still-unfortunately-quite-lengthy process of loading, I thought that it would be fun to fill this dead-time with a bit of mildly subversive humour. To my surprise, this proved popular at Austhink, and will be included in the release.

The messages that we show -- like the "Colouring in boxes" shown in the image -- are drawn from common logical fallacies, plus other suggestions from around the Austhink Office.

Motivation: Giving an estimate of time remaining -- however accurate -- or revealing details of the internal load-process is of very little utility or interest to the user. She wants it to load fast, and reminding her of how long it is taking, or of what's going on, is kind of useless.

Alternative: We could have used it for advertising, but that's kind of lame. Instead, we list a whole lot of stupid things, mainly logical fallacies. These may be ignored (even some people at Austhink didn't get the joke at first!), found mildly amusing (thereby improving the user's mood), or even found inspirational (if she reverses some of the suggestions).

Here's the current list:
  • "Appealing to authority"
  • "Appealing to emotion"
  • "Appealing to common sense"
  • "Preparing silly questions"
  • "Searching for a biased sample"
  • "Reducing clarity"
  • "Reddening herrings"
  • "Lowering the bar"
  • "Making the same mistake twice"
  • "Reducing absurdities"
  • "Begging for questions"
  • "Launching ad hominem attacks"
  • "Biasing samples"
  • "Pretending to listen"
  • "Entering special pleadings"
  • "Finding middle ground"
  • "Ignoring whatever is most important"
  • "Colouring in boxes"
  • "Confusing fact for opinion"
  • "Creating contradictions"
  • "Generating inconsistencies"
  • "Contemplating inconsistencies"
  • "Building straw men"
  • "Burning straw men"
  • "Sliding down a slippery slope: Wheee-ee!"
  • "Searching for inferior alternatives"
  • "Reinforcing bad habits"
  • "Mistaking substance for appearance"
Technical note: Rather than shuffle, or show them in the same order, Rationale chooses a random starting place on the list each time it runs.
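For the curious, the behaviour in the technical note takes only a few lines. This is a reconstruction sketched in Python, not Rationale's actual code, and the names are illustrative:

```python
# A reconstruction (not Rationale's actual code) of the technical note above:
# keep the messages in their written order, but start from a random position
# each time the program runs.
import random

MESSAGES = [
    "Appealing to authority",
    "Appealing to emotion",
    "Colouring in boxes",
    # ... and the rest of the list above
]

def splash_messages():
    """Yield every message once, in list order, starting from a random place."""
    start = random.randrange(len(MESSAGES))
    for i in range(len(MESSAGES)):
        yield MESSAGES[(start + i) % len(MESSAGES)]

if __name__ == "__main__":
    for message in splash_messages():
        print(message)   # the real splash screen shows each one briefly
```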

Composing new messages is a bit like inventing haiku. I look forward to seeing how the Rationale user-base reacts to this "feature", and whether they start sending in their own suggestions.

Tuesday 10 April 2007

Why I abstract

My boss jokes that programmers would rather build a tool to implement a feature than just implement the -- insert suitable expletive -- feature.

While this may seem like an occupational hazard, I prefer to look on it as an occupational pre-requisite. If you are not noticing the patterns in what you are doing, then the chances are you are doing the same thing over and over. And if you notice, but fail to act -- by building a tool, or a library, or whatever, to encapsulate the pattern -- you will start to get frustrated.

Here's what Richard Hamming -- see my last post -- had to say about the emotional aspect:
I was solving one problem after another after another; a fair number were successful and there were a few failures. I went home one Friday after finishing a problem, and curiously enough I wasn't happy; I was depressed. I could see life being a long sequence of one problem after another after another. After quite a while of thinking I decided, ``No, I should be in the mass production of a variable product. I should be concerned with all of next year's problems, not just the one in front of my face.'' By changing the question I still got the same kind of results or better, but I changed things and did important work. I attacked the major problem - How do I conquer machines and do all of next year's problems when I don't know what they are going to be? How do I prepare for it? How do I do this one so I'll be on top of it? How do I obey Newton's rule? He said, ``If I have seen further than others, it is because I've stood on the shoulders of giants.'' These days we stand on each other's feet!
Of course, by taking a step back you learn more and do better work. Over-do it and you get paralysis-by-analysis, or build a tool that does not solve the immediate problem. Under-do it and you are condemned to mediocrity. It's a Goldilocks thing.

When I have to think about a new design problem I like to explore the extreme possibilities. I know that the best one will lie somewhere in the realm of compromise, but it's exciting to explore the fringes. And often that is where the surprises and the learnings are.

The standard questions I ask -- in no particular order -- are:
  1. Is this a special case / generalization of something else?
  2. Is it similar to something else?
  3. If I can solve this can I apply it to another issue?
  4. What are the obvious approaches?
  5. What are the non-obvious approaches?
  6. Which is the simplest approach?
  7. Which is the most elegant approach?
  8. Does anyone else have any ideas about this (among my colleagues)?
  9. What does the literature / web say?
  10. Where does this lead?
  11. How does it interact with existing features?
  12. What's the underlying question / What's the real need?
  13. What are the corner cases?
  14. How would a functional programmer approach this?

Monday 9 April 2007

Tolerance of Ambiguity

"If you think learning is hard, try unlearning."

In his wonderful speech -- You and Your Research -- the great computer scientist Richard Hamming has the following to say on the subject of ambiguity (my emphasis):
There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it. Darwin writes in his autobiography that he found it necessary to write down every piece of evidence which appeared to contradict his beliefs because otherwise they would disappear from his mind. When you find apparent flaws you've got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions.
It's almost as if the revolutionary thinkers have a much more fine-grained mental model of the world, one which allows them to cross the chasm of the unknown that lies between the current understanding and a "replacement theory".

Tuesday 3 April 2007

Bootstrap your three-year-old

My boss is doing research into how young children acquire logic with his three-year-old daughter.

My similarly aged son is at the "Why?" stage. Whenever I explain anything he will ask "Why?". The follow-up explanation elicits a second "Why?", etc. It is an effective mode of interrogation.

Following this method we swiftly reach the depth of my knowledge or the limits of my patience and I say, "Why do you think, Jake?", and he replies "I don't know!", and I say "Neither do I!", and then I go back to whatever I was doing and he goes back to playing with his trains or doing something dangerous with his little sister.

Sometimes Jake does not restrict himself to a single initial "Why?" and instead says "Why? Why? Why? ... Why Why!?", thereby saving himself the trouble of timing the additional "Why?"s in our dialogue and -- I like to think -- identifying himself as a passionate enquirer.

However, I am working on better, clearer, simpler answers. The challenge is not to use abstract or unfamiliar concepts, in the hope that some of what he acquires is not just rote learning.

Example: I stayed home to look after the kids on Monday while my partner went away for a two-day work-retreat. After we waved her good-bye Jake asked, "Where has Mummy gone?", and I explained that she had gone to "work camp", which was similar to the school camps that his older cousins sometimes attend.

Another example from breakfast that day:

Jake: Daddy, do you know that five and five is ten?
Daddy: Yes. And did you know that two and two is four?
Jake: Why?

Fortunately, at this instant I had just cut a passionfruit in half. I quickly halved a second one and laid the knife between the two pairs.

Daddy: How many pieces are on this side of the knife?
Jake: One, ... two!
Daddy: And how many are there on this side?
Jake: Two!

Daddy whips away the knife.

Daddy: Now how many are there?
Jake: One, two, three, ... four!
Daddy: That's why.
Jake: [silence]