On individual projects I have usually found that a re-write leads to smaller, cleaner, faster solutions.
I attribute this to learning acquired from the previous attempts/versions, which I can subsequently incorporate in the form of improved abstractions and better trade-offs.
Some re-use of "golden nuggets" from earlier iterations may also be possible, and is certainly desirable.
Some lessons learned can be incorporated incrementally through re-factorings, but there are times when incremental improvement takes you to a local maximum and traps you there.
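To make the local-maximum point concrete, here is a toy sketch of the analogy (in Python; the quality function and step size are invented purely for illustration, not drawn from any real project):

    import math

    def quality(x):
        # Invented objective with two peaks: a local maximum near x = 2
        # and a higher global maximum near x = 8.
        return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

    def hill_climb(x, step=0.5):
        # Greedy incremental improvement: move to a neighbouring point
        # only if it is strictly better; stop when neither neighbour is.
        while True:
            best = max((x - step, x + step), key=quality)
            if quality(best) <= quality(x):
                return x
            x = best

    x = hill_climb(0.0)
    print(f"stuck at x = {x:.1f}, quality = {quality(x):.2f}")      # x = 2.0, quality 1.00
    print(f"higher peak at x = 8.0, quality = {quality(8.0):.2f}")  # quality 2.00

The greedy climber stalls at the lower peak because every small step away from it makes things worse; reaching the higher peak takes a jump bigger than any single step, which is the analogue of the re-write.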
On big commercial projects other considerations come into play. Until the new code-base is up, you need to contend with the cost of parallel development, and the greater the legacy, the longer this period will be. Unless, of course, your new abstractions are brilliantly efficient, and/or you can cut away a lot of stuff that was never needed.
Once a project is sufficiently large, given finite resources, it may eventually be too late to ever re-write!
Here's some more useful discussion of these issues by Adam Turoff, prompted by a survey question from Ovid.