Anticipatory design, a term coined by Ron Jeffries, describes the extra effort a software developer might expend to anticipate new or changing requirements. It’s interesting to attempt to model the cost of doing such work, particularly in light of XP and Agile methodologies. The goal is to come up with a rigorous basis for deciding when to do “extra” design up front.
A key variable in Jeffries’ model is R, the cost of designing a feature only when it is explicitly needed. Jeffries explores cases where R ranges from 1 (implying no extra cost) to 5 (i.e. doing the design later is five times more expensive).
We extend the treatment of R in two directions: scale and variability. We believe that 5 is by no means the upper bound on the cost of late design. We also believe that R comes in at least three variations.
We believe there are actually three distinct classes of R: R-normal, R-catastrophic, and R-XP.
R-normal is the R described by Jeffries in which doing the design later incurs a small but noticeable cost.
R-catastrophic is the case in which doing the design later incurs a huge cost.
R-XP is a class, surprisingly not described by Jeffries, in which doing the design later actually decreases the cost. This would occur when it turns out that a needed component has already been created by someone else (within the group, or as a third-party purchase or download).
The question becomes how to quantify the relative frequency of the three classes of R and to see how that changes the overall cost equation. We began by creating a model in which
R-overall = (R-normal × P-normal) + (R-catastrophic × P-catastrophic) + (R-XP × P-XP)

where P-normal, P-catastrophic, and P-XP are the fractions of design changes falling into each class (and together sum to 1).
We set R-normal to 5 (as per Jeffries), R-XP to -5 (i.e. a saving), and R-catastrophic to a huge value such as 1000 (that is the definition of catastrophe, after all).
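To make the arithmetic concrete, here is a minimal sketch of the model in Python. The function name, the default values, and the even split of the remaining frequency between the normal and XP cases are our own illustrative assumptions, not part of Jeffries’ treatment:

    # A minimal sketch of the weighted-cost model above. The even split of the
    # non-catastrophic frequency between R-normal and R-XP is an illustrative
    # assumption, not something specified by Jeffries.

    def r_overall(p_normal, p_catastrophic, p_xp,
                  r_normal=5, r_catastrophic=1000, r_xp=-5):
        """Weighted-average cost of late design across the three classes of R."""
        assert abs(p_normal + p_catastrophic + p_xp - 1.0) < 1e-9
        return (r_normal * p_normal
                + r_catastrophic * p_catastrophic
                + r_xp * p_xp)

    # Sweep the catastrophe frequency, splitting the remainder evenly between
    # the normal and XP cases.
    for p_cat in (0.10, 0.01, 0.001, 0.0001):
        rest = (1.0 - p_cat) / 2
        print(f"P-catastrophic = {p_cat:<7} -> R-overall = {r_overall(rest, p_cat, rest):.2f}")

With an even split the R-normal and R-XP terms cancel exactly, so R-overall collapses to 1000 × P-catastrophic (100, 10, 1, 0.1 for the frequencies above), which makes the dominance of the catastrophic term easy to see.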
Given these conditions, the cost of catastrophic change almost always overwhelms the R-normal and R-XP components. For example, even at a catastrophe frequency of 1%, the catastrophic term contributes 0.01 × 1000 = 10 to R-overall, double the largest possible contribution of the other two terms combined. Only when the frequency of catastrophic change becomes extremely low, or the definition of ‘catastrophe’ is made extremely mild, do the other components have a real influence on the overall cost.
The question then becomes: does the practice of delayed design, or XP as a whole, itself influence the occurrence rate of catastrophic design failure? This would have to be true to explain the successes being reported for projects using XP techniques. It also seems to be implied by Martin Fowler’s seminal work on refactoring. When discussing design, Fowler advocates doing just enough design to be confident that the resulting code will be refactorable. In other words, do enough design to leave the code resilient to change and thus resistant to catastrophe.
The conclusion to draw from this analysis runs somewhat against the conventional wisdom. Our analysis indicates that the benefit of XP is not so much that it is faster but rather that it is safer! XP is often considered a high-risk strategy, and yet our analysis shows the exact opposite. Done correctly, XP leads to just enough design to avoid the catastrophes that delay or destroy so many projects.
(portions taken from an unpublished paper written in collaboration with Morgan Creighton, 2001)