Book Chat: Software Estimation

I just finished re-reading Software Estimation: Demystifying the Black Art by Steve McConnell. I was inclined to go back to it after an item at work came in at roughly 500% over our estimate. We thought it would be done two weeks after it started, even with the team only working on it part-time; it wasn’t finished until twelve weeks had passed. A lot has changed in my life and the development world since I read the book the first time, back in 2008. At the time I was doing contracting work, some of which was firm fixed price, so accurate estimates were much more immediately valuable to the company, in terms of both calendar time and man-hours. Today I’m doing software product work where estimates aren’t as immediately impactful on the bottom line, but they are still necessary to coordinate timing between teams and launch new features.

I wanted to understand how the item had gotten so badly underestimated. With McConnell’s help, I identified a few things we did wrong. First, when we broke the item down into its component parts, we missed several. Second, we got sidetracked onto other tasks several times while working on the item. Third, even though we had not done similar tasks before, we assumed it would be easy because it was not conceptually difficult. Each of these points became apparent after the fact, but I wanted to understand what we could have done better beforehand.

The three points that caused our delay are interrelated. We missed subtasks during the initial breakdown because we took the conceptual simplicity to mean we fully understood the problem. That made the work take longer than expected, and the longer we spent on the task, the greater the odds that something else would come up that needed attention, pulling us away and stretching the calendar time even further.

While most of the book is about estimation at the level of entire projects, some of the tips and techniques can be applied to smaller tasks, and several of them apply directly to the item we estimated so poorly. The first technique is to understand the estimate’s purpose; in this case, it was to let the other teams that depended on us know when to expect the results. The second technique is to understand how the support burden impacts the team’s ability to do work. There was a much higher than normal support burden on our team during this timeframe, because other teams were working on items that needed our input and assistance; we should have anticipated that. The third technique is to estimate each individual part separately, which allows individual errors to cancel each other out. This use of the Law of Large Numbers helps remove random error, although it still leaves in systematic error (like our missing some of the component parts). Since we did one estimate for the whole item, all of the error landed in one direction. Using these three techniques, we probably would have come in closer to 100 to 200% over the estimate.
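To make that third technique concrete, here is a quick simulation (my own sketch with made-up numbers, not an example from the book) comparing a single lump estimate against summing independent per-subtask estimates. Both share the same 20% systematic underestimate, but the per-subtask random errors largely cancel when summed:

```python
import random

random.seed(42)

# Hypothetical numbers, purely illustrative: 10 subtasks that each truly
# take 8 hours. Every estimate is off by a random +/-50% (random error)
# and is also 20% low across the board (systematic error).
TRUE_HOURS = [8] * 10
N_TRIALS = 10_000

def estimate(hours):
    """One noisy, biased estimate of a chunk of work."""
    random_error = random.uniform(0.5, 1.5)  # sometimes high, sometimes low
    bias = 0.8                               # always 20% low
    return hours * random_error * bias

lump_errors, summed_errors = [], []
for _ in range(N_TRIALS):
    true_total = sum(TRUE_HOURS)
    # One estimate for the whole item: all the random error lands in one place.
    lump = estimate(true_total)
    # Separate estimates per subtask: random errors partially cancel in the sum.
    summed = sum(estimate(h) for h in TRUE_HOURS)
    lump_errors.append(abs(lump - true_total) / true_total)
    summed_errors.append(abs(summed - true_total) / true_total)

print(f"avg error, single lump estimate:     {sum(lump_errors) / N_TRIALS:.0%}")
print(f"avg error, sum of subtask estimates: {sum(summed_errors) / N_TRIALS:.0%}")
# The summed estimate's error converges toward the shared 20% bias; the
# random part cancels, but the systematic part (like missed subtasks) does not.
```

The point is not the specific numbers; it is that splitting the estimate only helps with the random part, which is why the subtasks we missed (a systematic error) still hurt us.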

100% over the estimate is still bad. The biggest issue with our estimate was that we never stopped to fully discuss the task itself, because we thought we had a good enough understanding of the problem. Looking back at what happened, it seems that someone on the team had thought of most of the challenges we encountered, but nobody had thought of all of them, and we never effectively shared our individual observations. For example, the person who realized that each test case was going to be extremely time consuming to run was not the person who recognized the need for an additional set of test cases, so we did not realize just how time consuming testing would be until we were already doing it. We never stopped to discuss the assumptions behind the estimate because our individual estimates were all so close. If we had discussed those assumptions before generating our individual estimates, we could have arrived at a much better final estimate.

McConnell discusses the Wideband Delphi method as a way to avoid this sort of problem. It requires an upfront discussion of the assumptions, and if the estimators agree too readily, someone plays devil’s advocate to make sure they can defend their estimates against criticism. The individual estimates are averaged, and if the group unanimously accepts the average it becomes the estimate; if not, the process repeats until the group converges on an accepted answer. McConnell cites data showing that this method reduces estimation error by 40%, and he suggests using it when you are less familiar with whatever you are trying to estimate. He also presents data showing that 20% of the time the initial range of estimates does not include the final result, but that in a third of those cases the iteration moves the estimate outside of its initial range and toward the final result.
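Stripped down to its mechanics, the loop looks something like the sketch below. This is my own illustrative model, not McConnell’s exact procedure; the Estimator class, its tolerance, and the “move halfway toward the group” rule are all assumptions standing in for real people discussing their assumptions in a meeting.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Estimator:
    """Stand-in for a person in the meeting (purely illustrative)."""
    name: str
    gut_feel: float         # their current estimate, in person-weeks
    tolerance: float = 0.2  # accept the group average if it's within 20%

    def accepts(self, average: float) -> bool:
        return abs(average - self.gut_feel) <= self.tolerance * self.gut_feel

    def revise_toward(self, average: float) -> None:
        # After discussing assumptions, people typically move partway toward
        # the group; modeled here as a simple 50/50 blend.
        self.gut_feel = (self.gut_feel + average) / 2

def wideband_delphi(estimators, max_rounds=5):
    """Sketch of the Wideband Delphi loop: average the individual estimates,
    accept on unanimous agreement, otherwise discuss and go around again."""
    for _ in range(max_rounds):
        average = mean(e.gut_feel for e in estimators)
        if all(e.accepts(average) for e in estimators):
            return average
        for e in estimators:
            e.revise_toward(average)
    # No consensus after max_rounds: fall back to the last average.
    return average

team = [Estimator("A", 2.0), Estimator("B", 6.0), Estimator("C", 4.0)]
print(f"converged estimate: {wideband_delphi(team):.1f} person-weeks")
```

In the real process the convergence comes from the discussion between rounds rather than from a mechanical blending rule, but the structure of the loop is the same.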

Going forward we’re going to make sure to discuss the assumptions going into our more complex estimates. We probably won’t ever run a full Wideband Delphi, but on larger estimates we will be sure to discuss our assumptions even if we all agree on the answer the first time around. That seems like the best balance between time invested and accuracy for us.
