
Time Estimation in Software Development

Sunday, March 8th, 2009

Time estimation of software development efforts is usually wrong, and most often it underestimates the time required to complete tasks. Here are some contributing factors that lead to underestimation:

  1. Assumption of full 8-hour days to write code
  2. Incorrect or changing scope
  3. Imperfect knowledge of the problem
  4. Unanticipated changes in priorities (e.g. emergencies)
  5. Budgeting time to test, but not time to close issues found in testing
  6. Failing to time-box research
  7. Overoptimism, assuming nothing will go wrong

Assumption of a full 8-hour day to write code

Your day will have times when you are not writing code. Paperwork, e-mail, meetings, set-up work such as configuring test environments, and that extra 20 minutes taken for lunch all take time out of the day for development. Additionally, interruptions carry a significant penalty, because a developer pays a large cost getting back up to speed afterward. A good first assumption is 20% overhead: 8 hours out of every 40 are spent on overhead tasks, typically involving communication.
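
Below is a minimal Python sketch (my own illustration, not from the original post) of how that overhead figure changes an estimate: it converts hours of pure coding into elapsed working hours under an assumed overhead fraction. The function name and numbers are hypothetical.

    # Convert a pure-coding estimate into elapsed working hours,
    # assuming a fixed fraction of each day goes to overhead (20% by default).
    def adjust_for_overhead(coding_hours, overhead_fraction=0.20):
        if not 0 <= overhead_fraction < 1:
            raise ValueError("overhead_fraction must be in [0, 1)")
        return coding_hours / (1 - overhead_fraction)

    # A task estimated at 40 hours of heads-down coding:
    print(adjust_for_overhead(40))  # 50.0 elapsed hours once overhead is counted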

Incorrect or changing scope

Perhaps you missed some steps that need to be done? This is why developers like to have requirements written down. It amazes me how infrequently requirements are written down, and when they are, how often the statements rest on glaring assumptions. Additionally, people change their minds, the economic environment changes or new opportunities arise, and suddenly the scope of work changes, usually growing (though the time to deliver does not).

Imperfect knowledge of the problem

Teams waste work when people skip the part where they are supposed to figure out what the problem is, because it's more fun to code. There are many ways to understand the problem: use cases, design documents, user stories, requirements analysis, or simply talking to the customer. A common factor in success is a repeatable reference for the problem that everyone can consult and arrive at the same conclusion. Though it sounds easy, it's surprisingly hard and time-intensive to communicate in a way that eliminates assumptions.

Unanticipated changes in priorities (e.g. emergencies)

People organize for the common scenarios, i.e. business as usual. If everyone knew about and was adequately prepared for an emergency, it wouldn't be an emergency. Those who try to prepare may be seen as wasting time and resources, which can be true. Sometimes short-term business needs re-allocate resources, and the best thing to do is actively communicate upwards the effects of changing priorities before, during and after the change. How do you communicate before the change? I communicate time estimates with the explicit assumption of unchanging priorities. Once priorities change, I state the effect. You can't assume everyone knows the effect of the change, or even knows about the change.

Budgeting time to test, but not time to close issues found in testing

It may take a week to go through all the tests, but what if bugs are found? First the bugs have to be reproduced, documented, fixed and verified; then, ideally, the tests start from the beginning and the cycle repeats. Typically, the larger the number of bugs, the more test cycles are needed, though the complexity of the fixes is a better indicator. How many test-fix cycles will you need? It's a hard question to answer, and keeping your software quality up and your technical debt down is the best way to keep the number of test cycles low. A rough way to budget for this is sketched below.
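
As a rough illustration (again my own, with assumed numbers rather than anything from the post), budgeting for repeated test-fix cycles instead of a single pass might look like this:

    # Total testing hours when every cycle re-runs the full test pass
    # and all but the last cycle also needs time to fix what was found.
    def testing_budget(test_pass_hours, expected_cycles, fix_hours_per_cycle):
        return expected_cycles * test_pass_hours + (expected_cycles - 1) * fix_hours_per_cycle

    # A 40-hour test pass, expecting 3 cycles with ~16 hours of fixes between runs:
    print(testing_budget(40, 3, 16))  # 152 hours, versus the 40 budgeted for one week of testing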

Failing to time-box research

Some problems are not straightforward. Many solutions may need to be evaluated, or sometimes you need to figure out a new way of doing something. You can generalize developers into two categories: production and prototype. Production programmers will usually take much longer, because they want the ideal solution. Prototype developers take much less time, but their solution usually has limitations. Time-boxing research is a way to set expectations with developers. If you know a developer's ability, time-boxing is also a coarse way to control the quality of the design and implementation.


Overoptimism, assuming nothing will go wrong

Helmuth von Moltke is often credited with the maxim, “No battle plan survives contact with the enemy.” If you account for that, perhaps by giving yourself a time buffer for problems, you can absorb many, but likely not all, setbacks.

Conclusion

On a high-risk project, I convinced the team to account for all of the above when estimating effort for software development. We easily met our goals; so easily, in fact, that management accused the team of padding the estimates. Under pressure, we went back to the old way of estimating (essentially ignoring all risk). Projects were delivered late, and management changed its grumbling from “lazy, padded estimates” to “lazy developers need to work harder.” In software development, there is a tendency to shoot the messenger (i.e. the developer) for accurate estimates. Software is expensive, and many companies would rather go into it not knowing that.
