Estimating Features

In various posts on this blog, we’ve described the role that Features play in defining system behavior at the program level of the Big Picture.

Program Level of the Big Picture

In writing the upcoming agile requirements book, I needed to pull together the various hints and tips I’ve provided on estimating features into a more comprehensive view. This post is a preview of some of that work. I’m pushing it now so as to elicit comments (and maybe help someone actually estimate features in the meantime).


Estimating effort at higher levels of abstraction (features and epics) builds on the basic estimating model teams use for user stories, so a brief recap may be useful:

  • Teams typically estimate user stories using an abstract, relative estimating model based on story points. Story points can be measured in the abstract (unit-less but numerically relevant) or as Ideal Developer Days (IDDs), with the abstract measure being the most common.

(Note: For simplicity, many teams start with a basic model whereby a single story point is estimated to be one ideal developer day (IDD). Teams often allow 8 IDDs per two-week iteration, leaving two days for planning, demos, and other project and non-project overhead. So for many teams, a story point eventually equates to an IDD, which tends to normalize the team to a common baseline and simplifies cost estimating as well.)

  • The aggregate number of story points that a team can deliver in the course of one iteration is the team’s velocity. Velocity represents the amount of user value (number of story points) that a team can deliver in a given time box.
  • When a team’s size changes, or vacations or holidays occur, the team simply adjusts its expected velocity accordingly.
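The velocity adjustment described above is simple proportional scaling. As a minimal sketch (the function name and all figures are hypothetical, not from the post):

```python
def adjusted_velocity(baseline_velocity: float,
                      available_person_days: float,
                      normal_person_days: float) -> float:
    """Scale a team's baseline velocity by the fraction of capacity
    actually available this iteration (after vacations, holidays, etc.)."""
    return baseline_velocity * (available_person_days / normal_person_days)

# Example: a team with velocity 40 loses 10 of its usual 50 person-days
# to holidays, so it plans for a lower expected velocity:
print(adjusted_velocity(40, 40, 50))  # → 32.0
```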

After some number of iterations, teams will generally have a fairly reliable velocity. That allows them to make intelligent commitments for each iteration. It also provides the basic mechanism we need for estimating at the program level.

Estimating Features

Depending on where the item is in the program backlog, and how important the estimate is, a feature estimate can go through a series of refinements as the figure below shows:

Successive Refinement of Feature Estimates

Preliminary:  Gross, Relative Estimating

For initial backlog prioritization, the product management team may simply need a rough estimate of the effort to implement a feature, even before discussion with the teams. If so, they can use the same relative estimating model – the “bigness” of each backlog item is estimated relative to others of the same type. This provides a simple and fast estimating heuristic that can be used for the initial (but again, only relative) scoping and prioritization of work.
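One way to picture this gross relative sizing: each feature gets a “bigness” bucket relative to its peers, with no team involvement yet. The scale, feature names, and sizes below are all hypothetical:

```python
# Hypothetical relative scale for gross feature sizing.
RELATIVE_SCALE = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}

# Backlog items sized only relative to each other (illustrative data):
backlog = [
    ("Single sign-on", "L"),
    ("Export to CSV", "S"),
    ("Audit logging", "M"),
]

# Rank the backlog by relative size for initial scoping/prioritization:
ranked = sorted(backlog, key=lambda item: RELATIVE_SCALE[item[1]])
print([name for name, size in ranked])
# → ['Export to CSV', 'Audit logging', 'Single sign-on']
```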

Refined: Estimate in Story Points

At some point, it may be useful to refine the effort estimate by converting it into story points. With story points, the teams can start to reason about potential cost and schedule impact. For this, we need tracking information that allows us to convert features into story points. Fortunately, many agile project management tools support the feature-to-story hierarchy, and the teams can leverage this to build some historical data. A simple comparison of the new feature to the story points expended on a similar-sized feature provides this first refinement. This is still a fairly gross estimate, as it depends on comparing the new feature to like features for which there is historical data, but at least it’s in a currency that can be used for further analysis.
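The historical comparison above can be sketched as a simple lookup: find completed features of about the same relative size and use their actual story-point totals. The data and size labels here are invented for illustration:

```python
# Hypothetical history of completed features: relative size -> actual
# story points spent, as recorded in the project management tool.
history = {
    "small":  [60, 80, 75],
    "medium": [150, 170],
    "large":  [320, 400, 360],
}

def estimate_in_points(relative_size: str) -> float:
    """Estimate a new feature by averaging the actual story points of
    completed features judged to be the same relative size."""
    actuals = history[relative_size]
    return sum(actuals) / len(actuals)

print(estimate_in_points("medium"))  # → 160.0
```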

Final: Bottom-up, Team-Based Estimating

The estimates so far require only a minor time investment and they can be done by the product management team in isolation. That can be appropriate, based on the stage of the feature.

However, for any meaningful estimate, the fidelity must be significantly improved by having the estimating done by the teams. In any material program, there will be multiple teams, and they may or may not be affected by the new feature. Sometimes, only they know whether their module, feature, or component is impacted; therefore, only they can produce a more responsible estimate. They will typically have their own history of like features – and the story points required to complete them – in their project management repository.

However, since ad hoc requests for estimates interrupt the teams’ daily iteration activities, the estimating task is most efficiently done on a cadence. Many teams have a periodic story/feature preview meeting, which is designed, in part, for this purpose.

Estimating Cost

Once the feature has been estimated in the currency of story points, a cost estimate can be quickly derived. While the development teams themselves may not have ready knowledge of the cost of a story point, at the program level it is fairly straightforward to calculate one. Simply look at the burdened cost per iteration timebox for the teams who provided the estimates, and divide that by their velocity. This gives an estimate of the cost per story point for the teams affected by the new feature.
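As a minimal sketch of that arithmetic (the burdened cost and velocity figures are invented, not from the post):

```python
def cost_per_story_point(burdened_cost_per_iteration: float,
                         velocity: float) -> float:
    """Burdened team cost per iteration timebox divided by the team's
    velocity gives the cost of one story point."""
    return burdened_cost_per_iteration / velocity

def feature_cost(points: float, cost_per_point: float) -> float:
    """A feature's cost is its story-point estimate times the rate."""
    return points * cost_per_point

# Example: $50,000 burdened cost per two-week iteration, velocity of 40:
rate = cost_per_story_point(50_000, 40)
print(rate)                      # → 1250.0
print(feature_cost(160, rate))   # → 200000.0
```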

Additional work may be necessary when teams are distributed in differing geographic locations, as that can result in a highly varying cost per story point for individual teams.

Estimating Delivery Time

Finally, given an understanding of what percentage of the teams’ time the program is willing to devote to the new feature, we can also use historical data to predict how long it will take to deliver. For example, if Feature A was implemented in about 1,000 story points and took three months, and Feature B is a little bigger, then Feature B will take a little over three months, assuming a comparable time allocation.

As a further refinement, the program can also look at the current available velocity of the affected teams, make some assumptions about percentage time allocation, and derive a better time and schedule estimate from there.
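The proportional reasoning above can be sketched as follows; the function, its parameters, and all figures are illustrative assumptions:

```python
def estimated_months(new_points: float,
                     ref_points: float,
                     ref_months: float,
                     allocation_ratio: float = 1.0) -> float:
    """Scale a reference feature's duration by relative size.
    allocation_ratio is the share of team capacity devoted to the new
    feature relative to the reference (1.0 = same allocation)."""
    return ref_months * (new_points / ref_points) / allocation_ratio

# Feature A: ~1,000 points in 3 months. Feature B is ~20% bigger,
# with the same time allocation:
print(round(estimated_months(1200, 1000, 3.0), 1))  # → 3.6

# Refinement: if only half the capacity is allocated to Feature B,
# the schedule estimate roughly doubles:
print(round(estimated_months(1200, 1000, 3.0, allocation_ratio=0.5), 1))
```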


That’s it for this post and I’d really appreciate any comments people have on the feature estimating model I’ve outlined above.

The next post will most likely be focused on prioritizing features, or maybe I’ll get back to the kanban model for architectural epics, depending on the wind and the weather.


3 thoughts on “Estimating Features”

  1. Trying to estimate the time to do something in a project has always seemed like a “dark art” (guessing in the dark?) to me. In a fairly short webinar, a BA trainer estimated 8 clock hours for every “use case” discovered.
    So far, depending on the level it’s written at, an “informal” systems use case apparently is much like an Agile story, while a higher-level (business) use case with much less detail may not qualify as an Agile story. [I’m still learning; hope this is not too far off the mark.]

  2. Nice post – I definitely agree with the overall perspective. A few minor things to think about though…
    * In a 2-week iteration, a story point of one day is OK for new teams, but for high-performing teams it’s probably too large. By the time they are high performing, their largest story might be 2 days, because they learn about the danger of long-duration stories (i.e., if only one criterion is missed, the whole story is still failed, so they learn to break down stories)
    * Velocity-wise, I like to use a few different scenarios based on past history. For example, you can go by the last iteration, the last N iterations, the average for the release, or even a new forecasted velocity due to a new resource or a significant obstacle recently being overcome. The key is that the velocity calculation be easy and something that the team can understand. The more complicated it is, the harder it is to trust
    * A high-performing team, especially if they are very proficient at release planning, will ultimately get to a point where their iteration planning, demo, and retrospective take at most a day, and in some cases even half a day
    * The poster who mentioned that estimating is a dark art is accurate: the more you can simplify the estimating process, the better off everyone will be. Most importantly, you have to make it simple for the product management / product owner folks to understand and “trust”

  3. Hi Dean,

    This entry comes late for your book, but it would still be interesting to know your view on it. Your approach is interesting. A question though regarding step 1 – the relative estimating.

    You argue that “the ‘bigness’ of each backlog item is simply estimated relative to others of the same type”, and that product management can do this on their own. I.e. by judging that a new candidate feature is about the size of a previously implemented one, they know how big the new one is.

    But isn’t that a tricky assumption, that product management can actually judge whether such features are about the same size? At the end of the day, to be able to compare, you need to know what the technical implications are, and you don’t know that until you involve development.

    If you agree with that, isn’t this first relative estimating a pure guess?
