#NoEstimates Part 1 – Doing Scrum Without Estimates


Introduction

This is the first in a series of essays exploring the huge topic of estimation within software development projects.

There are many different contexts in which estimates are given, and I am going to try and cover off as many as I can think of in these blogs, but the pattern of my argument will remain consistent: I believe we ought not make decisions in software projects based on estimates and that there are better alternatives for both the suppliers of software products (financially and ethically) and their customers (internal and external). Many of these alternatives are being used in real companies delivering to real customers with great effect.

Given the vastness of the topic, this post focuses purely on the scenario of one Scrum (or other method of iterative product development) team delivering a software product without estimating. Issues of scaling up or down capacity (adding or removing teams) will be covered in a later post about estimating at the portfolio level.

Will we deliver on time?

This is a question that often gets asked of a software development team at the beginning and throughout a project, and is a key reason why many believe we need to estimate. However, the ironic twist of seeking predictability by making predictions based on guesses is not lost on most people. We all know, or at least suspect, that we’re plucking numbers out of thin air. That we don’t yet know or understand the solution. Or the domain. We comfort ourselves by calling our guesses “educated” or “quick and dirty”, to justify our using them to make important business decisions.

Building software is by its very nature unpredictable and unrepetitive. While building software we cannot easily break down the work into same-sized, repeatable widgets like we can when manufacturing car parts. Unlike car production, the exact product we are building is unknown until we’ve built it, so how can we break the work down into smaller parts up front? One increment of software is not like the next. Software development is a creative, variable pursuit, and solutions are often revealed as we go along. For this reason, fixing scope in software projects is not really possible. Even if it were, it is becoming widely accepted that attempting to do so is undesirable because such an approach does not allow for (or, at least, does not embrace) emergent design, requirements, change and innovation. If we accept that scope is always variable, we must also accept that the delivery date may end up as a moving goalpost while we scamper to deliver what we think is fixed scope “on time” and “on budget”.

So, if it is true to say the concepts of “on time” and “on budget” are usually based on an estimate of how long it will take (and how much it will cost) to build software to meet a fixed set of requirements, rather than a concrete time or budget constraint, it is likely fair to say that we may take longer to deliver the software than we initially estimated. Yes, we may also be quicker than we thought. Or we may get our estimate just right. But, regardless of the outcome, does it actually matter how “correct” our estimates were? Does the act of estimating our work have any impact at all, positive or negative, on the delivery of great software or its return on investment?

Vision is key

To build software we need a clear vision and shared purpose of what success looks like. When commencing with a potentially valuable software initiative we need well understood high level goals, not the detail of how we will achieve those goals. In true iterative fashion we can then align our just-in-time decisions about how we will improve the product in the next iteration (i.e. what we will build next, aka top items in the Product Backlog) with these goals. I posit that trying to estimate how long it will take to deliver software to achieve one or more high level goals, and then basing real decisions on this estimate, is a questionable approach. Don’t we want our solution and architecture to emerge? Don’t we want to welcome and embrace changes for the customer’s competitive advantage as the product evolves and becomes more real to the users? These are key principles in the Agile Manifesto and I believe they lie at the heart of a truly Agile approach to building software.

Remove the unknowns

Instead of depending on an accurate estimate for predictability we can take away the unknowns of cost and delivery date by making them… well, known. The Product Owner can fix the delivery date based on a concrete budgetary and/or time constraint (e.g. 3 days before the Australian Open starts for the Australian Open app is a concrete time constraint, and “we have to build something for $30,000” is a concrete budgetary constraint). Within that constraint the team can then fix incremental delivery dates (e.g. end of every Sprint) to allow focused effort on iterative product evolution (it’s not good to have priorities changing every day on a whim) and provide the opportunity to deliver early and/or under budget. This approach is also useful where there is no concrete budget or delivery date, although the need for interim release dates diminishes if the team (and organisation) is mature enough to have a continuous delivery model.
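The arithmetic here is deliberately trivial: with a fixed team, cost per sprint is known, so a budget constraint translates directly into a number of sprints. A minimal sketch (the $30,000 budget comes from the example above; the per-sprint team cost is a hypothetical figure):

```python
# With a fixed team, the budget constraint becomes a known number of sprints.
# The $30,000 budget is from the article's example; the $5,000-per-sprint
# team cost is an invented assumption for illustration.

budget = 30_000               # fixed budgetary constraint ($)
team_cost_per_sprint = 5_000  # known cost of the fixed team per sprint ($)

sprints_available = budget // team_cost_per_sprint
print(sprints_available)  # 6 sprints in which to deliver the best possible product
```

The unknown is no longer “how much will this cost?” but “what is the best product we can deliver in six sprints?”.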

Estimating sprint velocity is waste

Rather than fix the solution up front (which is required in order to give a “how long” estimate), or make forecasts every Sprint about how many points or stories will get done, I believe teams ought to commit at the outset to building and delivering the best possible product by a given date and/or for a given amount of money. For me, release planning using, e.g., velocity (“how many points can we deliver by the release date?”, or “what is our release date, given our remaining scope and velocity?”) is contrary to an iterative approach (holistic, evolutionary improvement of the product) and is more in line with a purely incremental approach (delivering a pre-defined Product Backlog feature by feature).

When we estimate and use velocity as a planning tool we are making an assumption of how much can get done in a time period. For that information to be useful and meaningful we need to have an amount of stuff in mind that we want to deliver (i.e. a fully estimated Product Backlog). I don’t think it would be too controversial to suggest that all the time (and therefore $$$) spent on estimating backlog items that do not end up getting delivered is waste (at least in the Lean sense).
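To make concrete the kind of velocity-based release planning this section argues against, here is the calculation in its simplest form (all numbers are hypothetical):

```python
import math

# Velocity-based release planning: the approach being questioned here.
# It only works if the Product Backlog has been fully estimated up front.
# All numbers are invented for illustration.

remaining_points = 120  # sum of estimates across the whole backlog
velocity = 25           # observed points delivered per Sprint

sprints_needed = math.ceil(remaining_points / velocity)
print(sprints_needed)  # 5 -> "we will deliver this scope in 5 Sprints"
```

Note what the calculation demands: an estimate on every backlog item, including the ones that may never be built, which is exactly the waste described above.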

But what about all the time and $$$ spent on estimating backlog items that do get delivered? To answer that question, I will ask one more question: “Did the PO ever prioritise one story over another based on it having a lower estimated cost (story point size)?” If the answer to this question is “No” then I conclude that all estimating in this context was waste because no decision was made based on the estimates that were given (instead the PO simply prioritised the highest value stories). If, however, the answer is “Yes” then estimates controlled what I believe should be value-based decisions. Estimating a backlog up-front and then release planning using velocity is a cost-based approach. While costs are obviously important in running a software project and, indeed, a business, if decisions are made purely on cost then some of the great software we use and rely upon today (e.g. much of what is made by Google, Facebook, Apple, Yahoo, Spotify, etc.) would never have been built and we would have one explanation as to why there is so much crap, expensive, bloated software in the world.

Iterate, don’t estimate

I believe iterative (Agile) development is 100% about making decisions based on customer and/or business value, using empiricism over guesswork and fixing cost by having a fixed team (a la the Spotify “squad” model) with known timeframes (frequent, predictable release dates as opposed to “deadlines”, which are release dates for “fixed” scope based on imaginary constraints). Knowing our costs and delivery dates gives us certainty which allows us to embrace the delicious uncertainty of building great software.

btw – Having a fixed delivery date doesn’t mean that we will necessarily stop building our product on the delivery date. We may have already stopped or we may choose to continue. What it does mean is that we will continually make go/no-go decisions based on the emergent or potential value of what we are building rather than estimating the cost of a particular solution.

Shift focus to “small”

From the team’s point of view, I believe it is far more valuable to get better at breaking down stories JIT (and only JIT – any earlier is potentially wasteful) to be as small as possible (or, at least, as is practically possible) than to “increase velocity”. For me, a high-performing team has the ability to deliver frequent “done” increments to the product that can generate immediate feedback and/or potential value for those using it. Clearly the smaller the increments the more frequently delivery can happen, which leads to shorter feedback loops and increased learning and flexibility for the PO to prioritise emergent features over features she originally thought she wanted/needed that have diminished in value, or even take a complete change in direction. This, in my opinion, is far more in tune with true business agility.

The importance of how many stories or points gets delivered in a Sprint becomes truly insignificant when the team is delivering frequent changes to the product and putting them in the hands of users. This, for me, is the crux of why software projects are trying to embrace an Agile approach. But until the estimation stops I believe we’re being held back from true high performance which can deliver awesome outcomes for customers.



14 thoughts on “#NoEstimates Part 1 – Doing Scrum Without Estimates”

  1. All famously good stuff. My hands on example is with PayPal and their approach to delivering features needed by the business on short cycles to handle the completely unpredictable demand from the market.

    On our flight avionics systems, the iterations are larger, but the instability of the requirements is the same. The total mission of manned space flight has changed 3 times over the life of the Orion program. Rolling Waves, iterative work packages, 44 day deliverables, astronauts in the same building as the engineers.

    But in the end all projects have a budget and the business people need to know what they are going to get for their budget. PayPal does an annual budget for the operating baseline, then teams commit to development within that budget on fine-grained cycles. They are essentially doing what you describe – in principle. Same for flight avionics. Teams get a budget flowed down from NASA, then partition the work into “chunks” (work packages of similar size), execute those packages, measure their productivity (efficiency) and use that to confirm the next rolling wave.

    But in all cases of spending other people’s money (internal for PayPal and external for NASA), some end-to-end set of needed capabilities must be in place, otherwise no one knows “why” we’re doing this. And an end-to-end (at least on the budget cycle) estimated cost, otherwise the CFO is not going to sign off on the balance sheet for the committed funds (FASB 86 rules in the states for commercial, FAR/DFAR rules for federal funding).

    The #NE approach has yet to address any of these governance issues, which is why for projects that spend non-trivial amounts of other people’s money it’s simply a non-starter.

  2. “Did the PO ever prioritise one story over another based on it having a lower estimated cost (story point size)?”

    I get your point on cost-based planning, but I don’t think it’s a bad thing that a story might be deprioritised in light of the cost of development; sometimes there might be 10 ideas of similar value, and the business will choose to have the one that costs least developed first.

    1. Thanks for your comment, David.

      One of the key issues I see with the situation you are advocating (i.e. using a comparison of estimated dev effort to make a go/no-go decision) is that the PO is trying to calculate ROI on something that I believe is too granular (a story or feature). Not only is this immensely difficult to do with any kind of accuracy, to then dismiss an option because it has been estimated as taking a little longer to develop than another option is foolhardy in my opinion.

      If a few days or even weeks of development cost is so significant such that an option is borderline not worth doing then I would argue that the potential value is not great enough to make the risk worthwhile. For this reason, I think if we are going to estimate at all it should only be at the higher initiative level where it is easier to make an assessment on which options we will pursue. We can also use more effective prioritisation techniques than estimated dev effort, such as cost of delay.

      Another flaw in the approach you advocate is that we do not “know” the cost of development – it is only an estimate (and as we know that is usually *far* from knowing). Even if we did know it for sure, the emergent value the initiative will generate (positive or negative) is even harder to estimate. We should certainly not assume that the ‘R’ part of ROI is going to remain consistent to our estimate, any more than the ‘I’ will.

      My take – fix ‘I’ by drip funding iterations at the portfolio level, see what ‘R’ emerges from the initiatives you are investing in and make continuous go/no-go decisions at that level, *not* at the story/feature level.
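The cost-of-delay technique mentioned in the reply above is often applied as CD3 (Cost of Delay Divided by Duration): rank options by how much value is lost per unit of time spent building them. A minimal sketch, with all figures invented for illustration:

```python
# CD3 prioritisation sketch: cost of delay / duration, highest first.
# Initiative names, dollar figures and durations are all hypothetical.

initiatives = [
    # (name, cost of delay per week ($), estimated duration in weeks)
    ("reporting revamp", 8_000, 4),
    ("checkout fix",     15_000, 2),
    ("new integration",  6_000, 6),
]

# A higher CD3 score means the option loses more value per week of delay,
# so it should be scheduled first.
ranked = sorted(initiatives, key=lambda i: i[1] / i[2], reverse=True)
for name, cod, weeks in ranked:
    print(f"{name}: CD3 = {cod / weeks:,.0f} per week")
```

Note this applies at the initiative level, as argued above, not per story; the duration figure is still an estimate, but a coarse one used to sequence options rather than to promise dates.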
