- Do you want to do Agile Software Development? Yes – go to 2. No – GOODBYE.
- Is your team regularly reflecting on how to improve? Yes – go to 3. No – regularly meet with your team to reflect on how to improve, go to 2.
- Can you deliver shippable software frequently, at least every 2 weeks? Yes – go to 4. No – remove impediments to delivering a shippable increment every 2 weeks, go to 3.
- Do you work daily with your customer? Yes – go to 5. No – start working daily with your customer, go to 4.
- Do you consistently satisfy your customer? Yes – go to 6. No – find out why your customer isn’t happy, fix it, go to 5.
- Do you feel motivated? Yes – go to 7. No – work for someone who trusts and supports you, go to 2.
- Do you talk with your team and stakeholders every day? Yes – go to 8. No – start talking with your team and stakeholders every day, go to 7.
- Do you primarily measure progress with working software? Yes – go to 9. No – start measuring progress with working software, go to 8.
- Can you maintain pace of development indefinitely? Yes – go to 10. No – take on fewer things in next iteration, go to 9.
- Are you paying continuous attention to technical excellence and good design? Yes – go to 11. No – start paying continuous attention to technical excellence and good design, go to 10.
- Are you keeping things simple and maximising the amount of work not done? Yes – go to 12. No – start keeping things simple and writing as little code as possible to satisfy the customer, go to 11.
- Is your team self-organising? Yes – YOU’RE DOING AGILE SOFTWARE DEVELOPMENT!! No – don’t assign tasks to people and let the team figure out together how best to satisfy the customer, go to 12.
The other day I wrote a post entitled “Babies, Bathwater and #NoEstimates”. A gentleman named Trevor Nelson was kind enough to read my post and ask some very considered questions.
I started answering them as a reply in the comments section and quickly realised my answers were becoming worthy of a fully fledged post. And here it is.
Q: What other reasons or intent have you found [for why estimates are requested]?
A: Certainly, estimates are requested to find out the cost and/or delivery time of something. They are sometimes requested to determine “what can be done” in a given timeframe, based on a maximum budget or market deadline (although this is rare IME – normally if there is a deadline there is also fixed scope :/)
What exactly are we estimating?
What I’m really getting at here is that it is crucial to understand the actual information the requestor needs. For example, perhaps the requestor wants to evaluate options for delivering a particular customer outcome, and what is possible within certain timeframes/cost boundaries, rather than just “how long”. When the request is simply “please estimate this” (as is often the case), the team tends to go away and scope out a complete backlog of stories (a project), and then comes back with a delivery date or an effort estimate.
The actual job-to-be-done (from the customer lens) has now been framed in a different way, not matching the intent of the requestor due to flawed assumptions in the estimation process. Perhaps there are one or more small experiments the team can run to validate whether it is worth spending more money (i.e. an MVP)? Perhaps the team can come up with a way of solving the problem in a different way than they are being asked to in a timeframe that would make the project worthwhile?
Another issue is that an effort estimate doesn’t actually tell the story of delivery timeframes. The real delivery time required depends on current priorities, WIP and lead time. Often when a stakeholder asks “how long” and they are told “6 weeks”, they believe the clock has now started, and expect to see the outcome in 6 weeks. But the reality may be that the team won’t have even started the work in 6 weeks time due to existing WIP and priorities.
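The gap between "effort estimate" and "real delivery time" can be made concrete with Little's Law (average lead time = WIP ÷ throughput). The sketch below uses made-up numbers, not data from any real team, purely to illustrate how existing queued work delays the start of anything new:

```python
# Little's Law sketch: average lead time = WIP / throughput.
# All numbers below are illustrative assumptions, not real team data.

def lead_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    """Average time before a newly queued item is delivered, given the
    items already queued or in progress ahead of it."""
    return wip_items / throughput_per_week

# A team finishing 3 items per week, with 18 items already queued/in progress:
queue_ahead = 18
throughput = 3.0

wait_weeks = lead_time_weeks(queue_ahead, throughput)
print(f"New work delivered roughly {wait_weeks:.0f} weeks out, before any effort is added")
```

With these assumed numbers, a "6 weeks of effort" answer could still mean roughly 6 weeks of queueing before the effort clock even starts, which is exactly the mismatch the stakeholder experiences.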
So it is very important for the team to understand whether the requestor is looking for an effort estimate in order to make an ROI calculation or whether they actually are looking for the real timeframes involved. This is a hugely important distinction but is rarely considered IME.
All or nothing
Another example I’ve seen in the real world is “Estimate y – if it comes in at more than x, we will not do it”. Estimate comes back as more than x. Team is asked to estimate again, with more detail. Still more than x. So y is ditched. However, looking beneath the covers, there are small nuggets of value within y that would likely take only a few weeks, but because the whole request has been batched up as a project, the company and its customers lose out on doing something valuable.
Q: What happens when you don’t agree, or like, the intent behind the request?
A: I don’t think anyone is ever asking for estimates for sinister reasons, so I can’t imagine not agreeing/liking the intent per se. However, I always question the reason behind the request (for the reasons cited above), and sometimes it is clear that taking 1 or more teams away from what they are working on to scope/analyse/estimate a project or feature is not the right thing to do. For example, if there is an unmovable deadline then the work needs to be done regardless of how long the team think it will take, so to what decision is the estimate contributing?
Just-in-time analysis and Lean Canvases
Another reason to question a request to estimate might be that the work does not have a compelling business case behind it, or the rigour has not been put in to demonstrate/articulate the customer problem and business value (e.g. with a Lean Canvas), or there is no capacity to do the work based on current priorities. If further delving into the latter revealed that the estimate will aid a trade-off decision against a piece of work currently prioritised then it might be legitimate to ask the team to do this as part of their regular sprint planning/grooming.
Often teams are asked to spend a lot of time estimating work that has no business disrupting the team at all – work that would never warrant being done, even if it turned out to be only 1 day of work.
Bottom line, I’m not sure there is ever a good reason to simply say “no” to a request for an estimate. However, an authentic, adult conversation about what information is being sought is not an unreasonable thing to aim for in my view.
Q: Don’t those paying the fee have a right (and expectation) to know when they’ll be getting the product for which they’re paying, and how much it will cost?
A: Short answer: Yes! My point here is around the word “commitment”. An estimate should never be treated as a commitment or a promise. It is an estimate. The right thing to do is to continually update the estimate as we learn more information, and keep an open dialogue with the customer to make sure risks are surfaced as we learn about them so there is plenty of time to mitigate them. Customers/stakeholders hate surprises (as do development teams)!
Stability ==> Predictability ==> Ability to commit to prices/timeframes/outcomes
That said, I do believe that we should focus on getting really good at building the things we build, and understanding how long those things take and to what level of quality. To do this, we need to have the discipline as a company to build stable teams and deeply understand the types of work we do. Unfortunately this is rarely the case. You can read about the “barriers to entry” to #NoEstimates in a post I wrote a couple of years ago.
I believe we should be able to make commitments of price or time to our external customers. I talk about this in another post. Internally, we can get there, but not with the prevailing way that things are done. The prevailing way things are done requires being predictive and deterministic rather than empirical and probabilistic, and is generally in a low trust environment.
Do the best you can
In such an environment, the most we can do is commit to building the best possible product in a given timeframe (my preferred Agile approach). We can also commit to delivering a particular outcome, assuming we are given the time to deliver that outcome. We can always commit to doing our best. What we should not be doing is committing to an estimated timeframe to deliver a fixed set of scope.
Everyone is doing all they can, given the situation they are in. In suboptimal and complex environments, it is expedient for us to iterate our way to solving problems and producing great outcomes, managing our options and priorities as we go.
I believe if we could leverage such an approach more readily then we would be far stronger as a software development industry.
Thanks again Trevor, I hope that answers your questions.
OK, hopefully this post will put an end to the endless straw man arguments and attacks made against anyone who dares to tweet with the hashtag #NoEstimates, or who says that an estimate is a different beast from a forecast.
It probably won’t though. (Is that an estimate or a probabilistic forecast? Hmm – I digress)
FACT: It is possible, and sometimes necessary, to estimate software tasks and projects;
Yes, please read that again and let it sink in so I don’t have to defend myself against that old chestnut again. That said, I strongly believe that people very often estimate as force of habit rather than to generate valuable information, and estimation is certainly not the immutable practice that many seem to think it is. Often, the “need” to estimate is purely because our job depends on it rather than it being a genuine need.
FACT: Questioning the intent behind a request for an estimate is the professional thing to do;
Saying “no” when asked for an estimate is not the same thing, and has never been advocated by anyone I know. Failing to question a request for an estimate – to get more information and understand the true need behind it – will probably lead to a mismatch in expectations and bad outcomes.
Nor is it true that someone is going to look at a #NoEstimates tweet and say “Right, that does it, I’m saying No to estimates!”.
FACT: #NoEstimates is a Twitter hashtag and was never intended to become a demand, a method or a black-and-white truth;
In fact, it was never intended to be anything, as is usually the case when people decide to hashtag something in their tweets. #justsaying #NoSQL
FACT: The #NoEstimates hashtag became something due to the interest it generated;
Nothing more sinister than that, I’m afraid. The topic has resonated with thousands of people across the world. Great. But, bizarrely, people continue to attack the early people who used it, such as myself, Woody Zuill and Vasco Duarte, saying “you should have called it something different”.
FACT: A forecast is a type of estimate, whether probabilistic, deterministic, bombastic or otherwise;
This does not mean that forecasting is the same as estimating. Another frustrating argument I’ve had today on Twitter, so wanted to clear that one up.
FACT: Forecast-ing is distinct from estimat-ion, at least in the common usage of the words, in that it involves using data to make the “estimate” rather than relying on a person or people drawing on “experience” or guessing;
It’s the difference between saying “I think this will take our team 4-6 weeks because I did a similar thing once that took 5 weeks” and “I think this will take 4-6 weeks based on our cycle time metrics for this team, current WIP levels, capacity and priority of this feature”.
Another type of estimate is “Fuck knows, at least a month… better make it two… but honestly, fuck knows“. Sometimes that’s fine, but it’s an estimate and not a forecast. “I think it will piss down tomorrow” is me estimating the weather tomorrow when I am planning to go to the beach. I’ll leave the weather forecasts to the experts who study air pressure, weather fronts, satellite imagery and other real data.
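The data-driven, cycle-time-based forecast described above is often implemented as a Monte Carlo simulation. Here is a minimal sketch of that approach; the historical cycle times are invented sample data, and it assumes stories are delivered one at a time (no parallel WIP), which is a simplification:

```python
# Minimal Monte Carlo forecast from historical cycle times.
# The history below is made-up illustrative data, not a real team's metrics.
import random

historical_cycle_times_days = [2, 3, 3, 4, 5, 5, 6, 8, 13]  # days per story (assumed)

def forecast_days(n_stories: int, history: list, trials: int = 10_000) -> tuple:
    """Resample the historical cycle times many times to simulate delivering
    n_stories; return the 50th and 85th percentile total durations."""
    totals = sorted(
        sum(random.choice(history) for _ in range(n_stories))
        for _ in range(trials)
    )
    return totals[int(trials * 0.50)], totals[int(trials * 0.85)]

random.seed(42)  # reproducible runs for the example
p50, p85 = forecast_days(10, historical_cycle_times_days)
print(f"10 stories: ~{p50} days (50% likely), ~{p85} days (85% likely)")
```

The output is a range with a confidence level attached, rather than a single number – which is precisely what separates a forecast built from data from a “fuck knows, call it two months” estimate.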
FACT: People who tweet with the hashtag #NoEstimates, or indeed any other hashtag, are not automatically saying “My tweet is congruent and completely in agreement with the literal meaning of the words in the hashtag”;
So, for the love of all that is holy, can we please move on from this argument? It circles round… and round… and round… and round again.
FACT: The prevailing way estimation is done in software projects is single point estimation;
That doesn’t mean I’m attacking estimates, or estimation. I’m just saying that when managers/customers ask for an estimate, they mostly want a number or a precise date.
FACT: The prevailing way estimates are used in software organisations is a push for a commitment, and then an excuse for a whipping when the estimate is not met;
This majorly sucks, and it is the reason why people get very heated and upset about this stuff. There is a horrible dysfunction prevalent in almost every organisation I’ve ever worked in or observed, large and small, validated by almost everyone I meet in the industry, and this dysfunction is heavily influenced by shitty application of estimation and estimates.
FACT: The above fact does not make estimates a useless artefact, nor estimation itself a useless or damaging activity;
Pleeeeease, can we move on from this argument? When you continue using this in attacks against “#NoEstimates proponents”, do not be surprised when they stop engaging. They are not engaging because of your crappy circular argument, and in many cases downright rudeness, not because they don’t want to discuss the topic.
#NoEstimates has had an amazingly positive influence on the software industry, not least because it has generated healthy discussion between developers and managers that simply wasn’t happening before.
So, I beg you, can we please move on from the arguments listed above about the bloody hashtag and the difference between “forecast” and “estimate”? Let’s use our collective influence and intelligence to take the discussion forward to how we can cure the horrible cancer in our industry of Estimate = Date = Commitment.
Next Friday, please be careful not to use the word “resource” at work when referring to people. If you catch someone else doing this, please call them out.
If you’re not sure why this is important, please click here.
What else can you do to support this cause?
- Spread the word by sharing these pages and tweeting with the hashtag #WorldNoResourcesDay
- In meeting invitations, and/or at the start of meetings, mention to people that it is #WorldNoResourcesDay and ask them to respect it
- Have an open conversation in your workplace about the topic
- Do something to help someone have a great day at work
- Speak to everyone you encounter with respect, authenticity and a genuine smile on your face
Jim Benson (@ourfounder), author of Personal Kanban, wrote a blog post recently listing “five estimate pathologies”:
- Guarantism – The belief an estimate is actually correct.
- Promisoriality – The belief that estimates are possible.
- Swami-itis – The belief that an estimate is a basis for sound decision making.
- Craftosis – The assumption that estimates can be done better.
- Reality Blindness – The insistence that estimates are implementable.
It’s a nice post, and one close to my heart (#NoEstimates and all that).
I agree with all of these, and I think there is at least one more damaging pathology with estimates in our industry. I will call it:
Accepti-ism/Resign-ism – The belief that, when someone asks me for an estimate, I should simply go off and tell them “how long” rather than ask more questions about the actual information required and the underlying need.
Much like when managers ask developers to write crappy code to be quicker, developers have choices in how they approach the situation, which can allow them to keep their professional integrity intact.
To deliver high quality software that solves a customer problem, we need to ask what the time/budget constraints are. What is the impact of taking 6 months versus 3 months (cost of delay)? Is there a firm market deadline, such as a customer commitment or a date after which we lose an opportunity? We also need to know if there are fixed requirements, and what exactly the outcome is we are looking to achieve.
Without this information we cannot make good decisions when we are prioritising one thing over another, making trade-offs, or trying to simplify things. We will also not be sure how best to measure progress. Forecasting how much we can get done by a deadline is very different from validating a customer problem or solution hypothesis, or from measuring actual value realisation, such as an increase in order value or conversion rates. A burn-up chart simply won’t cut it for these, and outcomes don’t necessarily happen neatly within time-boxes.
We tend to shoot ourselves in the foot with estimates (I wrote a blog post last year about this). When asked for them, let us find out more information such that we can use creative problem solving to provide options. Are we building a solution to fixed requirements by a particular date? Are we providing time/cost options for a specific outcome? Is the customer problem understood or do we need to validate that first? Do we need to iterate over solution hypotheses or is there already good research around the solution you want to provide?
“In 3-4 months we could probably provide capability X and Y. Z is possible, but might take longer. We’ve done something similar before in that timeframe, but there’s more uncertainty/risk in this one. We’re going to run an experiment in week 1 to learn more, and keep you updated with progress as we go along.”
There are many questions to ask, and we’re simply not doing our job properly if we just dutifully go away when asked “how long will this take?” and come back with a number or a date.
A controversial blog post written last year (not by me this time 😉 ) is doing the rounds again on Twittersphere and has provoked a lively debate/argument in the last couple of days.
It suggests that Scrum is broken for many reasons, but one of those reasons cited is that the 15 minute time box for Daily Scrum (aka Standup) is too short to allow for meaningful conversations, and promotes a STFU culture.
IMO, in the situation the author describes, Scrum has exposed that perhaps the author and his colleagues feel their time together in the morning is so precious that they resent ending it artificially.
As the Scrum Master in this situation I would be trying to find out why they are not able to continue talking after the Daily Scrum. Perhaps they struggle to find the time or will to collaborate during the working day, so being forced together by the Daily Scrum gives them an opportunity that they do not want to cut short?
In other words, perhaps Scrum is exposing existing issues that are nothing to do with Scrum?
Imagine Scrum is taken out of the equation. There is now no formal meeting in the morning to bring these colleagues together, so they must find other opportunities in the day, the week, the month, to share the information they need to share.
Nothing is stopping them from doing this without Scrum, and nothing is stopping them from doing it with Scrum.
So what should we do when we are frustrated with the Daily Scrum time box? Luckily, Scrum also has a retrospective to help the author’s team uncover why they feel they need to spend longer together in the morning versus collaborating more throughout the day. An insight will emerge from this, and they will adapt.
It is easy to blame the Daily Scrum for this situation and react by getting rid of the time box, the Daily Scrum, even Scrum itself.
But why do this when instead we can ask “why”, and perhaps find an underlying situation to address which is nothing to do with Scrum?
Perhaps we are afraid to ask ourselves the really hard questions about our undesirable situations at work, and thus it is inevitable we will revert to blaming the process we are using – anything to avoid confronting the real issues. This is a common pattern from Scrum teams, and organisations adopting Scrum.
I am not suggesting that I have nailed the problem that the author is having in his workplace. I might be way off the mark. Perhaps Scrum is a bad fit for his team or his organisation. Perhaps Scrum is flawed.
But, perhaps, the argument around the time box hasn’t exposed a flaw with Scrum, but rather an opportunity for the author to work with his team to improve working relationships?
My name is Neil, and I’m a methodologist.
I use principles from Agile, Scrum, Kanban, Lean Software Development, Toyota Production System, Theory of Constraints – and many other schools of modern management thinking – in how I do my work, every day. I do not embed my thinking exclusively in any one of these. I try to think contextually while also recognising patterns that apply across contexts (I believe that playing the context card is often a cop out).
I believe that in order to be effective at something in a complex environment, such as an organisation attempting to build software products, it is important to have methods. Approaches for how to do things. Approaches that can be articulated and repeated.
These methods do not necessarily have to be named methods (although such methods are often a good start, given that someone before you has put a lot of thought into how to be effective in a particular commonly faced situation). What is important is that you understand what your method is for dealing with that situation, and you can articulate it.
For example, organisations want to deliver software solutions rapidly to their customers. There are named methods to help you achieve this. Or you can choose to do things differently. But can you describe how you approach your software development? Can you sketch it on a whiteboard? Can you explain it to someone? What is your method?
Some startups want to keep their business model flexible such that they increase their chances of success. They apply Lean Startup principles. Others run experiments and build MVPs but ignore (or remain unaware of) other parts of the Lean Startup methodology. So what exactly is their approach? Can they explain when they make the decision to build MVPs and when they don’t? Do they know how and when to make pivot/persevere decisions? What method are they using?
Similarly, organisations want to continuously improve (or at least some teams/individuals do). What method do they use for this? Some try and run effective retrospectives in their development teams as part of a Scrum process. Some (few?) use Toyota Kata and try and embed a kaizen culture. Some try something else. But what is that something else? What is their method for embedding continuous improvement in the culture of their organisation?
Teams want to slice their stories to be small and simple, such that they can deliver things quickly to their customers, and receive fast feedback to ensure those things are valuable for the customer. Some use methods such as Elephant Carpaccio and the Hamburger Method or Slicing Heuristics to do this. Others take an ad-hoc approach, or don’t really slice stories at all.
Methods often get a bad name, at least named methods do. But without a method, how will you achieve what you want to achieve? How will you know you are being effective, and moving in the right direction?
An inception is a great way of kick-starting a collaborative piece of work between people or teams who are coming together for a specific purpose, often for the first time. An effective inception results in shared understanding of all aspects of the work to be done, builds relationships within the team and increases the chances of success.
In this guide I list activities and principles that can lead you towards an effective inception. The time you spend on each activity will depend on the circumstances. For example, if there is a rigid set of business requirements (such as for a compliance project), you might spend less time on the Who, What and Why and more on the How. If you are building a new product or feature, the opposite ought to be true.
What’s the purpose of an inception?
Given that the team will be working together to achieve the goals, it is important to come out of an inception with enough information to start the work, not to finish it.
The start of a project is the time we know least about what it entails. Given that we all intend to get smarter over time rather than denser, and that requirements or new information will emerge over time, it makes sense to allow ourselves the flexibility to defer decisions and commitments until we know more about the risks and rewards.
How long should it be?
An inception should be kept as short as possible, but long enough to achieve the desired outcomes. Typically this is 2-5 days. If this does not seem long enough, it may be worth considering if the requirements can be split out and prioritised such that the resulting work to be sequenced next is not a big, risky project.
Who runs it?
Anyone with workshop facilitation skills and a good understanding of what makes for an effective inception can run an inception. This might be a Scrum Master, a BA, or anyone else.
Given the intensity of being in a room with a group of people for a few days, it often helps to have 2 facilitators to make for a more smoothly run event.
It’s also a good idea for the facilitator(s) to be independent and have no stake in the work being undertaken (i.e. no “skin in the game”).
What are the intended outcomes?
The key takeaways of an inception should be answers to the following 3 questions:
- What are the Goals of this work?
- What is our Method to help us deliver these Goals?
- What are the Constraints that might prevent us delivering these Goals?
Let’s talk more about these outcomes and then delve into some activities, artefacts and rules to assist you in achieving them.
“If I’d asked my customers what they wanted they would have said ‘a faster horse’”
~ Henry Ford*
- Who is the customer (segment) for whom we are looking to address a problem/need?
- What is the customer problem/need?
- What capability are we looking to give the customer that they don’t currently have? (i.e. customer value)
- What are our assumptions? How do we know this is a problem/need for the particular customer segment?
- Why should we address the problem/need? (business value)
- How might we do this? What are our high level solution options/hypotheses?
- How will we know if our chosen solution option is the right one? (Build-Measure-Learn, testing a hypothesis through experimentation, MVP, etc.)
- How do we ensure that, in addressing the need, we provide a wonderful experience for the user/customer?
*There is no evidence that Henry Ford ever said this. However, it is a very good analogy for why it is important to deeply understand the customer need and not just deliver what they (or someone representing them) are asking for.
- How will we work together?
- Team roles/responsibilities
- Collaboration tools (physical/virtual boards, pairing, shared documents, etc.)
- Length of iterations/No iterations
- How will we monitor progress toward our goals, and make this progress transparent?
- Release strategy (e.g. daily/weekly/monthly/driven by scope/etc.)
- What are our first steps on this journey together?
We have had a conversation about the potential value of the work. We also need to understand any time, cost or other constraints that might affect the delivery of a successful outcome.
“Non Functional Requirements” (NFR)
Sometimes we need to consider “necessary evils” and operational factors when looking to deliver something of value to a customer – for example performance, security, scalability, availability, accessibility and regulatory compliance.
There are many more examples of such requirements, and it’s worth considering how some or all of these might need to be factored into your development process, e.g. as part of Definition of Done, acceptance criteria for specific stories, etc.
RAID (Risks Assumptions Issues Dependencies)
We need to identify issues and address them as soon as possible. Risks can become issues, so we need to call them out and actively manage them. Assumptions can lead to bad decisions if we do not test them and be prepared to be wrong. Being clear about which of the “requirements” and/or solution options are assumptions rather than facts can help us greatly in producing a better result for the customer.
It’s also worth calling out all external factors and dependencies, technical and otherwise, anything that prevents the team from autonomously delivering the outcomes:
- Other specialist teams who are not 100% committed to this work that we will need help from, such as operations, marketing, finance, design, testing or performance teams
- 3rd party vendors
- Distributed systems
- Technical infrastructure dependencies (e.g. server/platform requirements, lengthy or complex deployment/release process, etc.)
This topic deserves its own section because it is so important in the context of setting teams up for success, and establishing trust and transparency across the business. Please read my blog post “How to not shoot yourself in the foot with estimates” for a bit more context on the reasons behind this.
It is customary to approach this part of an inception by simply establishing all the scope required to deliver the goals, then estimating how long it will take. However, this is usually fraught with risks such as:
- Attempting to be precise rather than accurate
- Providing a list of user stories and a delivery date for those stories, rather than a range of options for what might be achieved by a date,
e.g. “This will all be delivered on June 12th” versus “We are fairly confident we can deliver an initial release of solution A in June, but solution B would be more like July or August”
- Not providing confidence intervals or sufficient error margin for estimates when delivering a more fixed set of requirements
- Not understanding the true constraint, hence being ill-equipped to make good decisions when designing solutions and making trade-offs
- Estimates being treated as a commitment/promise of when the team will be “finished”, rather than what it is – an estimate!
- Underestimating the impact of other work on delivery times
- Estimating only development effort and not considering the true end-to-end delivery (lead) time, taking into account dependencies in marketing, operations, etc.
- Underestimating the time it takes for new teams to become productive (see “forming, storming, norming and performing”), and the increased variability of story delivery cycle times for the first 8 weeks or so
- Assuming that there is a logical “end” to the work (project), on a date or after delivery of a set of features, rather than looking at the work as a value stream which will be continually optimised for value until there is something more valuable to work on
The output of the “what will I get and when?” conversation should entirely depend on the circumstances:
– Is there a deadline driven by a client/market commitment? If so, what’s the impact to the business and our customers of missing this deadline (cost of delay)?
– Is there a deadline driven by a budget/time constraint?
If there is a deadline, use this as the driver for creative problem solving within the given constraint. Have 3 months? Ask: Is it feasible to solve the problem in 3 months? What about 2 months? 1 month?
If there is not a deadline, what is the impact of delaying the work? Or not doing it at all?
Assuming the decision has been made to do the work, create your own constraints to drive an iterative approach to thinking, design, development and decision making. Ultimately you are managing the risk of not delivering the right thing within the constraints. Still think in terms of solving the problem as quickly and simply as possible, get feedback, learn and iterate. Timeboxed iterations (sprints) work excellently for this.
- Strategic goal
– Is the outcome about creating happier existing customers, acquiring new customers, opening up a new revenue stream, exploiting revenue from an existing stream, cutting costs, differentiating yourself in the market, reducing business risk or something else? The answer to this question absolutely impacts whether you should be driven primarily by dates, cost, value, outcomes or a combination of these.
What are some typical activities in an inception?
- Explain the thinking behind having an inception
– this will be a new concept for some or all participants, so it’s important they understand why this approach is being used versus a traditional requirements gathering exercise
- Hopes and Fears
– find out people’s expectations from both the inception and the work itself; often this exercise produces quite emotional responses, such as from people who have been burned by unsuccessful collaborations in the past, or who are sceptical about the approach you are taking; look to build confidence in the “hopes” and alleviate the “fears” throughout the inception
- Questions board/Parking Lot
– find answers from participants to as many questions as possible raised during the inception, and create actions to get the other answers
– write up ambiguous terms and acronyms throughout the inception
- Story Map
– map out the activities, roles and desired capabilities (the Who and the What), prioritise and create slices through the result (starting with a “walking skeleton”)
- High level solution options
– the How; don’t go too deep here, or there is a risk you are committing too early to a particular technical solution; again, you’re looking for viable options and perhaps enough detail to get started, not to flesh out an entire solution design
- Team charter/contract
- Collaborative games
– useful to establish “safety” (honest, open, fearless, authentic environment) and build relationships/trust
It is important to establish some minimal rules with the participants for the inception to ensure that the desired outcomes are reached, and that everyone gets to fully participate.
The following are some examples of rules you might consider:
- 1 person talks at a time
- No laptops
- No smartphones
- Stick to timeboxes
What do you think about inceptions? Have I missed anything useful out of this guide? Do you disagree with anything I’ve included?
I’d love your feedback!
Following on from my recent post “How do I know I’m an effective Scrum Master”, I have applied similar thinking to the role of the Product Owner in this sister post.
The article is an attempt to give the Product Owner role a fuller and more worthy definition. I describe the responsibilities and behaviours that I think are important. Some are straight from the Scrum Guide, or influenced heavily by it. Others are my own take on what is important.
From these responsibilities and behaviours I have derived some possible goals that Product Owners can set themselves to help demonstrate both their overall value and that they are on a continuous improvement path. It is vitally important that we can all understand how to be effective given the capacity we have in our particular role. This requires a way of measuring our effectiveness by ensuring we are carrying out our responsibilities and that we are achieving progress because of this.
Note this is not about “performance management”. This is about having a way of monitoring our own progress and that of our peers, and ensuring we are always trying to improve. The goals for each role must marry up and be congruent, such that everyone is pushing in the same direction.
Much like a burn-up chart, individual performance management incorrectly focuses on output rather than outcomes. Efficiency over effectiveness. As systems-aware, lean-thinking knowledge workers, we are more interested in improving system (organisation) performance. A focus purely on individual performance is a local optimisation. But that is a post for another day.
As ever, I’d really value your feedback, dear reader!
In a Scrum sense the Product Owner is a single person responsible for optimising the value of a product, iteratively improving it until it no longer requires or warrants further investment.
The reality in many organisations is that Product Owners do not necessarily have full empowerment from this perspective. However, they play a pivotal role in ensuring the needs of the business are represented in a clear roadmap of solving customer problems through product strategy, and that the development teams are working on the right thing at the right time.
- Inspire the team and all stakeholders with your overarching vision on what you think needs to be delivered in order to accomplish the company’s strategic goals, and why you think this is the right approach
- Bring together disparate and competing priorities from all stakeholders into a clear, compelling roadmap for your team
- Optimise the value of the work your team performs
- Own and manage the Product Backlog
- Clearly express each Product Backlog item at the appropriate level of granularity, ideally in a way that concisely articulates the problem it is looking to solve or goal it is looking to achieve; use techniques such as acceptance criteria and story slicing to create this clarity of intent
- Identify and socialise fixed dates (deadlines) after which the value of Product Backlog items diminishes, such as customer commitments, regulatory requirements or key dates in the market
- Order the items in the Product Backlog to best achieve the company’s and its customers’ goals and missions, bearing in mind the impact of delaying items which have deadlines
- Ensure the Product Backlog is visible, its intention and how it was ordered is transparent and clear to all, and it shows what the team will work on next
- Ensure the team understands items in the Product Backlog to the level needed
- Ensure the team is focused on building the right thing at the right time
- Ensure the team has a safe environment in which to experiment, learn, collaborate and challenge ideas
- Be a customer champion, putting the customer at the centre of everything you do
- Be prepared to be wrong; Call out assumptions and test them with real customers as quickly as possible
- Collaborate with the team on each Sprint’s objectives and understanding the work involved, making trade-offs where necessary
- Negotiate scope and priorities with the team, both between Sprints and during them, as more is learned
- Monitor progress toward customer and business goals, ensuring the team’s progress toward these, and artefacts representing this progress, are visible and transparent to all stakeholders, at least in the Sprint Review; Remember that “Working software is the primary measure of progress”, and that “working software” is not just software that works but is “software that is doing work for a customer”
- Ensure that each “done” increment of product/service represents an end-to-end user scenario – something of value to an end user (e.g. a capability is given to a user that they didn’t have before) – and there is at least one acceptance test (preferably automated) that confirms the software does what the customer expects it to do
- Decide when to release the product/service as it stands (i.e. all “done” increments), negotiating with Product Managers and other stakeholders where necessary
- Deeply understand the customer and their needs, and form opinions on how your team can best serve these needs
- Continuously validate via customer/user conversations, testing, research and analytics that the team is on the right path, and be prepared to change tactics when appropriate
- Always be able to articulate the current customer need your team is trying to meet, what they are doing to accomplish that and what they intend to do next
- Employ an iterative and incremental approach to delivering solutions – in addition to upholding the three pillars of empiricism: transparency, inspection and adaptation – in order to optimise predictability and control risk
- Given constraints such as fixed cost, time or scope, look to generate options and defer commitment to one particular option as long as possible; Options have value
- Use collaborative approaches to understanding customer needs and evaluating solution options, such as workshopping, story mapping and story slicing (e.g. walking skeleton, elephant carpaccio, slicing heuristics, etc.)
- Continuously improve the way you and your peers work, such that shared goals can more effectively be met
- Understand and practice agility
- Foster and encourage face-to-face conversation and collaboration such as swarming, pairing, mobbing and workshopping
- Help the team evolve their “Definition of Ready” and “Definition of Done” to incrementally improve quality of customer outcomes and traceability to business outcomes
- Influence the team toward a continuous improvement and experimentation culture for both product and process, ensuring that each Sprint becomes more effective and enjoyable
- Work with other Product Owners to increase the effectiveness of product ownership and its application within the principles of Scrum/Agile in the organisation
—– ♢ —–
“Do right. Do your best. Treat others as you want to be treated.” ~ Lou Holtz
- Always listen and be present
- Show empathy and respect for people’s situation, role, ideas and needs
- Encourage simplicity over complexity
- Tell stories that inspire rather than treating the Product Backlog as a queue of tasks to be done
- Always have empathy and advocate for the customer, and help the team to have a similar focus
- Allow software craftsmanship to thrive; Never infer or imply that teams should sacrifice the quality of their solutions for a quick win
- Recognise that speed of progress is about choosing the right things, working in small increments of value, embracing feedback/learning and delivering with technical excellence rather than “increasing velocity”
- Be a “true North” for the team, but always value their input in how to move forward
—– ♢ —–
Effectiveness is the consistent achievement of goals. Manifesting responsibilities and behaviours into clear, measurable goals will help Product Owners on their improvement journey, which in turn will lead to the organisation becoming more effective. It will also demonstrate the immense value a Product Owner can bring to an organisation.
The goals should be congruent with the behaviours and responsibilities, such that embracing these will naturally lead to achieving the goals.
Note: the term “user story” is used below to represent any work item that delivers end-to-end customer value – it does not prescribe the use of user stories.
- Product Owner NPS consistently over 50 (ask the team “How likely is it that you would recommend your Product Owner to another team?”)
- Customer NPS consistently over 50 for your products/services
- Reduce Lead Time (average time from a user story being added to the backlog to working software being used by a customer)
- Reduce Cycle Time (average time a user story is in process, aka elapsed time from work started to work completed)
- Current strategic and product goals, and progress toward them, are visible and can be clearly articulated
- Product Backlog is visible, ordered by value and this value can be clearly articulated
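The NPS goals above have a precise calculation behind them. Here is a minimal sketch in Python (the scores and function name are illustrative, not from the post): respondents answering 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors.

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey answers.

    Promoters score 9-10, detractors 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to 100).
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: a team of 8 answers "How likely is it that you would
# recommend your Product Owner to another team?"
team_scores = [10, 9, 9, 8, 9, 10, 7, 6]
print(nps(team_scores))  # 5 promoters, 1 detractor, 2 passives -> 50
```

Note how unforgiving the metric is: the two "passive" 7–8 responses count for nothing, so "consistently over 50" genuinely requires most of the team to be enthusiastic promoters.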
What about you? How do you know you’re an effective Product Owner, or that the Product Owners are effective in your organisation?
I often hear the role simplified to descriptions such as “someone who removes the team’s impediments” or “team coach“. I believe such descriptions sell short what can be an extremely beneficial role to embrace in your organisation, and often result in a situation where it is deemed OK to make anyone who puts their hand up the Scrum Master. We don’t tend to do that with Developers, Testers, Business Analysts or Project Managers, so why do we place so little emphasis on the attributes that make up a great Scrum Master when we’re trying to embrace an Agile/Scrum approach to working?
Here is my attempt to give the Scrum Master role a fuller and more worthy definition. I describe the responsibilities and behaviours that I think are important. Some are straight from the Scrum Guide, or influenced heavily by it. Others are my own take on what is important.
From these responsibilities and behaviours I have derived some possible goals that Scrum Masters can set themselves to help demonstrate both their overall value and that they are on a continuous improvement path. This might be very helpful, and not only if your organisation still uses performance reviews 😉
I’d really value your feedback, dear reader!
Many consider the Scrum Master role as part-time. Some question what a Scrum Master actually does! In my opinion the Scrum Master role is full-time, and I consider it to be absolutely instrumental in fostering a collaborative, humane, happy and productive environment in knowledge work industries.
Great Scrum Masters help companies embrace openness, honesty, transparency and authenticity. They seek out frequent feedback so they can improve along with their teams. They try to be honest and authentic with themselves and others. They recognise that it is up to them to be the change that they desire.
There is always something to improve in both the environment and ourselves. As such there is always something for a Scrum Master to do
… to the Scrum Team (aka Product Owner and Development Team)
- Help the team quickly make visible, remove and prevent blockers/impediments to progress
- Support and coach the team in whatever activities are required to regularly deliver shippable increments of product (fully tested, deployed, documented, being used by a customer)
- Ensure the Product Owner knows how to arrange the Product Backlog to maximize value, and find techniques for effective Product Backlog management
- Help the team understand the need for clear and concise Product Backlog items, and achieve this through techniques such as user story slicing and acceptance criteria
- Understand product planning in an empirical environment
- Ensure the team’s progress toward customer and business goals, and artefacts representing this, are visible and transparent to all stakeholders
- Understand and practice agility
- Foster and encourage face-to-face conversation and collaboration such as swarming, pairing, mobbing and workshopping
- Ensure that all Scrum events are happening, but do not necessarily conduct the event (Daily Scrum is a case in point here – the Development Team should conduct this meeting)
- Facilitate Scrum events as requested or needed, ensuring that all participants understand the event’s purpose and keep to the time-box
- Help the team evolve their “Definition of Ready” and “Definition of Done” to incrementally improve quality of customer outcomes and traceability to business outcomes
- Ensure the team’s process is visible and understood by all team members
- Ensure the team’s “system” – i.e. Lead Time, Cycle Time for various types of work and WIP limits as appropriate – is understood by the team and stakeholders
- Influence the team toward a continuous improvement and experimentation culture for both product and process, ensuring that each Sprint becomes more effective and enjoyable
- Encourage and coach team ownership of their process, task wall and workspace
- Coach the Development Team in self-organization and cross-functionality
- Help the Development Team create high-value products and services
… to the Organisation
- Support, coach and train the wider business in the principles of Agile, Lean and Scrum, particularly empirical product development
- Work with other Scrum Masters to increase the effectiveness of the application of Scrum in the organisation
- Support and coach people outside of the team, including other Scrum Masters, to help them thrive in an agile organisation
- Influence toward a continuous improvement and experimentation culture for both product and process across the organisation
“Do right. Do your best. Treat others as you want to be treated.” ~ Lou Holtz
- Always listen and be present
- Let others facilitate and take the limelight; allow leadership to thrive and see this as a success of your coaching style
- Show empathy and respect for people’s situation, role, ideas and needs, and adjust your coaching style accordingly
- Do not “push” Agile/Scrum/Lean/etc. principles/tools/practices on anyone; instead, try and understand what they want to achieve and help them become more effective in an agile context
- Change the environment, not the people
- Lead by example; be someone that people want to follow by inspiring them to unleash their inner potential
- Encourage simplicity over complexity
Effectiveness is the consistent achievement of goals. Manifesting responsibilities and behaviours into clear, measurable goals can help Scrum Masters on their improvement journey, which in turn will lead to the organisation becoming more effective. It can also demonstrate the immense value a Scrum Master can bring to an organisation.
The goals should be congruent with the behaviours and responsibilities, such that embracing these will naturally lead to achieving the goals.
Note: the term “user story” is used below to represent any work item that delivers end-to-end customer value – it does not prescribe the use of user stories.
- NPS consistently over 50 (ask the team “How likely is it that you would recommend your Scrum Master to another team?”)
- Reduce Lead Time (average time from a user story being added to the backlog to working software being used by a customer)
- Reduce Cycle Time (average time a user story is in process, aka elapsed time from work started to work completed)
- Development Team runs their own Daily Scrum
- Current strategic and product goals, and progress toward them, are visible and can be clearly articulated
- All current team impediments are visible, and an action plan for quickly removing them can be clearly articulated
- Evolutionary improvement of Definition of Ready and Definition of Done can be demonstrated
- The team’s current process is visible and can be clearly articulated
- Continuous improvement experiments are run frequently and can be explained, both in terms of their formulation and outcome
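The Lead Time and Cycle Time goals are straightforward to track from per-story timestamps. A minimal sketch, assuming each story records when it was added to the backlog, when work started, when it was completed, and when it reached a customer (the field names are illustrative):

```python
from datetime import date
from statistics import mean

def lead_and_cycle_time(stories):
    """Average Lead Time (added to backlog -> in use by a customer)
    and Cycle Time (work started -> work completed), in days."""
    lead = mean((s["in_use"] - s["added"]).days for s in stories)
    cycle = mean((s["done"] - s["started"]).days for s in stories)
    return lead, cycle

stories = [
    {"added": date(2015, 3, 2), "started": date(2015, 3, 9),
     "done": date(2015, 3, 13), "in_use": date(2015, 3, 16)},
    {"added": date(2015, 3, 4), "started": date(2015, 3, 16),
     "done": date(2015, 3, 20), "in_use": date(2015, 3, 23)},
]
lead, cycle = lead_and_cycle_time(stories)
# Average lead: (14 + 19) / 2 = 16.5 days; average cycle: (4 + 4) / 2 = 4 days
```

The gap between the two numbers is itself informative: here stories sit in the backlog and in release queues far longer than they are actively worked on, which points at where to look for improvement.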
What about you? How do you know you’re an effective Scrum Master, or that the Scrum Masters are effective in your organisation?
I’m seeing more and more evidence in the workplace that many of the problems caused by software estimation are a simple case of teams shooting themselves in the foot.
Cocking the gun
Here’s how it pans out. A team is asked to estimate a bunch of scope (Note – this may or may not be because someone in sales has made a commitment to a customer to deliver a set of features by a date and they need to make sure the team can deliver on that promise – or someone in marketing has gone ahead and planned a big marketing campaign and needs a bunch of stuff done in time for that date – let’s put those situations aside for now).
The team do what they are told and come back with an estimate. Let’s say 6 months (herein lies one problem that gets called out about our estimation process – precision rather than accuracy – this is a simple thing we can avoid, but that’s not enough).
Despite any claims to the contrary, this date now gets embedded in our thinking and the expectations of someone, somewhere. The date becomes a deadline for one or more stakeholders. The team innocently never thought of it as a deadline. They increment their merry way through the work. The inevitable delays happen. Other work infiltrates team members’ time. Because the team were not measuring progress against a deadline, they were not pushing back on the product owner to trade off other features for those extra features being added to the backlog. They were assuming that the PO was simply prioritising other things, and that everyone understands the idea of a product backlog is not to deliver everything on it. Unfortunately that is a flawed assumption.
Status – Green!
One fine, sunny day, the “deadline” begins to loom. Alarm bells start to ring. The team is not going to hit the deadline. They get berated for this fact. How did they not know earlier that this was going to happen? The team gets annoyed with the managers who are blaming them. They take to Twitter and tweet platitudes with the hashtag #NoEstimates 😉 The managers are annoyed with the team for providing crap estimates or not showing enough commitment to meeting the date. They are getting their asses kicked by their bosses. “That’s business”, after all.
"Guys, we must stop fucking up and take things more seriously. Screw this self-organisation nonsense, we need to tighten up our processes and get more detailed estimates from the teams".
I promise you, there are better ways of working! I have found this. Many others have found this. There is a simple way to decrease the chances of this crappy situation occurring, and it all stems back to the day when the team is asked to estimate the work.
If you are part of a team asked to estimate how long a bunch of stuff will take, say:
"Cool, no problem. Just so we have all the info, may we ask if there is a market deadline? A date we've committed to in some way?"
Suppose there is a real, meaningful deadline. This is a good thing to know, to get behind. In your inception, talk about possible ways to solve the problem in that timeframe. Then talk about how to solve it in a much shorter timeframe. Imagine you only have 1 month to solve the problem. Use techniques to slice the requirements into smaller, simpler chunks of value that deliver outcomes. If you can’t slice the work by outcome, you can certainly slice it into chunks that will help you glean useful feedback and measure progress more readily.
Doing this work provides options and frames things in a leaner way. Even if it is not possible to deliver all the scope, there may be many paths to helping the requestor. Split out the work into 5 or 6 headline chunks with clear, achievable outcomes that can be prioritised/sequenced.
Suppose instead there is no real deadline. Then there is a real danger this project will drift for eternity. To avoid this, do the same exercise as above for 3 month and 1 month timeframes. Create your own constraints and options. This provides much better info for the requestor, but also avoids the team creating a concrete deadline for themselves where none exists. Again, it will also help you measure meaningful progress toward your goals.
What would be the impact if this project was delayed for 6 months?
The answer to this question will help us understand more about the ROI margins, and the true urgency of the project. We can remind the PO of this fact every time new features arrive on the backlog. We understand the deadline, the impact of delay. Great! When anything comes along that might delay us, we will call this out. Make it visible. Transparent. Our Agile values coming to the fore. We don’t want to be berated for not hitting deadlines, especially ones we do not know are deadlines.
Put the gun away!
Without saying “no”, or trying to get managers to understand #NoEstimates concepts, or trying to change the culture, we – teams – can make a huge difference to removing the typical dysfunctions around software estimates, simply by asking the right questions at the start of projects and being single-minded about delivering great outcomes for the customer and the business.
Why not try these techniques next time you are asked for an estimate? It will help you and your colleagues avoid shooting yourselves in the foot.
Performance reviews or appraisals are not a new thing; they have been an established business practice, in their current form, since the 1950s.
From an HR perspective performance reviews are intended to be about retaining employees. This is important because people are expensive to lose and replace; retaining talented people is a key contributor to organisational bottom lines.
From the employee’s perspective, the performance review is intended to provide an individual with feedback on their contribution, which will allow them to develop their strengths. It is also a key opportunity for managers to explicitly recognise high performing individuals and take steps to ensure that the right conditions are in place to retain those people.
Unfortunately, a recent study suggests that they don’t do that very well.
So what can go wrong?
Conducting a performance review is fraught with dangers for both the appraiser and the appraisee. Key amongst these is a failure to recognise that it is not just an administrative process, nor necessarily about compliance. It is an important negotiation, which in theory results in the appraiser and appraisee committing to changing their behaviour in order to facilitate behavioural change in the appraisee. Such conversations require a great deal of skill to navigate, and can cause great harm if not done well.
And how effective is this pursuit of improving individuals’ performance anyway? Performance Reviews are based on the flawed premise that improving performance of individuals can improve that of the organisation as a whole – it can’t, unless we get lucky and improve the performance of an individual who happens to be the constraint at that particular time.
As Deming rightly pointed out,
"A bad system will defeat a good person every time".
In reality, performance reviews often have the opposite effect to what is desired, damaging the performance of effective individuals rather than improving it. This is especially true of knowledge workers; we know that those who think and create for a living are not motivated by carrot and stick techniques to improve performance.
Highly engaged employees tend to disengage in and around review time, especially if the review harbours some surprise negative feedback, or if the feedback is great but the performance is not rewarded. How many times have you heard someone say “I’m only staying until bonus time, then I’m leaving”?
Since an individual’s line manager often conducts performance reviews, there is great potential for problems arising from the power imbalance between the parties. HR departments tend to try to mitigate this by sending the message that the performance review must be about the behaviour of the individual, not their personality.
The argument goes that it is reasonable to have expectations about someone’s behaviours, since they are under that person’s control. However, this is not the case for personality, which is intrinsic to the adult individual. That said, an understanding of the ways in which an individual’s personality might impact their behaviour, for example through the Myers-Briggs Type Indicator, may be of value to the individual.
The reality is that many managers simply don’t have the “soft” skills to provide people with feedback that isn’t (albeit unintentionally) insulting, patronising or plain wrong (a year is a long time to remember things and build a skewed perception of a person or incident), that doesn’t smack of “you should try and be more like me”, or a combination of these. 360 reviews can be equally damaging for the same reason – it’s easy to hide behind a criticism in written text that you wouldn’t say to someone’s face, and this can have nasty consequences.
Further, feedback on the manager is not built into the process; the review is all about the manager’s view of “the worker” rather than an opportunity for the employee to tell their manager how to be a better one. Even where there is an avenue to provide feedback “upwards”, it may not be provided due to concerns that negative feedback for the manager can be deemed as whining/complaining and thus be damaging to promotion prospects or even the employee’s current role.
This is especially likely when there is no transparency about whether feedback to the manager will be seen by her/his manager. This raises questions about whether managers are actually accountable for their performance as leaders of people or just their department’s results.
Additional problems arise from measuring someone against a set of KPIs which may be poorly defined, or defined in detail but flawed in what they represent. KPIs are usually tied to an individual’s activities or goals rather than accomplishments of the team or the organisation. This can lead to siloed behaviour (local optimisation), not to mention a feeling of “how on earth can I affect that KPI on a day-to-day basis?” or “what the hell does that KPI have to do with my performance?” For example, marking someone’s otherwise highly valued contribution down from very high to high because they failed to submit all their timesheets on time.
Finally, most managers dislike conducting appraisals. People naturally do not want to be put into a position which forces them to criticise another individual or to feel they must evaluate their worth to the organisation. This leads to resistance which, combined with poor skills and poor appraisal instruments, results in the organisation receiving poor quality data and consequently not achieving its goal of retaining and developing valuable people.
The Society for Human Resource Management proposes the following opportunities for improving the effectiveness of the performance appraisal process:
- Provide employee feedback more frequently than once a year
- Engage the team in the value of the performance management process, incorporating employee development and training as part of the review
- Use reviews as a means to express appreciation for personnel to help with staff retention
So why can’t a performance review be more like a retrospective?
“The Sprint Retrospective is an opportunity for the Scrum Team to inspect itself and create a plan for improvements to be enacted during the next Sprint.” 
Whilst it is acknowledged that the Scrum Retrospective premise is not perfect, it does offer a model which might better meet the objectives of the performance review. Many different approaches to conducting retrospectives are available which expand upon the popular “what went well, what did not go so well and what can we do to improve” format.
Regardless of the model selected, the key benefits offered by the approach are its frequency (at the end of each iteration), its focus on evaluating the behaviours of the team (not their personalities), and its requirement to identify concrete, time-based steps which can be taken to improve performance.
The objective of the retrospective is to make things better for everyone, which in turn helps the organisation; it is deliberately non-judgemental and avoids (ideally) the issues of power imbalance that can arise from the traditional review. Most importantly, it explicitly identifies learning and development as an ongoing process, which must be both iterative and incremental.
The starfish retrospective model seems particularly conducive to a more effective style of iterative/incremental performance review, especially the mutant starfish variant. This model is particularly good at identifying desirable outcomes and concrete steps to achieve them.
Consequently, it is interesting to ponder whether a performance retrospective process, conducted monthly or quarterly, with the discussion informed by a model similar to this, would produce better results than the classic performance review that most people endure today.
- D. McGregor, “An Uneasy Look at Performance Appraisal,” Training and Development Journal, vol. 41, no. 6, 1957, pp. 66.
- “2014 Performance Management Survey – PerformanceReviews.net,” 2014; http://performancereviews.net/survey/.
- D. Pink, “Autonomy, Mastery & Purpose,” http://www.youtube.com/watch?v=wdzHgN7_Hs8.
- A.D. Avraham N. Kluger, “The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory.,” Psychological Bulletin, vol. 119, no. 2, 1996; DOI 10.1037/0033-2909.119.2.254.
- “Myers-Briggs Type Indicator,” http://en.wikipedia.org/wiki/Myers-Briggs_Type_Indicator.
- “SHRM Online – Society for Human Resource Management,” 2014; http://www.shrm.org/pages/default.aspx.
- J. Sutherland and K. Schwaber, “Scrum Guide™ | Scrum.org – The home of Scrum,” 2014; https://www.scrum.org/Scrum-Guide.
- “The Retrospective Starfish | thekua.com@rest,” 2014; https://www.thekua.com/rant/2006/03/the-retrospective-starfish/.
- “The mutant starfish Retrospective | Agile Karma,” 2014; http://agilekarma.com/2014/08/11/the-mutant-starfish-retrospective/.
The debates about “Agile tools” continue. #NoEstimates is a debate about estimation as a tool for delivering software projects. Elsewhere, tool vendors and their friends continue to promote how their tool is different from the other tools. How “agile” their tool is compared to others.
I’m all for tools that teams choose to support how they work. I’m in software for a reason. I love technology. I love tools! If a team chooses Jira to help them become more effective, or to help support their organisation’s journey to become more effective, more power to them!
Personally, I’ve never seen a team willingly choose Jira to manage their work. However, I’ve seen lots of managers choose Jira as “the organisation’s Agile tool” to “manage their teams’ efficiency”.
Jira is not to blame. It does what it does. It’s brilliant at “tracking tasks” and “monitoring individual productivity”. It also provides pretty graphs for executives if they’re interested.
However, in the case of choosing an Agile project management tool for your organisation (or team), or asking somebody to estimate a piece of work, has somebody – anybody – actually considered what problem the choice of tool is solving? It’s very easy to say “we’re going Agile, so we need an Agile tool” (and I’ve heard it said many times). Or “we’re starting a project, so we need to estimate it”. Every single time I have delved deeper into the decision, what seems obvious becomes not so obvious. An executive asking her managers for “better visibility” gets translated into a forced company-wide adoption of a heavy, expensive and ill-fitting ALM tool. Asking them when an initiative might get delivered turns into a witch-hunt of why estimates “aren’t being met” and a crack of the velocity whip.
It’s not Jira’s fault that it happens to be the worst kind of lumbering antithesis of everything that Agile is about. It’s not estimates’ fault that decision makers misuse and abuse them, unwittingly creating a damaging knock-on effect on culture and project success.
A hammer is only dangerous in the hands of someone who does not know how to use it, or who intends to use it for harm. The same can be said for Jira, Rally, VersionOne et al. The same can be said for estimates.
Those that wield the wrong tools for the wrong job in the name of Agile are far more dangerous than any tool will ever be.
Ryan Ripley wrote a nice article called “#NoEstimates does not stop agile metric abuse”.
Couldn’t agree more, Ryan!
Anyway, Ryan made some assertions in the article that give a false impression of #NoEstimates, at least from my POV.
Here are the comments I added to his post, with a few bonus bits added
1. I do not “want the agile community to stop estimating stories on scrum projects”.
I have explicitly recommended that teams do not simply stop estimating, just because of the #NoEstimates buzz (just read my other blog posts on #NoEstimates if you don’t believe me :).
My message is that teams should decide how much work they take on in a sprint and not be asked to do any further deterministic estimation of “how long this will take”. If the PO needs to make forecasts beyond the sprint about release dates, or how much might get done by a particular date, they should use probabilistic forecasting techniques, not ask the team. All of this is as per the Scrum Guide.
Then, IMO, the PO should use this information to inform *prioritisation* decisions, NOT go/no-go decisions. “Shit, this feature might not get done if I leave it at number 10 in the backlog, so I’d better move it up”. This is good project management. Good risk reduction.
#NoEstimates is about making informed decisions, and I don’t believe asking the team how long something will take is the best way of getting the right information. Effort estimates do not account for lead time factors such as position in queue, priority changes and other delays. Even if they did, they will likely be wrong, so we should not place so much credence in them.
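For the curious, here is one minimal sketch of the kind of probabilistic forecasting I mean: a Monte Carlo simulation over the team's own historical weekly story counts. The function name, the history and the percentile are purely illustrative assumptions, not a prescribed implementation.

```python
import random

def forecast_weeks(remaining_stories, weekly_throughput_samples,
                   trials=10_000, percentile=0.85):
    """Monte Carlo forecast of how many weeks the backlog will take.

    Samples weekly completed-story counts from the team's own history,
    rather than asking anyone "how long will this take?".
    """
    # Need at least one productive week in the history to terminate.
    assert any(s > 0 for s in weekly_throughput_samples)
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_stories:
            done += random.choice(weekly_throughput_samples)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    # The week count we beat in `percentile` of the simulated futures.
    return outcomes[int(percentile * trials) - 1]

# Illustrative history: stories completed in each of the last 8 weeks.
history = [3, 5, 2, 4, 6, 3, 4, 5]
print(forecast_weeks(40, history))
```

The PO reads the result as “roughly an 85% chance of finishing 40 stories within N weeks”, and uses it to inform prioritisation decisions, not go/no-go decisions.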
2. Story points and velocity being gamed are not “the reason” for #NoEstimates.
They are one of many dysfunctions around estimation culture, but actually even when story points are used in healthy ways I still think it is better to use queuing theory, empiricism and better project management than asking people “how long”.
3. The “conversations that arise from planning poker” are not a benefit of estimating.
They are a benefit of collaborating as a team around a potential piece of work. In fact, planning poker conflates estimation and conversation in unproductive ways. For example, quicker consensus on estimates comes out of silent collaboration. Better planning certainly does not.
Planning poker explicitly avoids conversations when there is agreement about the story point size. There is much hidden risk within this perceived consensus. Many assumptions and ideas about solutions, who would predominantly do the work, prior discussions, etc. Not to mention all the cognitive biases we humans have when estimating tasks, and the likely lack of shared understanding of what story points mean in general, and what a “5 pointer” means in our context.
I advocate “story slicing” sessions instead of planning poker, with explicit policies on when to stop slicing (i.e. my “slicing heuristic”, contributing to the “definition of ready”).
4. You can provide transparency into progress by using cycle/lead times to show wait times in the queue/backlog without having to estimate every single thing.
In fact, the transparency is better with this approach. You can’t quickly derive a wait time from a backlog estimated in story points. Even if you line up PBIs for the next few sprints (which is completely not Scrum, but anyway), as soon as you want to change the priority of cards you end up with the headache of updating everything in order to keep the same transparency of being able to look at any card and say “this is how long we have to wait for that”.
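A sketch of what that card-level transparency can look like, assuming a reasonably stable average throughput (a Little's-Law-style approximation; the numbers are made up):

```python
def expected_wait(position_in_queue, avg_weekly_throughput):
    """Rough wait time (in weeks) for a backlog item, derived from its
    queue position and the team's average completion rate.

    Re-prioritising a card just changes its position in the queue;
    nothing needs re-estimating to keep the forecast current.
    """
    return position_in_queue / avg_weekly_throughput

# Illustrative: the card is 12th in the queue, team finishes ~4 items/week.
print(expected_wait(12, 4))  # -> 3.0
```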
5. Story point velocity is not required for the team to continuously improve and find their sustainable pace.
Cycle/lead times are better metrics to try and improve because they directly impact the customer (wait time). Even if you don’t game anything, increasing velocity does not necessarily reduce wait times for the customer. Proper prioritisation, an understanding of queues, good story slicing, good technical practices (quality, maintainability, agility) and a focus on value are more customer-centric areas on which to focus.
As for the sustainable pace part, teams can find that quite comfortably by slicing stories nice and small and counting them. Even better, they can work on one thing at a time (single-piece continuous flow) so there is never any risk of “taking on too much”.
As for the rest of the article about “coaching the wrong people”, I mostly agree
My family and I awoke to yet another beautiful morning. Lake Tahoe glistened an almost unrealistic blue across the way from our hotel room balcony.
We only had a few days to enjoy the serene beauty of this place, so we made a plan. There were particular places we wanted to visit, and activities we wanted to make sure we had time to do before we left, so we felt it was important to plan how we would spend each remaining morning and afternoon of our trip.
This particular afternoon’s plan was to visit Emerald Bay, a beautiful piece of beach a few miles round the Lake from our hotel, and to rent a couple of kayaks. Before that we wanted to spend some time swimming in the hotel pool with the kids in the morning, followed by watching the first semi final of the World Cup over lunch in a nearby restaurant (the game turned out to be Brazil’s unexpected mauling from soon-to-be-crowned-champions Germany).
After the game we jumped in the car and headed to Emerald Bay. We pulled up at the parking lot at around 4.10pm. For some reason our 6-year-old son Thomas was concerned that the kayaks would not be available, and would have closed at 4pm. We reassured him that there was no reason to worry. However, just to be sure we decided to check before we embarked on the 1 mile walk down to the beach, so my wife Kathleen called the company.
Lo and behold, the kayak rentals closed at 4pm. My son was devastated, and Kathleen and I were a little disappointed too. We comforted Thomas, assured him we would come back in a couple of days, and started driving back to the hotel.
On the way back, we noticed a sign for Eagle Falls. Kathleen mentioned that she had read it was a nice little hike and might be worth doing, so we decided to stop. We started the rocky walk down towards the distant sound of gushing water. The warm sun beat down on us as the sound got gradually louder.
We eventually made it to the falls. Small by the standards of some of the World’s more spectacular falls, but an extremely beautiful little spot. We perched on rocks and dangled our feet into the fresh, streaming water. I cupped my hands and drank from the pool. A perfect drink on this warm day, after a tricky walk/climb down through uneven boulders, rocks and terrain to where we now sat.
I gazed across at nature’s pool. A mini-lake of sorts, with a waterfall delivering gallon upon gallon of crystal clear liquid into its heart. I thought to myself how wonderful it would be to take off my shirt and swim in this small piece of paradise, but talked myself out of it because there was a group of women sitting on other rocks close by. Embarking on this adventure would also require some nifty manoeuvring over rocks below the surface of the water to get to the deeper part of the pool, in which swimming would be possible.
I then started to ponder the circumstance I was in, and how I never would have come to this place had our day gone to plan. We would have likely had a beautiful kayak across the glistening lake, and a lovely afternoon. But we would not have been at these falls. At this natural pool. I would not have had the opportunity to swim in this tantalising water from which I had drank, satisfying my parched lips and my thirst for a special moment.
I looked over at Kathleen and said “I’m going for a swim”. I realised that these opportunities do not come along very often. The weather was perfect, the water was perfect, and the moment was perfect. I seized it. Thomas and my 4-year-old daughter Anya wanted to come, so we started navigating the rocks toward the pool, holding hands. I had never seen Anya so confident on such a tricky surface. She looked like a natural in the stony environment.
We finally made it to somewhere we could stop and leave our hats and flip-flops, and I plunged into the pure, rippling water. The sunlight majestically twinkled on the perfect surface, and I glided across the pool, happy as a newborn duckling on its first swim. Thomas and Anya watched from the side. The group of women, and a few other people who had arrived since, smiled over at me and waved. Kathleen took some photos, and the kids giggled with delight. I was in a little piece of heaven for those 15-20 minutes. I felt cleansed, happy and alive.
John Lennon sang the words “Life is what happens to you while you’re busy making other plans”. Nevertheless, wonderful things that were planned do happen to us. Those fantastic holidays. Our wedding day. The birth of our children.
But we must also remember that amazing things happen that weren’t planned. In fact, some of the most memorable days in my life were not planned. The day at Eagle Falls. That day in Vanuatu when we stumbled accidentally across the most beautiful beach we’ve ever been to. That day I bumped into an old friend and ended up spending the day laughing with them rather than doing the things I was intending to do.
Let’s make sure we allow plenty of time in our lives to head off the beaten track. To drive the open road. To immerse ourselves in the uncertain. Maybe, just maybe, something truly wonderful awaits us.
This is a concept I devised a couple of years ago, and it seems there is a new #NoEstimates audience that would like to know more about it.
A Slicing Heuristic is essentially:
An explicit policy that describes how to "slice" work Just-In-Time to help us create consistency, a shared language for work and better predictability.
Crucially, the heuristic also describes success criteria to ensure it is achieving the level of predictability we require.
The Slicing Heuristic is intended to replace deterministic estimation rituals by incorporating empirical measurement of actual cycle times for the various types of work in your software delivery lifecycle. It is most effective when used for all levels of work, but can certainly be used for individual work types. For a team dabbling in #NoEstimates, a User Story heuristic can be an extremely effective way of providing empirical forecasts without the need for estimating how long individual stories will take.
However, if you are able to incorporate this concept from the portfolio level down, the idea is that you define each work type (e.g. Program, Project, Feature, User Story, etc.) along with a Slicing Heuristic, which forms part of that work type’s Definition of Ready.
"A feature ready to be worked on must consist of no more than 4 groomed user stories"
“A user story ready to be worked on must have only one acceptance test”.
The success criteria will describe the appropriate level of granularity for the work type. For example, you might want user stories to take no more than 3 days, and features no more than 2 weeks.
Here is the really important part. The idea is not to slice work until you estimate it will take that long. You never explicitly estimate the work using the Slicing Heuristic. Instead, as the work gets completed across the various work types you use the heuristic(s) to measure the actual cycle times, and then inspect and adapt the heuristic(s) if required.
At the user story level, I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less. However, there are alternatives. Instead of acceptance tests you could use e.g. number of tasks:
"A user story must have no more than 6 tasks".
Here is an example Slicing Heuristic scenario for a Scrum team using the feature and user story heuristics described above:
- Product Owner prioritises a feature that she wants worked on in the next Sprint
- PO slices feature into user stories
- If feature contains more than 4 stories, it is sliced into 2 or more features
- PO keeps slicing until she has features consisting of no more than 4 user stories; they are now ready to be presented to the team
Note: Unless this is the very first feature the team is developing, the PO now has an estimate of how long the feature(s) will take, based on historical cycle time data for the feature work type; no need to ask the team how long it will take
- In Sprint Planning, team creates acceptance tests for each user story
- If there is more than 1 acceptance test, story is sliced into 2 or more stories
- Team keeps slicing until all stories consist of only one acceptance test
PO now has an even more reliable forecast of when the feature(s) will be delivered because she can now use the user story cycle time data in conjunction with the feature data
- Team delivers each story, and records its cycle time in a control chart
- If a story is taking longer than 3 days, it is flagged for conversation in Daily Standup
- Multiple outliers are a sign that the heuristic should be adapted in the Sprint Retrospective
- When the feature is delivered, its cycle time is measured also
- Again, if features are taking longer than is acceptable for the heuristic, the heuristic should be adapted to improve predictability (e.g. reduce maximum number of user stories per feature to 3)
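The outlier-flagging steps in the scenario above could be sketched like this. The 3-day threshold and the 20% adaptation trigger are illustrative assumptions for this example, not part of the heuristic itself:

```python
def review_heuristic(cycle_times_days, threshold_days=3,
                     max_outlier_ratio=0.2):
    """Check completed-story cycle times against the slicing heuristic.

    Returns the cycle times breaching the threshold (candidates for a
    Daily Standup conversation) and whether the breach rate suggests
    adapting the heuristic itself in the Sprint Retrospective.
    """
    outliers = [t for t in cycle_times_days if t > threshold_days]
    adapt = len(outliers) / len(cycle_times_days) > max_outlier_ratio
    return outliers, adapt

# Illustrative control-chart data: days per completed story.
times = [1, 2, 3, 2, 5, 1, 2, 6, 2, 7]
outliers, adapt = review_heuristic(times)
print(outliers, adapt)  # -> [5, 6, 7] True
```

The same check works at the feature level by swapping in feature cycle times and a 2-week threshold.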
The title is somewhat parodic. I hope it is taken in the intended spirit
The reason for the choice of title is that this post is a response to Bob Marshall (@flowchainsensei)’s “Eight Ways Customer Value Is Killing Your Business“, which itself was a rejoinder (at least in part) to a tweet I made the other day:
"Everything starts w/ understanding our customers & their problems. Economic value for an org is created by focusing on this simple premise." ~ Neil Killick (@neil_killick)
I will address some of Bob’s comments directly:
"A blind faith in the idea of “customer value” can cause many more problems than it solves."
Not sure where this quote is from, but I see it as a straw man statement. I don’t think blind faith in an idea solves any problems at all, let alone fewer than those it causes. Blind faith in any idea is damaging, and silly. So let’s move on from this.
Some folks seem to believe in “customer value” like it was the New Church.
The idea appears to have transcended logical enquiry and consideration, and become some kind of sacred cow. So be it. I do not subscribe. I guess that makes me an apostate.
These comments seem needlessly disparaging towards people who dare to put the customer at the forefront of what they do in their work. Even if a focus on customer value were somehow economically unsound, or otherwise ineffective, it is at least a noble focus, given some of the alternatives (profit, efficiency, utilisation, world domination, etc.)
My view? I see organisations that focus on customer value putting their business in jeopardy. Of course, there are numerous other ways to do that, too. But this particular path seems deeply ironic, given the number of self-styled experts who hail “customer value” as the salvation of business.
As does the scornful tone. Who are these jeopardous, ineffective companies focusing on customer value? Who are these “self-styled experts” teaching business owners how to save their business by turning their attention to, god forbid, their customers?
So, here are eight ways in which an incautious and credulous emphasis on “customer value” can undermine business success:
1. If you don’t mean it
Well, yes, of course. An “incautious and credulous” emphasis on anything can undermine business success.
Further, if we say one thing and do another, focus on the first thing isn’t actually happening. Is Bob implying here that by not meaning the thing we say we’re focusing on, we’re focusing on the wrong thing?
I’m pretty sure this is an incorrect assertion.
2. What about everyone else?
OK, this is a fair question. If we focus purely on “customer value” then we may let down internal stakeholders, and create a shitty environment for our colleagues. That is, if we indeed focus purely on customer value.
But that’s not what I meant in my tweet. Let’s revisit it:
"Everything starts w/ understanding our customers & their problems. Economic value for an org is created by focusing on this simple premise."
I am asserting that there is a simple premise on which to focus. What is that premise? Not actually “customer value“, but instead that “everything starts with understanding our customers and their problems“.
Some may see these as the same thing, or the difference subtle. I see them as very different.
The work we do for an organisation, whatever kind of organisation it is, must deliver some kind of value for that organisation, if indeed we wish the organisation to be effective at meeting its goals.
This value can take many forms, and these forms will largely depend on how the business is currently performing and its corresponding strategic focus.
This is always true, whether we have customers or not.
If our organisation is one that makes money by having customers, we must provide something that our customers or potential customers do not already have. This could be a revolutionary product, or simply serving a need more completely than a competitor product does.
Either way, the only path to economic viability and being a sustainable, customer-facing business is to identify the problems customers and potential customers have achieving their goals, and which of these are valuable enough for our business to spend time and money solving.
Or, to put it another way:
There is no point delivering customer value if we are not delivering business value.
Focusing on understanding and solving customer problems is not the same as focusing only on customer value and ignoring everything else. The latter is plain silly. We can’t sustain a business if we are making customers happy at the expense of our employees’ welfare, or our stakeholders’ sanity.
Employee and stakeholder needs must always be considered when determining business value.
Business value must always be considered when determining customer value.
The concept of meeting the needs of stakeholders, employees and customers is not — or at least does not have to be — at odds with a focus on understanding customer problems.
On the contrary, I see understanding customer problems as the most effective driver for delivering business value, happy customers, employees and stakeholders.
Number 3 in Bob’s list:
3. Narrow Definition of “Customer” and “Value”
Who do you regard as your customer? How do you decide what is of value to them? Do you define customers as (just) those folks that sign the cheques? And do you define value in terms of simple hard cash? If so, what about all those other folks who suffer your goods and services without a voice? And what about their (non-cash) experiences?
Yup. If we have a narrow definition of “customer” and “value“, we can hurt our business. But having a customer-focus does not imply we have a narrow definition of customer and value. We might have that whether we focus on the customer or not.
4. Confusion of Value Disciplines
Michael Treacy and Fred Wiersema describe three generic value disciplines in their book The Discipline of Market Leaders (1997): operational excellence, customer intimacy and product leadership. They go on to make the case that any given business can and must focus on just one of these three disciplines. Many organisations have yet to realise this.
I’m not sure how being customer-focused is at odds with this. Again, if you cannot discern what business value means for your business at any given time, you are in trouble.
We should only solve customer problems that deliver business value.
If we spend time determining and solving customer problems that do not deliver business value then we will not be as effective as we can be, or we might go out of business. That business value might be derived from, at any given time, operational excellence, customer intimacy or product leadership.
But each of these things only matters a jot if we have customers with problems to solve.
5. Unintended Consequences
In his book “Obliquity”, John Kay makes the case for approaching one’s goals obliquely. Rushing headlong at “customer value” can often result in many unintended consequences. A more indirect approach, such as providing value to customers by building an organisation or workforce with the capability to do so “baked-in”, and evolving continuously, can avoid many of these unintended consequences.
“Rushing headlong” into anything is silly, even if that thing is “customer value“. When I talk about being customer-focused, I mean the systemic, “baked-in” and “evolving continuously” kind that Bob speaks of, not the oblique, vacuous way.
6. Choosing the Wrong Kind of Value
Bob argues that there is a “right kind of customer value“, and that it is unlikely that folks will deliver on that.
Well, firstly, yes. If we want to deliver customer value, we ought to mean the “right kind“. The holistic, long term, effective kind. Not the short-sighted, ineffective kind.
Secondly, it is very important, nay critical, that we work on the system such that folks understand the business goals, and how solving particular customer problems pertains to these goals. If we do not have aligned goals across our company, we will be ineffective, regardless of whether they are customer-focused goals.
Such a dysfunction is not caused by being focused on understanding and solving customer problems.
7. Conflating Means With Ends
In his book The Goal, Eliyahu Goldratt asks the fundamental question “Why are you in business? What’s your goal?” Having happy customers is a means to a commercial organisation’s goal, not an end in itself. Yes, even a necessary means (see: Necessary But Not Sufficient). But not sufficient.
Focusing on “happy customers” is not the same as “understanding customer problems“. Regardless, neither of these things is sufficient for a profitable, sustainable company. But there is no one single thing that is sufficient on its own, so this is another straw man statement.
The fact remains:
An organisation’s commercial goal can only ever be reached by solving customer problems that derive enough business value.
Our revenue, and consequently profit, is generated by our customers. If we have no customers, we have no revenue. No profit. The buck for our success starts and stops with having enough customers forking out enough of their cash in return for us continually solving their problems.
8. Strangles Innovation
Focusing blindly on customer value can drive short-termism in the organisation, because the connection between longer-term investment in e.g. innovation and the customer value of such proposed innovations is often hard to see.
This is an interesting one. The short-termism Bob speaks of is certainly a risk, but I would hardly cite customer-focus as a primary cause of this innovation strangulation.
If anything, what I see more frequently is an obsession by folks in senior management to solve the company’s problems (e.g. cost-cutting, how can we acquire customers, how can we get people to come back, etc.) rather than the customer’s problems.
I see this reverse tactic as far more damaging than focusing only on solving customer problems.
While, as I’ve described, a pure focus on customer value can be ineffective, it sure as hell stands a better chance of effectiveness than focusing on something that isn’t about the customer.
The #NoEstimates debate/movement is in its 3rd year. Many of the principles upon which it is founded were being used by practitioners many years earlier. The Agile Manifesto, along with its statement “Customer collaboration over contract negotiation”, is approaching 13 years old.
I feel it’s time to start addressing the “barriers to entry” of #NoEstimates. I (and others) have banged on enough about the whys and wherefores of not estimating. Let us now start discussing ways of solving the problems that force us to estimate, or at least give us the perception that we must.
I’d love it if the Agile community would join me!
THE DIFFICULT 2nd ALBUM
In order to be able to work this way, there are fundamental impediments that we must remove.
And one of those is the traditional software contract. It may be the most important of the impediments, particularly when we are talking about doing work for external customers.
The way such contracts are negotiated and executed remains largely driven by up-front thinking such as plans, schedules and estimates, and tainted by paranoid contingencies for failure to deliver things to the letter.
Both #NoEstimates and Agile promote continuous value delivery, i.e. to always be building the next most valuable thing for the customer, and delivering this in rapid vertical slices. With such a desirable working agreement in place, the need for a fixed price contract built around an up-front estimate of everything to be delivered is negated.
But in order to be able to work in such a world we need to build trusting, flexible and collaborative working relationships with customers. Some have made attempts to create more “Agile” contracts, but legal constraints coupled with a resistance to changing long-established practices often render these as not hugely different in essence from traditional contracts.
That is, most of the attempts I’ve seen at Agile contracts remain constrained by the legacy of analysis, design, development and testing phases, a focus on delivering agreed scope (output) rather than solving a problem (outcome), and a desire to try and eliminate uncertainty rather than embrace it.
SO, WHAT NOW?
I believe we need something more radical, and I would like to explore this with any interested folk. To my mind, the whole premise behind the traditional software development contract – nailing down the detail of what will be delivered, when and for what price, and covering both parties legally should things go wrong – needs to be addressed.
For customers to gain the benefits of Agile, and for suppliers to be able to deliver with Agile, working agreements and relationships that truly embrace an iterative, incremental and emergent approach to building software need to become the norm.
Working agreements that start from a position of trust rather than distrust. Working agreements that embrace “Here’s what we will build together” over “Here’s what you must deliver or we’ll sue you”. Working agreements that allow parties to use empirical process control to manage risk and uncertainty rather than up-front predictive models that make significant change impossible.
BEYOND CONTRACTS, TOWARD AGILE WORKING AGREEMENTS
How would I structure working agreements with customers that are compatible with Agile and #NoEstimates principles?
I think we need to move toward a model that embraces the following:
- All work is custom work
- Setting the expected level of quality for the customer’s budget or desired timeframe using an approach akin to a designer’s portfolio
- Focus on “build the right thing” rather than “build the thing we think is right”
- Replacing fixed price with incremental pricing (including “cut the cord at any time”)
- Replacing fixed scope with required outcomes (which we are happy to change)
- Reducing risk by using true iterative development cycles (Build – Measure – Learn) and empirical process control to evaluate progress toward outcomes
- The only thing nailed down up-front is an agreement that the supplier will continually deliver the most valuable small increment of product to the customer
- Don’t just welcome change – embrace changing requirements, via a change of mind or emergent learning, for the customer’s competitive advantage
DON’T BE CONSTRAINED BY THE CONTRACT
I’d like to explore this topic by starting with the following assertions:
- Traditional software contracts render attempts to deliver in an Agile way moot
- Empiricism and iteration are extremely beneficial to delivering successful projects, but are absent in both traditional and (previous attempts at) “Agile” contracts
- Agile working agreements must allow parties to benefit from the risk management and other advantages that empiricism and iteration provide over scope/price-driven contracts
- Existing/previous ideas for Agile contracts don’t quite fit the bill
Who would like to join the discussion?
Read about it on Redbubble’s tech blog.
Was a great 90 minutes of learning
Recently I’ve been helping out a friend with some questions about how he and his team can deliver value more quickly in his organisation, which works with a traditional SDLC.
He describes the process as “pretty much solid waterfall” and finds it hard to see how Agile methodologies would allow them to deliver benefit early. Most of their projects are, as he describes it, like an “iceberg”, with huge amounts of data integration and enrichment invisible below the surface before the internal customer finally sees the “tip” with a report or a dashboard/OLAP cube.
The requirements are fixed, and the customer is not interested in seeing partially finished solutions. They want it ALL, or it is not deemed valuable to them.
My friend envisages delivering multiple sprints with the customer getting decidedly impatient because they still can’t see anything useful. And then eventually in one of the later sprints the customer finally gets their dashboard. This doesn’t really solve their problem.
He is also interested in the #NoEstimates debate as the team seems to spend a lot of time estimating effort and, other than the estimates being used to raise a purchase order, the process doesn’t really make any sense to him. He wants to consider alternatives that can be used to give some scale to a project and enable the customer to raise the purchase order to get the work authorised. He wonders if this might be difficult to implement as the organisation is so cost (money) focused, but he at least wants to be able to float a few ideas.
He says there was a request from upper management for the delivery team to be more “agile” but, given the lack of desire for iterative/incremental delivery from anyone outside the team, this request seems to imply the old classic “we want our team to be agile” meaning “we want our team to deliver more quickly but don’t want to bother ourselves with sprint reviews every 2 weeks to give feedback on what they’re building“.
I explained to my friend that these kinds of cultural issues are not easily or quickly resolved. It sounds clichéd, but Agile transitions are a journey and require buy-in across the organisation that things need to change, and a willingness to be open to doing so. I told him that he will find himself in a frustrating bind if he tries to take a bottom-up approach to agility. Those who are failing to see the value in an early and iterative approach to the delivery of features are the ones who need to understand the benefits before any lasting improvements to effectiveness can be made.
This feeds into estimation too. If he can’t work iteratively, delivering in small chunks, he can’t really empirically measure progress toward the goal, leaving him with no choice but to be predictive and deterministic. If he can’t measure progress it’s difficult to judge whether he’s “on track”, and he will find himself continually estimating remaining work in order to update the project’s status.
So what can my friend do in this situation?
One thing he can do is to try and continuously deliver vertical slices regardless of whether the customer is interested or not. Put the system into a production-like demo environment. If the customer doesn’t want to look at it, no worries. It will at least allow the development team to iterate in their thinking over the design of the solution, and to measure their progress against a T-shirt sized backlog of work.
Something along the lines of:
- Here’s the backlog of work that needs to be done (try to make this goal-based rather than solution-based; solution can be agreed Just-In-Time)
- If absolutely required to approve the purchase order before the work starts, give an up-front estimate range based on how long other similar projects took, and on John Maynard Keynes’s premise that “it’s better to be roughly right than precisely wrong”
- Split “stories” into S, M, L and XL by comparing each to one another (relative sizing) rather than trying to determine how long something will take, i.e. “A seems bigger than B, C even bigger, D is about the same as C”, etc.
- Build one story from each size bucket (while delivering vertical slices if possible) to start building empirical data about how long stories within each bucket might take; from this a predicted end date can be extrapolated
- Put every new story that emerges into one of the size buckets by comparing it to a story that has not yet been built
- The “not yet been built” bit is important so that estimators don’t get influenced by how long a particular story took
- There will be natural variation around how long things will take, so to reap the benefits of relative sizing and empirical process control it is important not to judge the size of a task based on the result of a statistical outlier
- I wrote a blog post about this recently
- The more stories that are delivered the better the data will become – it will take several stories before a settled delivery rate and thinner cone of uncertainty is achieved
- Be transparent: update the data and the stakeholders regularly, and warn early if the data is showing a high risk of not delivering within the allocated budget, so that appropriate steps can be taken and there are no surprises late in the game
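The bucket-and-extrapolate steps above can be sketched with a small simulation. Everything here is hypothetical (the per-bucket durations, the backlog, the 10%–90% range); it is one possible way to turn per-bucket history into a “roughly right” forecast range rather than a single precise number:

```python
import random

# Hypothetical observed durations (in days) per T-shirt bucket,
# gathered empirically by building one or more stories from each bucket.
observed = {
    "S": [1, 2, 1.5],
    "M": [3, 4],
    "L": [6, 8],
    "XL": [12],
}

# Remaining backlog, expressed only as relative sizes (no time estimates).
backlog = ["S", "S", "M", "M", "M", "L", "XL"]

def forecast(backlog, observed, simulations=10_000):
    """Monte Carlo forecast: in each simulation, sample a real past
    duration for every remaining story and sum them, then report a
    range of likely totals instead of a single-point estimate."""
    totals = []
    for _ in range(simulations):
        totals.append(sum(random.choice(observed[size]) for size in backlog))
    totals.sort()
    # 10th and 90th percentiles: "roughly right" bounds.
    return totals[int(0.10 * simulations)], totals[int(0.90 * simulations)]

low, high = forecast(backlog, observed)
print(f"80% of simulations finish between {low:.0f} and {high:.0f} days")
```

Because the forecast resamples actual deliveries, it automatically narrows (the “thinner cone of uncertainty”) as more stories are completed and the per-bucket samples grow.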
I’ve written previously about the “barriers to entry” for working with #NoEstimates, and my friend is certainly encountering some of these barriers. However, there are always steps we can take to improve the way we do things, and sometimes these improvements influence others in a positive way.
It’s easy to just accept the perception that “they don’t want to change“, but mindful action in baby steps toward better can make a huge difference.
Here is an excerpt from an email I received from a nice chap recently:
Finally I took the time to read all your posts on #NoEstimates and I must say it was really good. One thing that I didn't understand was how you handle the start of a project with no time or budget constraints? Say we have a customer who wants to build a flight booking system and are now wondering how much it costs and when it will be done? They have no desired delivery date but asap is good and they need to know how much money it will cost so they can give a budget proposal to their boss. Here it seems like we have no real constraints to use so how would you handle this situation with #NoEstimates? Also another situation that the customer maybe want this and say they have a budget for $10,000 then we "know" it's too little to get anything useful. Is it "ok" to say that "estimate" based on our experience?
I thought it would be interesting for my other #NoEstimates readers to turn my answers into a short blog post.
To the first question: I would be surprised if the customer genuinely doesn’t have a budget constraint. If we have built flight booking systems before, I would tell them how much those ones cost, to give them options. I would want to know from the customer how much they want to spend.
If we haven’t built flight booking systems before, I would be wondering why the customer has chosen us over someone who builds flight booking systems! If they really want to work with us then I would be explaining the way we work with our customers, i.e. an iterative, collaborative approach with flexible pricing, but I would insist on knowing their maximum budget.
If we want to be in the business of building flight booking systems then I might take a hit on the first customer, i.e. if after working with the customer closely for, say, 2 weeks we realise that we can’t possibly build a flight booking system for their budget then they can look elsewhere, at no cost to them. It will be valuable learning for us because now we’ll know how much we *can’t* build a flight booking system for, given our capacity.
Obviously there’s no one right answer to this, and there are so many “it depends” variables, but this is one approach. We need to learn our capabilities by doing stuff and making empirical judgements, not guess how much we can do and by when.
To the second question, I’ve probably already answered it. If we’ve never built a flight booking system for anywhere near $10k then we “know” we can’t do it, or at least can’t guarantee it, given our current capacity. Call this an estimate if you like. If we’ve never built one at all then my answer above applies.
I always find it interesting that in software we tend to take the reverse approach to budgeting from the one we take with our purchasing decisions in everyday life. Instead of deciding our budget based on how much we have available, or are willing to spend, we decide it using the supplier’s estimate of how much the software we want will cost.
Or at least we give the illusion that’s how we are deciding it. In reality, I believe we typically know how much we want to spend. We are simply looking to the cheapest bidder in order to try and save money or squeeze as much as we can for our real budget.
My view is that we will come to much better outcomes for our customers, and ourselves, if we actually work iteratively and collaboratively with the customers’ real budgetary and/or time constraints.
One of my frustrations as a software practitioner is our seemingly programmed human bias toward keeping the status quo.
I guess it wouldn’t be so bad if the status quo was actually something approaching effective, inspiring or at least motivating. But unfortunately the reality for many (most) people making their living in the crazy (in a bad way) world of software development remains one of boredom, dysfunction, wasting time on unimportant things, going along with stupid decisions (or lack of them), stress, hatred of Mondays, being put in our place by our “superiors”, et cetera, et cetera.
23,858 tweets and counting. Worthwhile or a colossal waste of time?
I tweeted this yesterday. Often I wonder why I stay in an industry that suffers from the afflictions listed above. My work mood swings from utter dejection to tremendous elation. Like the software we create, the variability in my mental state is subject to wild fluctuations.
Here’s the thing. The reason I do this; the reason I stay in the industry, tweet opinions, tips and debate; the reason I write these blog posts; the reason I give a significant portion of my time freely, mostly at my own cost, to talk at meetup groups, conferences and company brown-bag lunches; is…
Because I want to play a small part in creating a better world of work for those involved in software development.
Particularly developers, who I believe have been treated for years like some kind of underclass in organisations of all sizes and industries. Crammed like sardines into some dark, dingy corner of the building, given to-the-letter specifications of some crappy software system that will keep them busy for a few months and then will never be used by a soul. Forced to commit to an estimate of how long this will all take (minus whatever needs to be trimmed off because the estimate doesn’t fit into the already agreed timelines). Constantly being micro-managed and asked “why is this taking so long?” and “why is this so hard?”.
Yes, I’m angry about this. And I want things to change. So I’m trying to do that in my own little way.
I want us to start treating smart, motivated people with the respect they deserve – right from the moment we hire them. Why on earth companies put engineers through 3 or 4 rounds of interviews and then fail to actually trust them once they get the job is beyond me. Managers continue to spoon feed solutions to their subordinates because they “can’t be trusted” to solve business problems quickly and efficiently enough.
This is why I am challenging the status quo in our industry. Some people find what I write or say provocative. One-dimensional. Context-less. “It depends on the context”, people say. “There’s no one right way. No advice is universal.”
I get disappointed (sometimes annoyed) when people who have never met me and know nothing about my professional reputation and abilities confuse what I tweet as “professional advice”, and then start questioning my integrity and ability as a consultant. It is hypocritical and way off the mark.
The reason why people write blog posts with provocative titles, and tweet with controversial hashtags, is because it is interesting. It invites conversation and debate. It stirs things up a bit. God knows (and so should the rest of us) that this industry is in dire need of some stirring up.
I was questioned by a couple of people about a tweet I wrote recently:
In fact my tip is NEVER do a MoSCoW prioritisation. The implied fixing of scope makes change very difficult. Order things instead. #Agile
A tweet, I might add, that was retweeted dozens of times, so obviously resonated with many.
I was told that my opinion was “unjustified”. That I shouldn’t make “categorical statements”. That “never is a long time”. That some poor soul may take my advice (assuming a tweet constitutes professional advice?!) and destroy a project because I am uninformed about their “context”.
I am constantly told the same kind of things about the #NoEstimates debate. That I can’t tell people not to estimate because I don’t know their context. Their boss might need estimates. Sometimes we need them, sometimes we don’t. Et cetera, et cetera.
With all due respect to these people, they are completely missing the point. For a start, I think it’s ridiculous to suggest that people would read a tweet from little old me and that would somehow create a chain of events that would destroy a project. Even if I were someone with anywhere near the influence and expertise of the great Ron Jeffries or Kent Beck, I don’t think I would wield that kind of power over people.
I do not use Twitter to dish out free professional advice. It is a forum for opinion, conversation and debate. Well written tweets resonate with people in some way, such that they retweet them, favourite them or, preferably, start conversations about them.
Perhaps reading a tweet like the one above will encourage someone to think a bit more about a practice that they have always done without question. To look into alternative ways of organising and prioritising work. To completely reject what I’m saying. Good tweets create a reaction, and whether this reaction is an angry disagreement or a nodding of the head, it has done its job.
Twitter is not to be taken too seriously, but the conversations it can create are serious and, I believe, are helping us as an industry to increasingly question long established practices. This can help us improve the way we work. The way we think. It is vitally important for us to have our world view challenged on a regular basis. This is how we learn and evolve.
I don’t just want to read tweets saying that “it depends on context”. Stuff that confirms my world view. Stuff that I agree with all the time. If every piece of advice or opinion “depends on context” then we might as well just give up trying to improve things.
Depending on your context, you might want to consider alternatives to MoSCoW prioritisation. However, if it works for you then fine, just keep on doing it.
Politically correct, perhaps, but it’s not exactly going to give me a reaction. I’ll probably not even notice that tweet on my timeline. “Be happy”. Ooh, can’t say that, it depends on context.
Moving away from social media for a second and into the real world of professional coaching and consulting – As Agile coaches I believe we can do much, much more for our clients. If someone tells me that I’m being unprofessional for suggesting better alternatives than MoSCoW then we are on different planes, I’m afraid. I know that there are certain principles and practices that have proved effective for me time and time again.
I’m not alone on this. I believe some statements are universally applicable, regardless of context. Questioning the way we do things doesn’t depend on context. Respecting each other and striving to work more collaboratively doesn’t depend on context. Adopting good engineering practices will help you to deliver incrementally and iteratively at a constant pace over time – this is universally applicable also.
Of course context is important – to me that’s so obvious that I can’t believe people keep saying it. We know that. It goes without saying.
But it’s not the point. The point is that many, many companies are still struggling to grasp the principles and practices that we in the Agile and Lean community know can increase effectiveness. Our clients deserve better advice from us than “well, if that’s working for you then keep on doing it”. We all know that something “working” is a perception and may actually be destroying the morale of the employees, or even putting the business as a whole at risk.
It is not “professional” for us to keep playing the context card. We need to be bold in our decisions and advice giving. Take risks. Challenge the status quo. Encourage innovation, not just of products but of process also. Be a true change agent, not just blend into the environment.
If you like what I tweet and blog, that’s wonderful so please do keep following! If you don’t like it, please unfollow. Twitter is wonderful because it is the ultimate pull system. If we don’t like what we see we can block and unfollow. We can filter out content that doesn’t interest us. It’s brilliant. And I shall continue to use it to challenge, provoke and generate conversation and debate. I cannot begin to measure how much I have learned and evolved my thinking thanks to conversations on, or starting on, Twitter. I’m pretty sure others will say the same.
And I will continue to help clients, in their context, get better whilst trying to create happy and humane workplaces. I want to live in a world where people enjoy going to work. It’s time away from our family and friends, and we spend most of our time there, so for God’s sake if we’re not enjoying it then what are we doing?
I don’t get it right all the time. Probably not even most of the time. But I do this because I care. I will continue to risk getting lambasted by people and losing the respect of gurus and experts. Like the rest of us, I don’t know it all – far from it. But I do not learn by being uncontroversial and not pushing the boundaries of what I believe or how I think things should work.
Thanks for listening
Note: I will write a follow-up post about MoSCoW prioritisation itself. Aside from the fact that it perpetuates the myth of “requirements” (if something is not a “must-have” then how can it be a requirement?), I’m not including my further ideas on the topic here because it’s not really what this post is about.
Many have already written about the damage it can do and some better alternatives to set you on the road to delivering a successful project (read building a successful product). For starters, Joakim Holm wrote a great post about it the other day. And there’s lots more to investigate using our friend Google!
I am privileged to publish this guest post from the brilliant and lovely Michael Rembach (@mrembach).
In October I stumbled across a blog article about product development using Scrum and the hindering effect that Scrum can have on the innovation process, especially if the organisation is fully ‘agile immersed’. The article was written by Brian de Haaf (@bdehaaff), co-founder of Aha! – a product management software company. While the article was well written and brought up many salient points about innovation, I disagree with the overall premise that Scrum has innovation-limiting behaviours. You can read the original article here: zite.to/17HnE4S
The first thing I’d like to point out is that I agree with the points about innovation in the article. Innovation practices, such as having a shared vision, engendering trust in your organisation and having a strategic direction, are all vital ingredients for success, and even more so in technology companies. The thing about innovation is that it’s a cultural thing, and no framework/methodology/philosophy in the world is going to make your company innovative without the desire (or need) to be. Having a myopic view of your product because you’re ‘Agile’ misses the point of the delivery focus and discounts the innovation-enabling practices that Agile encourages.
Scrum, and other Agile methodologies, are essentially delivery focussed which is why there is a requirement for product owners to focus strongly on the Sprint cycle and the short-term delivery timeline that it brings. However, this does not and should not excuse the product owner for not checking that what is being delivered is aligned to the strategic goals for the product or in fact, the organisation. The two aren’t mutually exclusive and a product owner is responsible for communicating that vision to the project team so that they are aware of the purpose of the product. Constantly checking in with the vision by all the team should ensure that what is being built doesn’t deviate from the intention of the product’s purpose. The product owner is simply not performing her role properly if she suffers from the myopic concern with delivery-cycles without also ensuring that the product is meeting its intended strategic objectives.
Rather than inhibiting innovation, I posit that Agile has a number of practices that encourage innovative behaviour:
- MVP – the primary reason for creating a minimum viable product is to determine that what you’re trying to produce is viable, but it also serves a couple of other important purposes. The first is prototyping: you have the opportunity to experiment with your solution, try something small and novel, and see if it works. The second is feedback: it gives you the opportunity to solicit input from your clients, the product ecosystem and anywhere else. This is a primary source of knowledge for decision-making.
- Fast-failure – Agile methodologies allow you to fail quickly and learn some valuable lessons before it costs you too much. Innovation is all about finding out new ways to do things and failing fast and safely is one of the best ways to forge new paths.
- Continuous learning through retrospectives – a learning organisation is an innovative organisation and retrospectives provide an excellent opportunity to improve not only what we are producing (again, you can look at the strategic alignment at the end of every sprint or release cycle), but also how we work together.
- Embracing change – if making changes to your product is painful then your ability to be innovative will be too. Agile methodologies accept that change is inevitable from the get go and therefore provide less resistance to innovating during the development of a product.
Innovation is difficult at the best of times. As Clayton Christensen illustrates in his famous The Innovator’s Dilemma, history is filled with the burnt-out shells of successful companies that died as a result of not being able to change. To succeed, innovation needs to be part of the organisation’s culture. The premise that progressive, change-embracing frameworks like Scrum inhibit innovation does not recognise these aforementioned practices. Agile won’t make you innovative, but it sure can help encourage it.
Everyone had fun and was intensely engaged throughout. There were loads of interesting dynamics emerging from the teams, perhaps surprising given the contrived nature of the experiment.
- We set up three same-sized (10-12 people) teams, each with:
- an identical jigsaw puzzle (way too big to be completed)
- a Product Owner (to provide the vision and direction) and
- a Scrum Master (to help the team achieve the PO’s vision)
- We opted for 3 * 15-minute iterations, with 3 minutes for a Retro in between
- Each team was told to use a different method – one was a Scrum team, one was a “mob team” and one was a “no rules” team. Here’s what that meant:
Scrum team
- Must have Planning (including estimation), Review and Retro in each iteration
- We provided Planning Poker cards for the estimation but the team was free to choose whatever estimation method they liked
- Must only work on “stories” agreed in Planning – new stories can’t be introduced mid-iteration
- Stories are only “done” when PO accepts them (in Review or before)
Mob team
- No formal ceremonies required
- Team all works on one story at a time until “done” (single-piece flow approach)
- No estimation
- Retro encouraged but not “enforced”
“No Rules” team
- Can work like the Scrum team, the Mob team, any combination of the two, or any other way they like
Results
- Scrum team delivered most stories (3; the other teams delivered 2 each)
- Whole group was asked to vote on which they thought was the best outcome
- “No rules” team won (emphatically)
- Scrum team lost
Here are some empirical observations of the evening’s events and outcomes, along with my interpretation of what they indicate in an Agile/#NoEstimates context (==> in bold italics underneath the observation).
Scrum team
- Delivered most in terms of stories but least in terms of value, both for their Product Owner and as voted for by the wider group
==> Output <> Value
==> Comparing teams in a useful way would require consistent measures of both effort and value velocity across teams
- Spent far too large a proportion of time (particularly the first iteration) in planning, and needed to be alerted to this fact
==> Consistent timeboxing is important to ensure there is time to do all that is required, and for less variability of outcomes
- A member of the team openly admitted that he inflated an estimate because he did not agree with the value of the story that the PO wanted to do next
==> Estimates are often gamed, and for various reasons
“No rules” team
- Implicitly chose not to estimate, but instead to maximise the time they had for building
- Eventually delighted their Product Owner (and wider group), but during the game the PO felt like:
- The approach to delivery was too ad-hoc, even chaotic, especially at the beginning
==> Teams must collaborate in order to be co-ordinated, improve and deliver the right outcomes
- Stories were too large (epic) so delivery all happened near the end rather than incrementally
==> Smaller stories have lower variability and can help with early and frequent delivery, creating better predictability for PO/customer and lessening the need for estimates
==> Larger, higher variability stories rely on estimates of time, or at least relative size, to provide the illusion of predictability
- Started with no process at all but this was deemed unproductive (with such a big team), so they split into smaller teams with focused goals
==> Smaller teams are more effective because it is easier to collaborate, change direction, gain consensus, etc.
- Scrum and Mob team both delivered purely incrementally (concentrating on edges) rather than iteratively (identifying a recognisable area of interest and building upon it), although stories were clearly too big
==> An iterative approach is crucial for risk management, predictability and delivering the right thing (value), i.e. without such an approach you have no choice but to estimate
- Product Owners all felt like they weren’t being listened to – this had particularly bad consequences for the Scrum and Mob teams, perhaps due to their purely incremental approach
==> Important for all team voices to be heard, especially given the PO is driving what should be built in order to deliver on the vision
As with many simple and now commonplace “Agile practices”, debates still rage on about the Daily Standup (Scrum) meeting, a meeting which has somehow become a ritualistic signal that a team is “Agile” but is often an equally conspicuous signal of the exact opposite.
I’ve been in many organisations where God forbid anyone asks whether we should get rid of the meeting, or even change it, despite the fact that no one is getting any value out of it every single goddamn day*.
*Except some managers. A daily status update meeting? Terrific! The Daily Standup is an opportunity to micro-manage people every single day without having to approach their desks!
I digress. The point is, people still question the value of the Daily Standup and, if it is indeed valuable, how we might make it more effective.
I share the view of the Scrum Guide on this – at least in what the spirit of an effective Daily Standup meeting is, if not necessarily the prescribed format.
An effective Daily Standup meeting, for me, is one in which the team inspects and adapts both product and process.
That is to say it is an alignment meeting. A daily planning meeting. An opportunity to change our path if there is a better one. We do not have to (and should not) wait for the Sprint Review (product) and Retrospective (process) for this. Continuous improvement is about daily inspection and adaptation.
Here are some of the more effective questions that can be used in a Daily Standup meeting:
- How will we work together today to move toward our goal?
- What should we focus on today?
- What should we not do that we originally thought we would do?
- How will we remove this impediment right now?
- Given we are a little behind, how might we simplify this product increment?
It is about purposeful intent for the day. It is certainly not intended as a status meeting. If managers and others outside of the core team are not getting the information they require from conversations or the team wall then it will surely pay dividends to improve visibility and transparency in the way people interact while doing their work rather than have a daily status update meeting.
In fact, I would go as far as saying that the ritual of an unchanging Daily Standup meeting is usually a smell of poor collaboration in and between teams on the actual work to be done. Some companies mistake this meeting as a way of actually getting people to collaborate. It’s almost as if they think that the benefits of collaboration, as Agile promotes, can be gleaned simply by having this meeting.
Unfortunately it is not that simple. Standing (or sitting) people together does not make them collaborate.
Collaboration is an organic thing and only comes if the “way the work works” is designed to encourage it.
I sometimes see or hear the argument that, “because we’re Agile we should make the meeting fit with the way we currently work“, and that doing this will intrinsically make it more valuable. So, the argument continues, it’s OK if it becomes a status update meeting because that’s what the environment demands.
The issue with this approach is that the environment in which you currently operate is likely one of managers wanting status updates. One of traditional ways of doing things.
But in order to be effective with an Agile approach we have to do things differently. To think differently.
Agile does not mean “make compromises”. It is about mindful changes in the way we work to move toward improved effectiveness. If something feels a bit different and uncomfortable then it may well be a sign you are on the right track.
As coaches, we ought to let the team decide how they can get most value from a Daily Standup meeting. Then, rather than focusing all our attention on how to improve the meeting, we should instead be helping the managers create an environment in which actual collaboration (working together effectively toward common goals) is encouraged and starts to feel natural.
Where excellence, rather than dogma, can prevail.
P.S. Standing up is not mandatory! If the meeting is timeboxed to 15 minutes then it will be quick regardless of whether you’re sitting down, standing up or doing the cha-cha.
Next week I am speaking at a SIGiST (Specialist Group in Software Testing) event in Melbourne. Having to prepare my presentation has encouraged (OK, forced) me over the past couple of weeks to re-immerse myself in the world of quality, testing and BDD (Behaviour Driven Development).
We have learned the value of conversations when deciding what to build into our software. We have learned the value of automating as much of our testing as possible, in order to shorten the feedback loop between things breaking and us knowing about them breaking, and to instil confidence among the stakeholders and the team that we can rapidly add new features without breaking existing ones. We have learned the value of taking a test-driven approach to building our software, based on real user behaviour rather than code behaviour, to enforce good design practices and ensure the software does what it is supposed to do. Despite all this, I still constantly see and hear of teams struggling with their approach to quality.
Some are struggling to find time to improve due to a combination of legacy systems with brittle or no automated test coverage and looming deadlines for new products or features. Others are struggling to create a short enough feedback loop for testing software increments as they are built, so that problems can be addressed before code is deployed, or before developers have moved on to the next feature, or even the one after that.
There is no denying that it is crucial to get the technical practices right from the start. Enough has been written about this: BDD at all layers, continuous integration, and automated acceptance and regression tests.
However, when you find yourself in a situation where you are adopting a legacy system or process – i.e. you or your predecessors haven’t got your technical practices right from the start – then your only viable option will usually be to improve things gradually. Have developers learn how to write automated acceptance tests, and then implement them. Chuck out flaky record-and-playback UI tests and replace them with robust unit, integration and browser tests using best-of-breed tools. Embed testers in the development team. Gradually start to do all the things that ideally would have been done from the start.
It seems like a desperate situation, but all is not lost. Far from it. I feel that a common mistake teams and businesses make is to place too much focus too early on the necessary technical improvements.
In my experience, the most important thing to improve is the conversations between the business people, customers and the development team.
One effective technique for doing this is the Three Amigos approach, where the customer / Product Owner / BA has a chat with a developer and a tester from the team to agree on the acceptance criteria for a new feature or story before it is undertaken. From this conversation the team can decide exactly what tests are needed, and where they should be implemented, in order to prove that the completed functionality will do what it is supposed to do.
A mature Agile team would now write the necessary tests in their tool of choice (e.g. JBehave for Java), the developers would write just enough code for the tests to pass, then refactor. When all the acceptance tests pass, the story is considered “done” from a functional perspective.
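As a minimal sketch of what such tests can look like (in Python rather than JBehave, with a hypothetical shopping-cart story invented purely for illustration), the Given/When/Then structure of the agreed acceptance criteria maps directly onto plain test functions:

```python
# Hypothetical acceptance tests derived from a Three Amigos conversation.
# Story: "A user can add an item to their shopping cart."
# The Cart class is an illustrative stand-in for the real system under test.

class Cart:
    """Minimal in-memory cart, standing in for the real application."""
    def __init__(self):
        self.items = []

    def add(self, sku, qty=1):
        if qty < 1:
            raise ValueError("quantity must be at least 1")
        self.items.append((sku, qty))

    def count(self):
        return sum(qty for _, qty in self.items)


def test_adding_an_item_increases_the_cart_count():
    # Given an empty cart
    cart = Cart()
    # When the user adds one item
    cart.add("SKU-123")
    # Then the cart contains exactly that item
    assert cart.count() == 1


def test_quantity_must_be_positive():
    # Acceptance criterion agreed in the Three Amigos conversation:
    # adding zero or negative quantities is rejected.
    cart = Cart()
    try:
        cart.add("SKU-123", qty=0)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

The point is not the tool; it is that each test reads back as one of the acceptance criteria the three amigos agreed on, so "all tests pass" and "done" mean the same thing.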
But what if the tester and/or developers have little or no experience with an automated testing approach? I have worked with teams in this situation and it cannot be fixed right away (or even at all if there is no willingness from the business to invest in training and slack time to address the problem).
Let’s say the tester is traditional in his approach, and would typically create test cases which he will use to manually test the code when it comes to him from the developer. What tends to happen here is that the developer writes the code for the story, then hands it off to the tester, who then hands it back because the code doesn’t do what the tester expects it to do. This to-ing and fro-ing can happen once, twice, three times. It’s time-consuming and frustrating for everyone, and makes it very difficult to complete product increments in a timely fashion.
However, if the tester and the developer have a conversation before the developer starts coding (with the PO/BA in the Three Amigos meeting, or just-in-time in a story kick-off), the tester can take the developer through his test cases (derived from the acceptance criteria) so that the developer understands everything that the tester expects to work when he is handed the code.
Over time in these conversations the developer will start making suggestions, so the test cases become collaborative and thus more effective. He will also want to make sure the story does not bounce back from the tester once he has coded it, so he may do some more manual testing of the functionality, or even write some (more) unit tests, before handing the story over. His confidence in his code is likely to improve, and the bounce-backs become the exception rather than the rule.
The key to building in quality is first and foremost in the conversations, because they create improvements in the way we work together, whatever situation we are in technically. The good technical practices will emerge from the better conversations. Agile is largely about focusing on technical excellence but, as the first line of the Manifesto tells us, the interactions between the people doing the work matter even more. Continuous improvement allows us to start where we are and take one step at a time.
These up front and ongoing conversations, such as the Three Amigos, can have a massive impact on your effectiveness both individually and as a team, and on the quality and maintainability of your product, increasing your agility to adapt and innovate. Adding such conversations to your process is a great sign of continuous improvement and of embracing the first and most important line of the Agile Manifesto.
"Various projective practices upon trending have been used to forecast progress, like burndowns, burn-ups, or cumulative flows. These have proven useful. However, these do not replace the importance of empiricism. In complex environments, what will happen is unknown. Only what has happened may be used for forward-looking decision-making." -- Scrum Guide
Agile/Scrum teams are often asked to estimate how long a release might take. Or an entire project. Sometimes this is done under the guise of relative size estimates like T-shirt sizes – or, perhaps more commonly, story points – coupled with an estimated (or guessed) velocity. This is sometimes done even with new teams that have no velocity history.
Scrum, as defined in the Scrum Guide, places a large emphasis on the use of empiricism. Aside from the quote above, the following nuggets can also be found:
"Scrum is founded on empirical process control theory, or empiricism. Empiricism asserts that knowledge comes from experience and making decisions based on what is known. Scrum employs an iterative, incremental approach to optimize predictability and control risk."
"[The Scrum Master helps the Product Owner with] Understanding product planning in an empirical environment"
My interpretation of Scrum is that, while the Development Team are expected to estimate each PBI (Product Backlog Item), they are not asked nor expected to determine delivery dates, or how much work will be completed by a delivery date.
At Sprint Review:
"The Product Owner discusses the Product Backlog as it stands. He or she projects likely completion dates based on progress to date (if needed)"
So, the Product Owner uses the estimates on the PBIs combined with the empirical knowledge gained from what has actually been done to determine completion dates of a set of PBIs (e.g. a release). At no point does the Product Owner ask the team what will get done (beyond the current Sprint).
This use of empiricism is often neglected by Scrum teams. Teams are asked to project release dates, sometimes several months out, without any velocity history. This is not making projections based on what has actually happened. It is not empirical, and does not work in a complex, ever changing environment.
"A Product Backlog is never complete. The earliest development of it only lays out the initially known and best-understood requirements. The Product Backlog evolves as the product and the environment in which it will be used evolves. The Product Backlog is dynamic; it constantly changes to identify what the product needs to be appropriate, competitive, and useful. As long as a product exists, its Product Backlog also exists."
If you are using estimates, it is important that you use probabilistic estimates based on real, empirical data. Scrum suggests this. Practitioners suggest this also. Don’t ask the team to forecast any further out than the current Sprint. As the Product Owner, use real data to make forecasts and decisions. Asking the team to make longer term projections is not respecting the data showing what is actually getting done.
Monitor progress; don’t try to predict it.
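One way this kind of empirical, probabilistic forecasting can be sketched is a simple Monte Carlo simulation that resamples the team’s actual per-Sprint throughput. The history and item counts below are invented for illustration; the technique, not the numbers, is the point:

```python
import random

def forecast_sprints(throughput_history, items_remaining, trials=10_000, seed=42):
    """Estimate how many Sprints are needed to finish `items_remaining` PBIs
    by resampling actual per-Sprint throughput (empirical data only)."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        done, sprints = 0, 0
        while done < items_remaining:
            done += rng.choice(throughput_history)  # resample a real Sprint
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    # Report the 50th and 85th percentile outcomes as a probabilistic range.
    return outcomes[trials // 2], outcomes[int(trials * 0.85)]

# Invented history: PBIs completed in each of the last six Sprints.
history = [4, 6, 3, 5, 5, 2]
likely, conservative = forecast_sprints(history, items_remaining=30)
print(f"50% chance within {likely} Sprints, 85% within {conservative}")
```

The Product Owner can re-run this after every Sprint as new throughput data arrives, which keeps the projection grounded in what has actually happened rather than in up-front guesses.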
Envisage this situation. I go to bed and forget to let the dogs out. When I wake in the morning, and go into the living room, the dogs have crapped on the rug.
Who is to blame?
My first reaction is likely to be to blame the dogs. “Sammy! Jake! You dirty dogs!”
My wife will likely blame me (once she finds out I didn’t let Sammy and Jake out).
Well, I don’t want that happening again. How can I make sure I don’t forget to let the dogs out again? Another foul up (forgive the pun) will be difficult to take.
Perhaps I could put a sign up on the wall in the landing, on the way to my bedroom: “DON’T FORGET TO LET THE DOGS OUT!” Won’t be foolproof, but it might help. My wife might decide she can’t trust me to let the dogs out every evening, so she will start reminding me every night, or coming into the living room to check.
Of course she might forget to do this one night. If that happens to coincide with a night on which I also forget, the same outcome may occur.
Now who’s to blame?
This kind of scenario might sound oddly familiar if you work in an IT department or work for a software development company. An innocent mistake (like releasing an obscure but potentially damaging bug), leading to blame of the individual, leading to more control of releases (processes and procedures) and a “don’t fuck up” culture.
Of course we don’t want the dogs to crap on the rug. Blaming me for this incident, imposing more control (the sign on the wall) and reducing trust in me (my wife checking I’ve put the dogs out) *may* solve the problem. But in reality there is still a chance that it will happen again. People make mistakes. People repeat mistakes.
By employing a systems thinking approach to this scenario, we can look to *dissolve* the problem. That is, the problem of “the dogs might crap on the rug during the night” is actually removed rather than its probability reduced.
If I install a doggy door, the dogs can get in and out whenever they need to, so they will never be stuck inside when they need to crap. My wife will never have to worry about me messing up again, and blaming me for my stupidity. We won’t need signs up on the wall, serving as a constant reminder to myself and my family that I messed up.
Sometimes buggy software will be released, no matter how high the quality of our code or the stringency of our release procedures. Because people miss things. People make mistakes. People repeat mistakes.
If we make releasing really quick and easy, we can update our tests and release bug fixes before there is any time for blame and increased control to become necessary.
Do you look to merely solve problems in your organisation, or to dissolve them?
This is the first in a series of small posts aimed at new Scrum teams, organisations newly adopting Scrum and people who have been doing Scrum for a while but are struggling to get the results they crave.
This post is based on a response I gave to a question in a LinkedIn forum:
“The BA role is an integral and implicit part of Product Owner Role in Scrum. What is your take on this?”
This is a very common question among those new to Scrum and Agile. It’s an interesting one and a classic example of why, in my opinion, companies the world over are failing to do well with Scrum.
To begin to answer it, I will let the Scrum Guide do the talking:
- The Scrum Team consists of a Product Owner, the Development Team, and a Scrum Master.
- Scrum Teams are self-organizing and cross-functional.
- The Product Owner is the sole person responsible for managing the Product Backlog.
- The Product Owner is one person, not a committee.
- Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
- Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
- Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
- Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
Departmental silos are entrenched in the way companies typically do things. They are part of the system. The culture. As a result, the urge to maintain departmental silos is strong.
I would suggest this is a key reason why Scrum implementations might (and do) fail.
Straight off the bat, certain elements of the Scrum Guide are typically ignored or deliberately rejected. These elements may or may not turn out to be key in your organisation, but the fact is they are in there for very good reason. It is a mistake to assume from the outset that your context requires removal of these elements.
Scrum is not asking companies to remove departmental silos, but it is asking that these silos are ignored such that they do not exist within the Scrum team. In the Scrum team, everyone building the product increment is part of the Development team. There are only 2 other people in the team – the Product Owner and the Scrum Master. That’s it. That’s the Scrum team model. Period.
There is absolutely no prescription as to who should be in the Development Team, only that the team has all of the skills and capabilities required within it to build a product increment, and that the team jointly owns all of the work, activities and decisions. In order for effective teamwork to flourish, Scrum says that roles should be left at the door.
That does not mean that our individual expertise and experience is left at the door along with our job titles. On the contrary, the best self-organising teams decide how best to leverage the expertise within the team.
If the question asked in the LinkedIn discussion was actually:
“Are the typical activities undertaken as a BA part of the Product Owner’s responsibilities in Scrum?”
then my answer would be that these, and any other activities involved in building and managing a product’s development lifecycle end-to-end, are shared between the Scrum Master, Product Owner and Development Team. This is made very clear in the Scrum Guide.
To that end, there is no “BA role” in Scrum, much like there is no “tester”, “QA” or “UX designer” role. Roles are part of traditional siloed thinking. Scrum (and Agile) focus (deliberately and alternatively) on cross-functional teams. Roles are a function of the particular company, not the activities that need to be done as part of product development.
To get the best results from Scrum it is a good idea to stop thinking about what roles you need in the team, and instead think about what activities are required to build your product. A good self-organising Scrum team will share these activities regardless of whether they have a specialist, designated BA or not.
Personally I like to encourage “collaborative analysis”, where all of the “what” and “why” for every decision, every story, is talked about by the whole Scrum team. Then the “how” is handled by the Development Team.
The popular model of having BAs “writing stories” and handing them off to the developers in the team is highly ineffective, bears none of the hallmarks of a collaborative, self-organising team, and is about as far from both Scrum and Agile as you can get.
To build products effectively with Scrum, it’s a good idea to map out all of the activities that are required to build the product. Forget current roles and responsibilities for now. Once you’ve listed the activities, gather a team that can execute those activities in their entirety. If your company has BAs and you need one of them for your Scrum team then by all means have them in the team.
But please remember to ask yourself this key question:
“Is the BA part of the Development Team or are they the Product Owner?”
In Scrum, they can’t be both. And they can’t be neither.
The SAFe approach to normalised story points makes a classic mistake that everyone seems to make with story points. It is not “relative sizing” to compare stories to a reference story that has been estimated in time (in this case “about a day”).
As soon as you introduce time as a basis for your reference story, and use, say, story points on a Fibonacci scale, all of the comparisons you make are based on time: a 2-point story pertains to 2 days, 5 points to 5 days, and so on.
Even if you are not doing this consciously you will do it unconsciously. So all you have done is estimated “how long” the stories will take to deliver. This is not relative sizing!
The whole point of using relative sizing instead of time-based estimation is that humans are better at comparing the sizes of things than we are at making absolute judgements of size. For example, we’re good at being right that building A is bigger than building B, but we’re not so good at being right that building A is about 200 metres high and building B is 150 metres.
Unfortunately when it comes to tasks that we perform, our natural tendency is to use absolute terms because the “size” of a task essentially equates in our brains to “how long”. The fact that story points are numbers doesn’t help with this. Where story points completely lose their value is when we start deliberately equating a point value with a length of time.
True relative sizing of a backlog means picking a low-value story (one that you are unlikely to implement for some time) and not estimating it at all. You then compare other stories to that reference story: I think story C will take longer than story B, story D will take longer than story C, story E is about the same size as story C, and so on. At no point do we actually predict how long something will take. We are simply saying which stories will take longer than others, by our estimation.
When a new story emerges you then do the same thing: decide whether it will take longer than the reference story, less time, or about the same. Because you have not yet implemented the reference story, you cannot be influenced by the actual time it took.
You can now measure progress against the total backlog as you deliver the stories.
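As a rough sketch of what measuring progress against a relatively sized backlog can look like, the stories and size buckets below are invented for illustration; the sizes only order stories against each other and imply no durations:

```python
# Invented backlog: each story carries only a relative size bucket.
# A bigger number means "we think this will take longer", nothing more;
# no bucket maps to a number of days.
backlog = {
    "story A": 1,  # the unestimated reference story sits in the smallest bucket
    "story B": 2,
    "story C": 3,
    "story D": 3,
    "story E": 5,
}
done = {"story A", "story C"}  # delivered so far

def progress(backlog, done):
    """Share of the total relative size delivered so far."""
    total = sum(backlog.values())
    delivered = sum(size for story, size in backlog.items() if story in done)
    return delivered / total

print(f"{progress(backlog, done):.0%} of the backlog, by relative size, delivered")
```

Because no bucket is ever tied to a duration, this number tracks how much of the known work is done without anyone having predicted how long anything would take.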
One thing I do agree with in the SAFe approach is that you should not do any re-calibration or re-estimation. As soon as you start re-estimating stories based on how long things are actually taking, you are being influenced by time. This can not only throw off the relative calibration of the backlog but also ignore the inherent variability of software increments; there will be outliers within size groups that take significantly longer (or shorter) than the modal average.
P.S. If you’ve read my other #NoEstimates stuff on this blog you will know I do not advocate the use of story point estimation at all, especially given the way it is typically misused and abused. However, there may be some value in relative size estimates (e.g. T-shirt sizes), if done right, for one or more teams working from the same initial product backlog. They can give some indication of the overall viability of the initiative and provoke discussion within the team(s) about the value of, and possible approaches to, individual pieces of work, aka “what shall we do next”.
A continuing theme of counter-arguments posed at the #NoEstimates ideas is that development cost estimates are required in order both to manage risk and to derive value.
This blog post intends to give further insights into how risk can be effectively managed, and how we might determine the value of our initiatives, without the need for making up front and deterministic development cost estimates.
“Risk is the probability of an unfavorable impact to the project” – Glen Alleman (@galleman).
From the risk angle, the argument goes along the lines that the built-in “risk management” in Agile approaches is not aligned with conventional definitions of risk management in software development.
I’ll go along with this. Agile (and #NoEstimates) does not take the conventional approach to software risk management, which sees project success as “on time, on budget” and thus requires an up front estimate of total scope, cost and duration.
Agile/#NoEstimates offers an alternative way to manage risk on projects (and, no, I’m not talking about Agile Estimation, the spin-off brand of traditional estimation promoted by Mike Cohn). I’ll explain more about this later.
The argument regarding value is that estimated cost is required to determine value, given that value is related both to the timing of when things are released and to how much it costs to develop the things that will (potentially) generate value; that the worth of something to someone can only be evaluated if we know how much that thing costs.
Again I agree to an extent, but there are two key sticking points for me here. One is that we only know how much software development costs after the fact. People say “we need to estimate because we need to know the cost”. Estimating, however accurately we think it is being done, does not allow us to know the cost.
Before the event we can only estimate what will be done and how much it will cost. In addition, the further out we are estimating cost and value, the riskier (and potentially costlier) our estimates become.
By estimating, rather than fixing, cost we have no greater insight into the value, which is also estimated. Essentially we are increasing our risk by estimating both cost and value rather than just value, which is what #NoEstimates promotes. More on this later.
The other sticking point is that value is often highly subjective and personal. I know how valuable a particular brand new Ferrari is, partly because I know how much it costs. That said, if you gave me two different Ferraris to test drive and didn’t tell me how much they cost, I would tell you which one I prefer. Which one was more valuable to me. This has nothing to do with the cost. The one I prefer might be significantly cheaper, but its value to me is higher because it’s more fun to drive and I prefer the look of it.
The same applies with software. There is so much to consider when we try to measure value. Aside from the empirical measure of monetary returns, we have to consider the needs of the customers, the stakeholders and our corporate strategy (to name but a few), not to mention the fact that all of these things change over time.
Agile is about delivering value early, not trying to predict how to maximise value over a given timeframe or a product’s lifecycle. It is the early delivery of value that allows us to tune and adjust our course for maximum longer term benefit.
This is why it is an alternative, and completely viable, approach and should be considered as such.
Agile Risk Management
The key aspects of Agile that help us manage risk effectively are:
- Continuous selection of highest value work (i.e. making decisions)
- Fixed, cross-functional teams with 100% focus on current project
- Early and frequent delivery of end-to-end working software increments and
- Empirical measures of progress toward goals
With Waterfall projects, the need for conventional risk management is clear. We have no way of measuring progress from day one in terms of working software because we are carrying out requirements analysis, specification and design phases before we write a line of code. People are often working on multiple projects and so we must allocate a percentage of their time to the project at hand.
The only way to measure percentage progress toward project completion is to have a breakdown of the SDLC phases and tasks within each, estimated in days/weeks, and tick them off as we go along. If we don’t complete all the necessary tasks for a given phase in the estimated timeframes, we are “off track” and we need to take corrective action.
With a phased delivery approach, the only way to manage risk is to have an estimate of the total scope, cost and duration of the project.
But if we are working in an Agile way, we are not taking a phased approach to project delivery. We are delivering full end-to-end working solutions in an iterative manner, early and frequently. We are working in fixed, cross-functional teams, so team costs are known and consistent.
This approach allows us to manage risk and measure progress toward project completion (meeting of stakeholder goals within a given budget) from the get-go.
If we are truly iterating by delivering vertical slices through the system, after our first iteration we will be able to measure progress toward the project goals. We will have delivered a working, albeit perhaps low quality, solution to the problem. We may even have actually met the project goals.
Either way, we can inspect what we have done and decide if we are on the right track. If we are, we can iterate over our solution, improving quality in the desired areas and incrementing new features. If we are not, or we see a better way of solving the problem, we can throw away what we’ve done and start again. We may even decide to scale up our efforts and add more teams, if there is emergent estimated value in doing so.
Given in Agile we are delivering end-to-end working software from the get-go, we are not burdened with the problems we faced in our Waterfall projects for measuring progress. We have the ability to empirically measure progress because we are delivering “done” functionality, as opposed to hitting pre-determined “milestones” which are not based on what we have actually delivered in terms of a working product.
In Waterfall, so long as we are hitting our milestones then the project status is “green”. For software product development projects, this means that we are deferring our risk management until we actually start writing code. We don’t know that the scope of what we want to build is achievable, and we can’t reduce scope until we actually realise it’s too much (well into the development phase, deep into the project).
In Agile we can manage scope right from the beginning, because we are continually focusing on building the most valuable thin, vertical slices which represent iterations over an end-to-end solution to the problem. We can empirically measure how much we got done and how much is left to do. We can regularly take proactive decisions to cut scope or switch to an alternative approach to improve our chances of delivering a successful outcome. What should we do next for maximum value and maximum impact in meeting our goals? What should we not do? What is the simplest approach for our next iteration?
This is risk management.
These kinds of conversations enable us to focus on doing the simplest thing, for maximum impact, given the budget that we have available. To not wait 9 months to deliver a solution but to deliver a solution in 1 month, then make it better.
Most “Agile” projects are not managing risk
If we decide up front in a project inception on the requirements (product backlog) and solution we will be sticking to, and estimate it will take, say, 9 months, all we will do is incrementally build the solution, usually in horizontal slices, components or modules.
After each “iteration” we will not have a holistic view of what we’re building.
This is a very common approach by “Agile” teams. In this situation we are deferring the management of risk until we actually have a system that can meet (some of) the needs of the project stakeholders, usually late in the game when the deadline is getting close.
This is not risk management. If we work in this way we cannot work with #NoEstimates.
How do we estimate value without estimating development cost?
OK, so assuming we have the capability and will to deliver vertical slices through a solution early and rapidly, and we have a fixed cross-functional team, 100% committed to the project at hand, we can focus on the potential value of the ideas we want to build while controlling cost using small “drips”.
When we use ROI to decide whether a project is worth pursuing, or which of 2 or more potentially valuable projects we should choose given limited people and resources, we base the “investment” measure on the estimated cost, of which the development costs are part, and the “return” is the value we expect to generate, measured on the same scale as the investment (usually money).
There is a flaw with this approach.
6 months, 2 years, it’s all the same!
Let’s say we estimate a project will take 6 months of development time, costing $500k. We expect that when the product is complete it will generate $2m in revenue. The timing of when that revenue gets generated is key. Will we get anything at all before the product is built in its entirety? Will there be a few months of marketing required after all the features are done before we will start seeing the cash rolling in?
The implication of the timing of value generation is that the actual ROI of what we’re building in a 6-month project might still be negative after 6 months of development time, even if we get everything done that we originally wanted done (and estimated).
Now compare that to, say, a project with an estimated duration of 2 years. After 6 months, the ROI of the two projects will be identical. Our net loss in both cases is $500k, so our ROI is -100%; we have spent half a million bucks with nothing (yet) to show for it.
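The arithmetic behind this can be sketched in a few lines (figures taken from the example above; the $50k early-revenue scenario at the end is an illustrative assumption):

```python
def roi(cumulative_return, cumulative_cost):
    """Simple ROI: net gain as a fraction of what was invested."""
    return (cumulative_return - cumulative_cost) / cumulative_cost

# Both projects from the example after 6 months: $500k spent, nothing released.
six_month_project = roi(cumulative_return=0, cumulative_cost=500_000)
two_year_project = roi(cumulative_return=0, cumulative_cost=500_000)
assert six_month_project == two_year_project == -1.0  # -100% for both

# Releasing a thin slice early changes the picture: even a modest,
# illustrative $50k of early revenue lifts ROI from -100% to -90%.
early_slice = roi(cumulative_return=50_000, cumulative_cost=500_000)
print(f"{early_slice:.0%}")
```

Until something is released, every project looks the same through the ROI lens: pure loss. Only the timing of value generation distinguishes them.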
So, given the erratic, inconsistent and numerous ways we can measure value in software, is the traditional ROI approach an ideal decision making model in this domain?
Agile is about early delivery of value, not trying to predict maximum value
The upshot of this is that the less risky approach to generating a positive “ROI” is to work on options that will potentially generate value early, i.e. with relatively small and simple effort. Put simply, if we prioritise initiatives by virtue of which ones we expect to generate value early rather than how much value they will generate over the product’s lifecycle then we do not need to batch these initiatives up into “projects” and estimate how long the project will take.
This can easily be reverse engineered. If our starting point is a “project”, with a list of requirements, the best thing we can do to manage risk (keep our decisions within the bounds of the near, more certain, future) and ensure we deliver value early is to pick the most valuable requirement/problem to solve and come up with a simple, creative approach to fulfilling that requirement in a very short timeframe.
What’s next? One at a time…
The team can go away for, say, 1 month, after which time we holistically assess where we’re at in terms of fulfilling that requirement. What have we learned? Is this requirement still the most valuable one to work on (ignoring sunk costs)? Are we better off ditching what we’ve done and investing in attacking another requirement?
Our measure of what is valuable must reset after each iteration. It’s irrelevant how much we’ve already spent (sunk cost fallacy).
We need to constantly concern ourselves with what is the most valuable thing to do next. This is Agile. This is #NoEstimates.
And this is risk management. Yes, it’s an approach that requires a different way of thinking about how we choose what work to invest in, how much to invest and the decisions we make along the way. But it is risk management nonetheless.
But we can’t do this when $200m is at stake!
The #NoEstimates debate has hit a point where the main remaining arguments are around its application in big money projects. Most of the original dissenters – who have now spent time reading more about the ideas put forward by myself and the other #NoEstimates crew – are now in agreement with us that, at least for small scale projects, we can get away with not doing “micro-estimates”, and indeed it may be preferable to work this way.
But when it comes to “macro-estimates” – i.e. how much of the customer’s money are we going to spend – it is argued that a #NoEstimates approach is not viable. That when “you are spending someone else’s money” you need a plan (estimated schedule) to ensure you deliver what is required for the money, with some deterministic level of confidence.
The irony of this argument is that when the big number guys come out swinging with their big numbers, these numbers are estimates! When we call a project that we haven’t yet completed, or even started, a “$200m project”, what we are actually saying is “our customer has a $200m budget and we have to deliver what they want for their money”. In other words, the decision has been made to go ahead, and the budget is $200m. There is no go/no-go decision to be made – it’s already been decided that the project is going ahead, and they want a result for $200m.
For me, with such large sums and timeframes at play, there is all the more reason to manage risk by drip funding small amounts and iterating over a solution in the way I’ve described. Scaling up where required. Tuning and adjusting.
The alternative is to manage risk by using probabilistic estimation techniques based on past projects such as Monte Carlo simulations to derive a total estimated cost with a confidence interval, and then constantly adjust these calculations as the project progresses. But I maintain that the Agile way, where we start from a budget or fixed deadline and then actively build and manage scope along the way, is preferable because it harnesses the creativity of designing and building great software and allows us to welcome and embrace change every step of the way.
Create the future rather than predict it
Instead of trying to nail down a plan and predict outcomes, we are forging our own future based on current market conditions at any given time, and the way we feel about what we’ve built so far. We are controlling our costs by working with fixed teams in short timeboxes, and we are constantly assessing the value of what we’re building.
If we work this way we do not need to estimate things up front. Empirical data is being generated as we go along, and we can look at the market with fresh eyes after each iteration. We can see what we’re getting done and what we’re not. We can change our mind on whether we care that we didn’t get the things done that we wanted to get done. We can see which of our assumptions were true and which were false. We can steer our ship in whichever direction we need to avoid the iceberg ahead, while remaining focused on the destination.
This is at the heart of #NoEstimates from my point of view. It is possible to work this way. It is not easy to get to a position where you are able to, but if you can get to that place it is, as Ron Jeffries describes it, “the best known way to work”.
Systems Thinking tells us that we are products of the system in which we operate. That we will perform based upon the ways we are being measured.
Personally, I am acutely aware that if the way I am being measured is also a target, the measure is not an effective way of helping me contribute to reaching the organisation’s goals.
But the thing I struggle to understand is that if we are gaming the system, and know we are doing so, at what point do our ethics kick in? What is our tipping point?
I once worked with a team that was battling against technical debt. Regression bugs were appearing with increasing frequency due to a lack of automated integration test coverage with legacy systems. My team wanted to do the right thing and fix the bugs that they found, despite the fact that it was not them who created the bugs, but were concerned that they were falling behind with their own work.
They assigned no blame to the unfortunate soul who checked in the code that caused the regression. In fact, they didn’t even find out who the culprit was until after time had already been spent determining the cause of the bug. There was much complexity in the interactions between components and a gaping lack of integration tests across them. The team just wanted to fix the problem, add some appropriate tests to prevent the problem from happening again, and move on.
The problem for me was that this was impacting on our project schedule. The team were supposed to be working on stories for my project but instead were taking time working on bugs created by other teams. I was being measured on the delivery of the agreed scope in the agreed timeframe, not on our software delivery effectiveness across the portfolio. Surely it was in my best interest to ask the team not to work on other people’s bugs? My delivery schedule was being jeopardised. I would be held accountable for this. I would be asked tough questions. Why didn’t I deliver everything I said I would?
But here’s the thing. Despite how I am measured, I am passionate about creating good outcomes for the stakeholders, the customer and the company, not my specific project. I do not see the work to be done as a set of easily definable story cards. In this and other similar situations I wanted my team, and other teams, to spend time reducing technical debt across the board, improving code quality, collaborating with each other to find ways of making everyone’s lives easier, etc.
I can choose to let the system define me. To be a product of the system. Or I can choose to question things. To think holistically about how we can improve.
The system will reject this. But at least I can go to sleep at night knowing that I am doing what I believe is right.
How much do your ethics influence the decisions you make or don’t make in the workplace?
1. You’ve mentioned on Twitter that in your opinion, #NoEstimates = Agile + Real Options. For the curious newbie, what does this mean?
The approach I talk about is very much underpinned by Agile principles. In fact it’s what I believe Agile was intended to be at its core (although I’ve had some disagreement from the likes of Ron Jeffries and Alistair Cockburn on this point).
To summarise #NoEstimates from my point of view:
Constraints breed creativity
- Use real constraints to drive decisions, e.g. “this is how much we want to spend” or “we need something by June in time for Wimbledon”
- Arbitrary constraints (such as deadlines based on zero to low knowledge “estimates”) cause dysfunctional and ineffective behaviour
- Create mini-constraints (i.e. drip funded iterations) to promote a creative approach to what we are going to build to address the problem at hand
Build awesome teams
- Create fixed, capable teams so we know how much our time costs
- Scale up team capacity if enough positive value has emerged (by adding teams, not people to teams)
- Empower our teams to be bold and free in making solution choices, with focus on “building the right thing” and “delighting customers and stakeholders”
Keep our options open
- Cover multiple, potentially valuable options with small experiments rather than committing to one option per team for long periods
- Reassess options frequently to ensure the initiative is still valuable (ignoring sunk costs) and is more valuable than other options to which we could divert our team capacity
- Anything we haven’t yet built (e.g. our product backlog) is only an option – we shouldn’t assume we’ll build it and shouldn’t worry how “big” it is unless we actually want to do it now, or very soon
Put the “iterate” back into “iterations”!
- Truly iterate over the solution (holistic determination of where to take the product next) rather than just incrementing pre-determined backlog items
- Deliver early and frequently, with very small (even daily) feedback loops – this makes us predictable
Create collaborative working agreements
- Create flexible, collaborative working agreements with our customers which allow us to truly embrace change and deliver to customers’ present needs rather than their needs when we started
- Allow customer to cut the cord early if they are happy with what they have (or not happy with progress)
- Start from a position of trust rather than paranoia (which traditional contracts are based on)
Favour empiricism over guesswork
- Keep work items small and simple, and limit WIP to create a predictable system
- Slice features into simple, unambiguous stories using a heuristic rather than estimation rituals
- Price work per feature if appropriate, using empirical average cost of features to guide price rather than a deterministic estimate of individual features
- Use cycle time and throughput to make near-term prioritisation calls, not to determine release dates (there are no big releases in this approach anyway)
Shift focus away from estimation
- Create a culture of honesty by removing negative estimation culture (i.e. get rid of story points and the notion of estimates as promises or deadlines)
- Make work and project success about creative delivery of value (i.e. “what shall we do next?”) rather than “on time, on budget”, schedules, deadlines, etc.
2. Describe what you mean by a “slicing heuristic”
Essentially it’s a policy for how we break up our work. For example, “A user story must have only one acceptance test”. Rather than breaking features into stories and then estimating the stories, we can use the heuristic, measure our cycle times and then inspect and adapt the heuristic if required.
I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less.
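As an illustration of how the heuristic can be checked against reality, here is a minimal sketch (with hypothetical story dates, not real project data) that computes the average cycle time of sliced stories so the team can inspect and adapt the heuristic:

```python
# Minimal sketch, with hypothetical dates: measure the average cycle time of
# stories sliced using the heuristic, and adapt the heuristic if the average
# creeps above the target (e.g. 3 days).
from datetime import date

# (started, finished) pairs for stories sliced with the "1 acceptance test" rule
stories = [
    (date(2014, 6, 2), date(2014, 6, 4)),
    (date(2014, 6, 3), date(2014, 6, 6)),
    (date(2014, 6, 5), date(2014, 6, 9)),
]

cycle_times = [(finished - started).days for started, finished in stories]
average_cycle_time = sum(cycle_times) / len(cycle_times)
print(average_cycle_time)  # 3.0 — at the target; watch the trend over time
```

The point is not the arithmetic but the feedback loop: we measure what the heuristic actually produces rather than estimating each card up front.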
3. How does your approach differ from that of Woody Zuill? Or, are there more similarities than differences?
I can’t speak for Woody but I feel that Woody’s approach is simpler than mine. He believes that if you follow the Agile Manifesto properly then the need for estimates dissipates.
I agree with him in principle but see systemic issues, particularly in analytic/mechanistic organisations, that I feel need to be addressed in order for #NoEstimates to strike a chord with more traditional managers and executives. At its core though, #NoEstimates is about exploring various approaches to delivering software without the use of estimates, and the commonality between our approaches seems to be the continuous delivery of small increments of high quality, valuable software.
4. Do you think any team can work without estimates? What’s the minimum “barrier to entry” ?
Any team (with the right coaching and knowledge) can embrace the slicing of work, limiting of WIP and measurement of throughput/cycle times, even if they are being asked to estimate with story points or time. #NoEstimates is not about refusing to estimate.
If you’re talking more about the overall approach from the portfolio level down, I’d say there is a minimum barrier to entry:
- Fixed team (cost)
- Allowance of variable (emerging) requirements/scope
- Small batches of stories/features
- Slicing heuristic to create roughly consistent actual/measured work unit size (“a few days”)
- Incremental & iterative delivery capability of shippable code
- Mini constraints such as weekly demo/review with customer (small, early and frequent releases)
This looks very much like any typical “Agile” team to me.
5. What advantages does working without estimates provide your team over, say, a team that is using longer cadences, e.g. Scrum?
My approach is entirely compatible with Scrum. In some ways I think that it’s what Scrum was intended to be (or at least, in my opinion, should be).
If a Scrum team is working in 2-week Sprints, truly iterating, delivering working software every Sprint, inspecting and adapting the product etc. then this looks very much like the approach I am advocating.
6. A common criticism of #NoEstimates is that when you slice off functionality to deliver (the “heuristic” approach) you are, in effect, estimating. Is this a correct interpretation? Why/why not?
Well arguably if you create a heuristic for creating “small” work then I can understand why it is interpreted that way. However, I don’t believe it is estimating. The point is to create simple and unambiguous story cards. The “smallness” is a by-product of doing this.
If we don’t get the smallness we’re looking for (after measuring the result) then we inspect and adapt the heuristic. At no point do we actually look at a card and say “I estimate that this is small”. We trust in the heuristic.
7. You’ve been a really vocal advocate for working without estimates, standing up to some tough questions from established agile practitioners. Why do you think this topic has so many people so roused?
Because the way software projects are typically governed is largely driven by estimates, so it touches almost everyone in the industry. Estimation is an established way of doing things, so challenging it is deemed controversial.
8. What would your advice be to a team considering working without estimates? What should their first steps be?
Don’t simply stop estimating. Try and get better at creating simple, unambiguous slices of functionality. Measure your throughput. Compare story count data with your story point data. Discover for yourselves if a #NoEstimates approach is right for you and a good fit for your organisational culture.
People need estimates. So they can predict how much software will cost and how long it will take.
People need umbrellas. So they don’t get wet when it rains.
Although, some people don’t need umbrellas. They have awesome waterproof jackets with hoods. They have solved the problem of “how do I stop getting wet?” with a different solution to the humble umbrella.
People need to know what time the trains are running so they can plan their trip to work. Some people do not need to know this because they take the London Underground, where trains typically arrive every 2 or 3 minutes.
What’s your point, Neil, you might be asking? My point is that when people are debating against the #NoEstimates movement, they always seem to gravitate toward the same two arguments:
- People need estimates, so we should provide them
- We cannot simply start building software without having an idea how long it will take or how much it will cost
To the first point, people only need estimates if we determine that the only solution to the problem of wanting to know “how long and how much” is to make a guess. People who have found other solutions to that problem do not need estimates.
I now wonder: just because the people who still need estimates have not discovered any alternative solutions, does that mean they need estimates, or that they think they need them? Or simply prefer to use them over other solutions?
People do not need umbrellas. They need a way to stay dry on a rainy day.
To the second point, I categorically want to put an end to the myth that #NoEstimates equates to #NoPrice or #NoDate. If you read my previous blog posts on the subject or read my tweets you will hopefully understand that my point is the absolute opposite. We DO need a price and/or a date. The only difference is how we arrive at those things.
With estimation, you guess one or both of them (and, in doing so, have a stab at scope too – otherwise what are you estimating?)
With #NoEstimates you set the price and/or date, either through experience and choice (e.g. setting the price/date for the kind of work you do regularly, with a fixed team and cost) or through a real budgetary or time constraint (e.g. “I’ve only got $100k, what can we build for that?” or “The Australian Open starts in 3 months so the Aus Open app needs to be ready to go live the day before”).
You then incrementally and iteratively deliver, setting mini-constraints within the wider constraint that breed creativity, innovation and predictability of delivery, and have a flexible working and payment arrangement with the customer.
People need certainty about what they will get and how much they have to spend. Unfortunately there is no certainty in software design and development. However, I would argue that #NoEstimates gives greater certainty than estimating does.
When estimating a date or cost you are creating uncertainty around those things, because you are guessing. You are saying “we’ll deliver somewhere between here and here”. However, if your delivery date and/or cost is set by a real constraint, as advocated by the #NoEstimates approach, you have created certainty around those things.
Yes, you may decide to shift the date/cost as you get closer to the initial figures, or once the customer decides they are happy with what they have. You have been delivering frequently and learning about what you are building. You have been creating data, such as throughput and cycle times, and using heuristics and slicing to reduce work increment size, so informed decisions can be made along the way. But you will only go beyond those initial figures if the emergent value of what has been built, and other data you have gathered, suggests that you should. Scope remains uncertain whether you estimate or not.
People still need 500-page business requirement documents. People still need separate test teams and development teams. But there are alternative solutions which may render these needs unnecessary. The alternatives to estimation are real, both at the project and the portfolio level, and are being used by many people across the globe in varying sized businesses.
All I ask is that we consider those alternatives and do not stop searching due to need.
It is no secret to my Twitter followers, and perhaps beyond the Twitter-sphere, that I am on a crusade of sorts to get people considering other ways besides estimating when it comes to costing software development projects and tasks. Such a view remains controversial, even among Agile practitioners. People argue that there is no alternative; customers want estimates, so we must provide. Stakeholders need to know when things will get done. Estimation is seemingly one of the few remaining immutable practices hanging over from the Waterfall era.
One of the common criticisms of my view is that it is unduly dismissive. When asked by our boss or a customer for an estimate, we can’t simply palm them off and say “I don’t estimate! Talk to the hand, sir!”
Of course this is true. But I should point out that I actually see nothing wrong with being asked for an estimate of how long something will take. What I object to is being asked to carry out (or ask my team to carry out) estimation rituals whose results will then be used for making important business decisions.
We cannot palm people off, but what we can do is offer alternative, empirical approaches to traditional and “Agile” forms of estimating, explain exactly how we will provide the required information and why such approaches offer advantages over guessing “how long” or “how big”.
First off, I would suggest that there are many problems with the “how long/big” approach, the biggest of which is that such an estimate does not take into account the:
- Inherent unpredictability of building software
- Current work in progress (i.e. the team/dev may not be able to start the work “now”, or even for a few days, weeks or longer)
- Capacity to do the work (i.e. the team/dev may make the estimate based on assumptions about team size that turn out to be false, or about the availability of a colleague who ends up not being there), nor
- Any upcoming changes in priorities (i.e. something may jump above the piece of work in priority).
From a task point of view, what is estimated as a “10 minute job” may end up taking a day or longer due to one or more of the above. I’m sure you have seen this situation many times over. From a project point of view, this situation is magnified and can be hugely costly, even catastrophically so. 3 month projects become 6 months. 1 year projects become 3 years.
In a situation where there are small tasks flowing through from the customer to the development team that are unpredictable in their timing (e.g. BAU work queues, feature development, etc.), a far better, probabilistic approach to get some semblance of predictability is to do the following:
- Measure actual lead times of every piece of work and plot them in a Lead Time Distribution graph
- Measure throughput (you can start by simply counting the number of cards in the “done” column at the end of every week)
- Use a fixed WIP limit on cards in progress (start, if you like, with the natural limit of team size)
- You can now use Little’s Law to calculate average lead time for a card at position n in the queue, i.e. (WIP + n) / throughput:
- e.g. Number of cards done in 1 week = 20, therefore throughput = 4 cards/day
- Team size = 2, therefore WIP = 2
- Lead time = (2+1)/4 = 0.75 days (i.e. on average it will take three quarters of a day for a card at the top of the queue to be delivered)
With the same formula you can predict where a card 2nd, 3rd or xth in the queue will get done, which is very helpful for guiding your prioritisation:
e.g. Using the same example above, a card 2nd in the queue will likely be done in 4/4 = 1 day, while a card 6th in the queue will likely be done in 8/4 = 2 days
Bear in mind the only way this formula can provide useful numbers is by having a WIP limit that is fixed (as far as possible). There will of course be variability in how long each card takes, but the law of large numbers will even this out to an acceptable average and it’s certainly far more scientific than asking people to estimate each card.
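The worked example above can be sketched in a few lines of code (the numbers are taken from the example; the function name is mine, not an established API):

```python
# Little's Law sketch: average lead time for the card at queue position n,
# given a fixed WIP limit and a measured throughput (numbers from the example above).

def lead_time_for_position(wip, throughput_per_day, n):
    """Average days until the card at queue position n is delivered: (WIP + n) / throughput."""
    return (wip + n) / throughput_per_day

throughput = 20 / 5  # 20 cards done in a 5-day week = 4 cards/day
wip = 2              # team of 2, so a natural WIP limit of 2

print(lead_time_for_position(wip, throughput, 1))  # 0.75 days (top of the queue)
print(lead_time_for_position(wip, throughput, 2))  # 1.0 day
print(lead_time_for_position(wip, throughput, 6))  # 2.0 days
```

Note the formula only holds on average, and only while the WIP limit stays fixed and throughput is measured over a reasonably long window.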
Note that if you use Scrum, and thus the team breaks down features into small tasks just-in-time at the beginning of every Sprint, you can use the same principles as above to determine when a new feature might be delivered (Scrum has a WIP limit over the Sprint length of the number of tasks in the Sprint Backlog, throughput is the number of “done” stories/tasks divided by the Sprint length, etc.).
Over time you can achieve a higher level of confidence with the predictions as you start to identify and split out different work types, determine probability of delivery times using your Lead Time Distribution graph, etc.
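One way to turn a Lead Time Distribution into a probabilistic answer is to quote a percentile rather than the average. A minimal sketch, with made-up lead times (the nearest-rank method shown here is just one of several ways to compute a percentile):

```python
# Sketch with hypothetical data: instead of quoting an average, quote a
# percentile from the observed Lead Time Distribution, e.g.
# "85% of cards like this are done within N days".
import math

def percentile(samples, pct):
    """Nearest-rank percentile of observed lead times (in days)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical observed lead times for 10 recent cards, in days
lead_times = [1, 1, 2, 2, 2, 3, 3, 4, 5, 9]

print(percentile(lead_times, 50))  # 2 — a "typical" card
print(percentile(lead_times, 85))  # 5 — "85% of cards are done within 5 days"
```

Quoting a high percentile communicates both a forecast and a level of confidence, without anyone having to estimate an individual card.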
What about “how long will this project take?” !! Warning !! You can scale this approach up to the portfolio level. But… do bear in mind that building an entire software product rarely has a finite end point or a repeatable result because it is not possible (nor desirable) to define all of the scope required to deliver a delightful, valuable outcome. Use such predictions with extreme caution. There is no substitute in software product development for creating certainty around costs and delivery times via fixed agile teams delivering working software early and often, short feedback loops with the customer, etc.
So, next time you’re asked “how long” or “how big” about a software project or task, don’t palm off your boss or your customer with simply “I don’t estimate!”. Perhaps you might consider answering: “I don’t estimate! But… here is how we can save ourselves the cost of estimation meetings and make empirical predictions going forward to answer these questions with more confidence.”
This is the second in a series of blogs about why I believe we should not be estimating software projects. The first post talked about estimating at the team level, whereas here I talk about the contractual level and how to arrive at more Agile, iterative working arrangements.
Agile team, same old contract
Traditional software contracts, particularly with external parties, are based on:
- Establishment of scope
- Estimated time to deliver that scope
- A price derived from that time + associated costs + profit margin
Many, if not most, of today’s software contracts are based on similar premises, even in supposedly “Agile” projects. In order to mitigate the risk of their deliverable running late and bumping up the cost, many customers demand fixed price contracts. Others demand that the supplier contractually fixes the delivery date to ensure meeting some obligation around the date and shy away from time-and-material engagements. Suppliers often like the fixed time approach as well because it creates predictability around cost. Fixed price contracts provide certainty around the project’s ROI, assuming it can be delivered at a low enough cost, and customers like to know how much they are spending.
There is nothing inherently wrong with any of these approaches or the reasons behind doing them. The problem lies in how we arrive at delivery dates and prices. In order for a contractual engagement between a supplier and customer to be worthwhile to the supplier it must deliver a positive return on investment. Usually this means that the money received from the customer for the supply of the product or service must exceed the money spent by the supplier providing it. So how do we balance that equation? Customers want certainty they will get what they want in the agreed timeframe and/or for the agreed price, while suppliers want to make sure they make a profit on the engagement. Seems simple enough. But what is missing from these scenarios? Even if both parties accept the well-understood iron triangle of time/cost, scope and quality, and that at least one of the three must be variable, is this enough on which to base a low risk and mutually valuable contract? I believe the answer is no, and not just because scope needs to be movable.
Quality is variable, not fixed
What?! Sounds controversial but I believe it to be true. In addition to the need for scope being variable, Agile folk also tend to talk about quality being fixed and uncompromising, meaning that time and cost can also be variable to deliver the best possible outcomes. Aside from the fact that leaving the cost and/or completion time of a project open is generally deemed an unacceptable way to conduct business, and likely why many businesses shy away from “Agile” contracts or working arrangements, I actually think it is un-Agile to fix quality. By this I’m not talking about code quality (the debate about what are bugs and acceptable levels of bugs in minimum viable and evolving products is for another blog post, another day). I mean quality in terms of what the customer defines as quality, and for me they are the only ones qualified to do so. IMO quality is an ever-changing variable in a project, just like scope. The difference is that the customer defines quality, either explicitly or implicitly, consciously or unconsciously. Scope, however, is defined by the supplier. Personally I think of quality in the context of products and services as:
“A subjective meeting of a need or requirement to the satisfaction or delight of the customer.”
If it is fair to say that what might delight a particular customer one day might not do so in 6 months’ time, and that what delights that customer right now may horrify another customer right now, I believe it is also fair to posit that quality ought not be fixed. I believe quality is what we should try and achieve, and it is what customers want, but we cannot fix what it means to achieve it. We will fail if we concentrate on time/cost and/or scope without making sure we are adjusting our delivery behaviour to suit the customer’s perception of quality. When we talk about projects being either “on track” or “off track” we always base it on our own interpretation of whether we are meeting the customer requirements. I believe the only way we can know if we are on or off track is by asking the customer. They are the ones who know what they want. And this will most likely change. And this is fine! Great, in fact! That’s why we’re being Agile, and why they signed an Agile contract, right?
Don’t deliver the requirements, deliver what the customer wants
Agile contracts – the reality
So what really is an Agile contract?
Fixed price contracts are fine. Fixed time contracts are fine. But here are the caveats:
- Do not fix time based on an estimate of cost because that inherently means you are agreeing to up-front scope detail that will likely bite you on the arse later and restrict the customer’s ability to request changes (and yours to welcome them) for their competitive advantage
- If the customer does not fully understand and embrace the inherent unpredictable, creative and innovative nature of quality software solutions then work with them at your peril
- If you don’t want to turn away work so you try and agree scope with the customer because “they insist”, and then base dates and times on estimates, do not pretend this is an Agile contract and make sure all parties understand the implications of this
- Know your costs by having a fixed team and determine a “final” delivery date, or allow the customer to determine it
- If the delivery date is acceptable to both supplier and customer then you now have a certain delivery date, no guesswork required; if the customer wants delivery sooner, reduce the price AND the expectation of quality
- When you purchase something more cheaply outside of software, e.g. a cheap old banger of a car, you can assume you will likely receive a lower level of quality – why is software any different?
- Negotiate a flexible, iterative, drip-funded contract that allows the customer to retreat early (either because they’re already happy with their product or because they’re not happy with the progress; if it’s the latter learn from their feedback, improve and move on)
- The aim is to delight the customer and make a profit so do not simply do what they ask you to do; they are buying your expertise and guidance for meeting their need, so don’t take this responsibility lightly and think you’re serving the customer simply by “delivering customer requirements”
- Deliver early and often (duh!); iterate, don’t just increment, and make this part of the working agreement
- If possible, give the customer a sense of the kind of outcome they can expect for varying prices and/or delivery times (based on previous work done by your company) and give them options to “upgrade” or “downgrade”
Remember we’re supposed to “welcome” change?
Yes, don’t try and fix scope. But be prepared to move around on quality also. Allow the customer to accept an earlier version of your product because it does the job and they’re delighted they don’t need to spend any more cash on achieving their desired outcome. Or to love their product so much that they now want to spend more enhancing it. This is variable quality, in my book. Variable scope refers to the cost-side of building software; the amount of work we need to do to reach a specified outcome. Variable quality refers to the value the customer feels they are getting. It’s subjective, dependent on the customer and their particular circumstances. Delivering high value outcomes to the customer may cost more than lower value outcomes or they may not, depending on what the customer feels about the iterative outcomes. That “old banger” that you bought for $1000 may actually provide very high value and quality to you personally. Or it may be housing a classic engine that you didn’t previously know about, giving it emergent value. To someone else it’s a worthless piece of junk.
In the same way software solutions, products and services are entirely subjective in their quality. Some people think Microsoft Word is awesome and feature-packed and they base their entire business operations around it. Some think it is terrible, buggy and doesn’t do anything they want it to do. Let’s not pretend that delivering “quality” software is a predictable outcome any more than fixed scope is.
Variable quality pertains to the wonderful opportunities we ought to have with Agile software development for correcting the course and building the right thing; truly welcoming and embracing change for the customer’s (and our) benefit. This is what Agile contracts should be about IMO. Remove the uncertainty of time and cost by making them certain, and celebrate with your customers or suppliers the uncertainty around exactly what will be built. Why not consider basing your contracts on a mantra more along the lines of:
“We guarantee we will work with our customers’ time and budget constraints to iteratively build and evolve a delightful outcome to an agreed level of expectation.”
This is the first in a series of essays exploring the huge topic of estimation within software development projects.
There are many different contexts in which estimates are given, and I am going to try and cover off as many as I can think of in these blogs, but the pattern of my argument will remain consistent: I believe we ought not make decisions in software projects based on estimates and that there are better alternatives for both the suppliers of software products (financially and ethically) and their customers (internal and external). Many of these alternatives are being used in real companies delivering to real customers with great effect.
Given the vastness of the topic, this post focuses purely on the scenario of one Scrum (or other method of iterative product development) team delivering a software product without estimating. Issues of scaling up or down capacity (adding or removing teams) will be covered in a later post about estimating at the portfolio level.
Will we deliver on time?
This is a question that often gets asked of a software development team at the beginning and throughout a project, and is a key reason why many believe we need to estimate. However, the ironic twist of seeking predictability by making predictions based on guesses is not lost on most people. We all know, or at least suspect, that we’re plucking numbers out of thin air. That we don’t yet know or understand the solution. Or the domain. We comfort ourselves by calling our guesses “educated” or “quick and dirty”, to justify our using them to make important business decisions.
Building software is by its very nature unpredictable and unrepetitive. While building software we cannot easily break down the work into same-sized, repeatable widgets like we can when manufacturing car parts. Unlike car production, the exact product we are building is unknown until we’ve built it, so how can we break the work down into smaller parts up front? One increment of software is not like the next. Software development is a creative, variable pursuit, and solutions are often revealed as we go along. For this reason, fixing scope in software projects is not really possible. Even if it were, it is becoming widely accepted that attempting to do so is undesirable because such an approach does not allow for (or, at least, does not embrace) emergent design, requirements, change and innovation. If we accept that scope is always variable, we must also accept that the delivery date may end up as a moving goalpost while we scamper to deliver what we think is fixed scope “on time” and “on budget”.
So, if it is true to say the concepts of “on time” and “on budget” are usually based on an estimate of how long it will take (and how much it will cost) to build software to meet a fixed set of requirements, rather than a concrete time or budget constraint, it is likely fair to say that we may take longer to deliver the software than we initially estimated. Yes, we may also be quicker than we thought. Or we may get our estimate just right. But, regardless of the outcome, does it actually matter how “correct” our estimates were? Does the act of estimating our work have any impact at all, positive or negative, on the delivery of great software or its return on investment?
Vision is key
To build software we need a clear vision and shared purpose of what success looks like. When commencing a potentially valuable software initiative we need well understood high level goals, not the detail of how we will achieve those goals. In true iterative fashion we can then align our just-in-time decisions about how we will improve the product in the next iteration (i.e. what we will build next, aka top items in the Product Backlog) with these goals. I posit that trying to estimate how long it will take to deliver software to achieve one or more high level goals, and then basing real decisions on this estimate, is a questionable approach. Don’t we want our solution and architecture to emerge? Don’t we want to welcome and embrace changes for the customer’s competitive advantage as the product evolves and becomes more real to the users? These are key principles in the Agile Manifesto and I believe they lie at the heart of a truly Agile approach to building software.
Remove the unknowns
Instead of depending on an accurate estimate for predictability we can take away the unknowns of cost and delivery date by making them… well, known. The Product Owner can fix the delivery date based on a concrete budgetary and/or time constraint (e.g. 3 days before the Australian Open starts is a concrete time constraint for the Australian Open app, and “we have to build something for $30,000” is a concrete budgetary constraint). Within that constraint the team can then fix incremental delivery dates (e.g. end of every Sprint) to allow focused effort on iterative product evolution (it’s not good to have priorities changing every day on a whim) and provide the opportunity to deliver early and/or under budget. This approach is also useful where there is no concrete budget or delivery date, although the need for interim release dates diminishes if the team (and organisation) is mature enough to have a continuous delivery model.
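With a fixed team, a concrete budget translates directly into a known number of Sprints, which is the whole point: cost becomes a constraint, not an estimate. A minimal sketch of that arithmetic, where the team cost per Sprint is a hypothetical figure of my own, not from the post:

```python
# Illustrative only: with a fixed team, a fixed budget buys a known
# number of fixed-length Sprints. The $6,000-per-Sprint team cost is a
# hypothetical assumption for the sake of the example.

def sprints_within_budget(budget: int, team_cost_per_sprint: int) -> int:
    """Whole Sprints affordable under a concrete budgetary constraint."""
    return budget // team_cost_per_sprint

# e.g. the $30,000 budget above, with a team costing $6,000 per 2-week Sprint
print(sprints_within_budget(30_000, 6_000))  # -> 5
```

The delivery date falls out the same way: 5 Sprints of 2 weeks is 10 weeks from kick-off, known on day one with no estimating at all.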
Estimating sprint velocity is waste
Rather than fix the solution up front (which is required in order to give a “how long” estimate), or make forecasts every Sprint about how many points or stories will get done, I believe teams ought to commit at the outset to building and delivering the best possible product by a given date and/or for a given amount of money. For me, release planning using, e.g., velocity (“how many points can we deliver by the release date?” or “what is our release date given our remaining scope and velocity?”) is contrary to an iterative approach (holistic, evolutionary improvement of the product) and is more in line with a purely incremental approach (delivering a pre-defined Product Backlog feature by feature).
When we estimate and use velocity as a planning tool we are making an assumption of how much can get done in a time period. For that information to be useful and meaningful we need to have an amount of stuff in mind that we want to deliver (i.e. a fully estimated Product Backlog). I don’t think it would be too controversial to suggest that all the time (and therefore $$$) spent on estimating backlog items that do not end up getting delivered is waste (at least in the Lean sense).
But what about all the time and $$$ spent on estimating backlog items that do get delivered? To answer that question, I will ask one more question: “Did the PO ever prioritise one story over another based on it having a lower estimated cost (story point size)?” If the answer to this question is “No” then I conclude that all estimating in this context was waste because no decision was made based on the estimates that were given (instead the PO simply prioritised the highest value stories). If, however, the answer is “Yes” then estimates controlled what I believe should be value-based decisions. Estimating a backlog up-front and then release planning using velocity is a cost-based approach. While costs are obviously important in running a software project and, indeed, a business, if decisions are made purely on cost then some of the great software we use and rely upon today (e.g. much of what is made by Google, Facebook, Apple, Yahoo, Spotify, etc.) would never have been built and we would have one explanation as to why there is so much crap, expensive, bloated software in the world.
Iterate, don’t estimate
I believe iterative (Agile) development is 100% about making decisions based on customer and/or business value, using empiricism over guesswork and fixing cost by having a fixed team (a la the Spotify “squad” model) with known timeframes (frequent, predictable release dates as opposed to “deadlines”, which are release dates for “fixed” scope based on imaginary constraints). Knowing our costs and delivery dates gives us certainty which allows us to embrace the delicious uncertainty of building great software.
btw – Having a fixed delivery date doesn’t mean that we will necessarily stop building our product on the delivery date. We may have already stopped or we may choose to continue. What it does mean is that we will continually make go/no-go decisions based on the emergent or potential value of what we are building rather than estimating the cost of a particular solution.
Shift focus to “small”
From the team’s point of view, I believe it is far more valuable to get better at breaking down stories JIT (and only JIT – any earlier is potentially wasteful) to be as small as possible (or, at least, as small as is practically possible) than to “increase velocity”. For me, a high-performing team has the ability to deliver frequent “done” increments to the product that elicit immediate feedback and/or potential value for those using it. Clearly the smaller the increments the more frequently delivery can happen, which leads to shorter feedback loops and increased learning and flexibility for the PO to prioritise emergent features over features she originally thought she wanted/needed that have diminished in value, or even to take a complete change in direction. This, in my opinion, is far more in tune with true business agility.
The importance of how many stories or points gets delivered in a Sprint becomes truly insignificant when the team is delivering frequent changes to the product and putting them in the hands of users. This, for me, is the crux of why software projects are trying to embrace an Agile approach. But until the estimation stops I believe we’re being held back from true high performance which can deliver awesome outcomes for customers.
- Splitting user stories by the quality of solution – Neil Killick
- What price estimation? – Neil Killick
- Should we estimate software projects… at all? – Neil Killick
- If you found estimates bring no value – what would you do? – Woody Zuill
- A thing I can estimate – Woody Zuill
- Can we code without estimates? – Woody Zuill
- Story points considered harmful or why? – Vasco Duarte
- Stop using story points – Joshua Kerievsky
I love this approach to splitting up user story value by considering vertical slices through the technical solution.
Iterative and incremental development is a tricky art to master. Delivering very small increments of value takes some practice. With iterative development we must be happy to frequently revisit areas of the system that we are building as we learn more about them, which is quite different from the traditional approach (broad and shallow engineering versus narrow and deep).
This is where I believe the Agile Manifesto authors were coming from when they spoke about “Simplicity, the art of maximising the amount of work not done”: implementing the simplest technical solution in order to deliver value quickly. It does not necessarily constitute the final solution, and it certainly does not mean “quick and dirty”. We still need code quality (unit/integration/acceptance tests), and the goal is to have a usable system. Something we ourselves would be happy to use and would be able to provide feedback on.
But for an individual user story we are simply trying to meet the goal of that story in the quickest and simplest way possible while providing an acceptable technical solution to meet that purpose. If the code is simple and maintainable we can easily build upon it if required, and the required architecture will evolve as we both iterate and increment.
So we want stories as small as possible (no more than a couple of days of work) and with the simplest acceptable solution under the covers. A good way of looking at it is “what’s the minimum amount of code I need to write to pass the acceptance tests?” (this approach of course leads naturally into the worlds of TDD and BDD, which I encourage you to read more about).
Working this way enables us to get early feedback on the feature and decide whether to invest more effort (via more stories) for that feature, thus allowing the flexibility for the product owner to prioritise a different area of the system if (s)he so wishes.
Some further reading about splitting user stories:
Have a great weekend everyone. Perhaps consider making the goal of your Sprint Planning meeting on Monday to split your stories down even smaller using some of the excellent techniques available. The benefits are numerous.
So you want your team to deliver software faster?
To demonstrate why this request is nonsensical, first imagine a mature, high-performing Agile team that delivers on average 10 stories of roughly the same size in every 2-week Sprint (i.e. 1 story per working day).
Now imagine we asked the team to take on just ONE story every Sprint. Their capacity is 10 stories, but we ask them to only deliver 1. What might happen?
Well, we can’t be sure, but it is fairly safe to assume that the 1 story is all but guaranteed to be delivered. We can also be pretty sure that it will be of an extremely high quality, given that the team are working well under capacity and so have plenty of time to dedicate to ensuring a bug-free and pleasant user experience. They may also spend extra time on exploratory testing, ensuring that the whole product, of which this story is a small part, is not hiding some ugly buggy behaviour. If they do find some bugs, they may fix them and add some tests to their regression suite to ensure the bugs don’t recur, increasing the holistic quality and maintainability of the system.
Given that the team knows they are an awesome, high performing team and they have plenty of time to spare in the Sprint, they will likely spend a large portion of their time not working at all. Having fun. Slacking off a little. Giving their brains time to breathe, to reset. Enhancing their team culture and spirit.
From a planning point of view, we may not have speed but we sure have predictability. We know that the team delivers 1 story every Sprint so we can very easily figure out when our product will be delivered with close to (if not exactly) 100% confidence.
OK, now let’s instead imagine we ask the team to deliver 2 stories per Sprint. It’s not too much of a stretch to assume we would get a similar result to the above, except this time some (albeit small) sacrifices will be made. Perhaps some of the extra, luxury activities will be left out. Perhaps all of the aforementioned activities will be done but with less time spent on them. So a little less story and product quality. A little less fun and recuperation time. A little less team building. While it’s highly likely that the team will deliver the 2 stories, the probability is slightly less than when we asked them to deliver 1 story. So we have a little less predictability.
What about if we extend this scenario to 5 stories? Then 8? Now imagine we’re struggling to hit a contractual deadline so we feel the need to “speed up”. So we ask the team that predictably delivers 10 stories to deliver 12 (now we’re over capacity). Or even 14?
Hopefully you can see where I’m going with this. The more stories we ask the team to deliver, the less time they can spend on quality, the more likely shortcuts will be taken, the more likely technical debt will be incurred, the more likely team culture and effectiveness will suffer, the less fun will be had, the more fried the team’s brains will be and the less predictable we will become at delivering software.
Read that again – the “faster” we ask (or worse, tell) our teams to go, the less predictable at delivering software we become, and that software is more likely to be of a lower quality. Allowing our teams to deliver at a constant, sustainable pace ensures quality, predictable software delivery, a higher chance of happy teams and happy customers, which leads to higher business value (e.g. profit).
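The thought experiment above can be sketched as a toy simulation. All the numbers here are hypothetical assumptions of mine: each story takes roughly a day (anywhere from half a day to a day and a half), and the team has 10 working days of capacity per 2-week Sprint. The point is the shape of the curve, not the exact figures:

```python
import random

# Toy Monte Carlo model: as the number of stories asked for approaches
# (and exceeds) the team's capacity, the probability of delivering them
# all collapses - i.e. "faster" means less predictable.
# Story effort of 0.5-1.5 days and 10 days of Sprint capacity are
# hypothetical assumptions for illustration only.

def completion_probability(stories, capacity_days=10, trials=10_000, seed=1):
    """Estimate the chance that all requested stories fit in one Sprint."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        effort = sum(rng.uniform(0.5, 1.5) for _ in range(stories))
        if effort <= capacity_days:
            successes += 1
    return successes / trials

for n in (1, 5, 10, 12, 14):
    print(f"{n:>2} stories asked for -> "
          f"~{completion_probability(n):.0%} chance of delivering them all")
```

Under these assumptions, 1 or 5 stories are effectively certain, 10 (full capacity) is a coin toss, and 12 or 14 are close to hopeless; asking for "more" bought almost no extra delivery, only lost predictability.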
In short, by allowing the team to find the right balance and deliver high quality software at their capacity, a cycle of success is created.
So, managers, please think twice before asking your teams to speed up, i.e. deliver more stories (or story points) than usual in a Sprint or sequence of Sprints. It’s like asking a marathon runner to start running faster after 32k for the final 10k – you’re increasing the chances of long term failure (not completing the marathon at all due to fatigue) for a potential short term gain (running some quicker kilometers).
If I want someone to, say, build me a website, in most cases there are two possible constraints I have. I either have a maximum amount I want (or have available) to spend, or I need my website delivered by a particular date. In a truly Agile project, both of these are the same for the supplier because there is a fixed team, i.e. time constraint = budgetary constraint.
Back to my requirements. Let’s say I have $5000 available. If I engage a web design company, I can choose to not tell them my constraint, perhaps because I want to save money and get the “best/cheapest quote”. I can simply ask “how much will my website cost, given that I want x, y and z?”
This is the predicament many software companies have – how do we determine a price for the customer? The answer is invariably to take the customer’s requirements, devise a solution and estimate how long that solution will take. This will then derive the cost to the company, which will determine the price to the customer.
As customers, let’s stop and think about this. Is this the approach I want the web design company to take? Does this provide the best possible value for me? When I engage the web company, would I rather the following:
A: Stay shy about my $5000 budget, and the company comes back and tells me they can build my site for $4500, having based that decision on a fixed design/solution and guess of how long that design will take to build. Perhaps they’ve actually shaved time from the team estimates in order to under-cut a competitor. Perhaps they’ve added on time as a “buffer”, increasing the price for me. We will sign a contract based on a SoW detailing what I will get for my money. If I want to change any of the detail as I start to see the website built I will need to pay extra or I will need to drop out some of the originally agreed features. These small increments will need to be costed accordingly, again based on a guess of how long the new feature will take compared to the original feature.
B: Reveal my budget. They come back and say that my $5000 buys 5 weeks of work, and the team will build the best possible website they can for that price. They might show me examples of other clients’ websites that cost around $5000 to give me an idea of the quality my website will be. They will work with me in weekly iterations to ensure I’m happy with the progress, can change things as we go along and that the key things that are important to me are always being built first. They will deploy my site to a demo URL daily so I can see the site evolve and provide feedback at any time. If after a week, or two weeks, or 3 weeks, I’m not happy with what is being produced I can choose to end the relationship. This makes it clear to me that the web company is absorbing much of my risk and that they are very confident they will do a great job for me. I as the customer am the one gauging the progress against my requirements rather than them estimating that they are “on track”. They want to form a working relationship with me in order to build the right thing, in the hope that they might get my repeat business and that I might recommend them to my friends and colleagues. Their mantra is to delight their customers.
Option A requires estimation (guessing/risk/uncertainty), upfront design and makes change hard. Option B requires no estimation, design can change and emerge as we go along, embraces changes as I see the site evolve and shows a company wanting to work closely with me to achieve a result I am delighted with. One that is prepared to front extra risk (of losing money on the contract) because they are so confident in the quality of work they do and of the relationships they form with their customers.
I know which I’d choose. How about you?
I find it curious when people criticise Scrum as if it is competing with Kanban. I don’t believe it is, and I don’t believe it is particularly worthwhile debating Scrum vs Kanban as two Agile methods because that’s not really the case. Kanban and Scrum have quite different purposes (although they do perhaps have similar intentions).
Put simply, the purpose of Kanban is to create a kaizen culture, one whose primary concern is that of learning, improvement and process evolution using “the scientific method”. Conversely, despite Scrum having lofty yet admirable aims of “changing the world of work”, the purpose of Scrum is to enable teams to develop products effectively. Scrum is generally a bottom-up, team-based approach and so, as the Kanban brigade rightly point out, it is not particularly (if at all) effective at instilling a kaizen culture (fortnightly team retrospectives, even done well, do not create a culture of continuous improvement in an organisation). It’s also not great as an enterprise solution to perceived effectiveness problems unless the organisation really understands the cultural implications of moving to Scrum across the board and has a collective mindset that can buy-in and adapt.
But here’s the rub. To me it’s not about whether an organisation should choose Scrum or Kanban – both are frameworks or methods for different contexts and different intended outcomes. Many companies have identified that they are crap at delivering software and want to get better at it. Rightly or wrongly, these companies are not seeking a kaizen culture. They simply want to deliver software better (by their terms), not improve their effectiveness overall. I am not saying this is a good thing but at least by choosing Scrum to (try and) improve their software delivery it might just get them thinking about the importance of learning and improvement to overall organisational effectiveness. I know from personal experience of coaching new Scrum teams (imposed or not) that they begin to get curious about Scrum and Agile, and then the curiosity spreads to Lean and Kanban. A good coach will introduce teams and their managers to Lean and Kanban concepts and techniques within Scrum (or evolving away from it as the team grows in confidence) as part of a drive for true self-management, measuring, learning and improvement. I have seen, and been part of, many Scrum-ban implementations. They may not have changed their companies for the better as a whole but they certainly helped those companies deliver software better, which is what Scrum ultimately is intended for.
As for the argument about Scrum prescribing roles, meetings and processes, I believe this is down to mindset. If rather than describing the Scrum framework by what it “prescribes” (I prefer the word “recommends” but I will continue to use the word “prescribes” because I see no harm in prescribing something within a framework that one chooses to use) we instead describe it by what it intends, Scrum is a framework for enabling teams to iterate over a product until the business or customer deems it valuable enough to ship. So, if you’re in a position where you want to develop a product iteratively (or at least incrementally) and want to put a team together to do that, Scrum is (potentially) an excellent choice. If you were to choose just Kanban for developing a product, which of course you could, then by default you will not be changing anything about the way you currently work. This is not necessarily a good thing.
For example, Kanban does not prescribe iterations but often Kanban implementations use some kind of iterative process (even if it’s just having a fortnightly review of the product) and teams do this for good reason. Sure, having iterations (Sprints in Scrum) doesn’t guarantee an iterative and incremental approach to building the product but it at least hints it might be a good idea. Even if you don’t fix your scope within the timebox it still makes sense to have (say) fortnightly demos and a chance for everyone to review and evaluate the product holistically. This is a sound and effective approach to software delivery, as borne out by the Agile Manifesto’s recommendation of measuring progress via working software and delivering value early and often.
Similarly, Kanban doesn’t prescribe cross-functional teams, so if you happen to have silos of developers, testers, designers, etc. working in a Waterfall fashion with hand-offs then you will continue to work in that way and not reap the benefits (at least early in the game) of forging collaborative relationships and working as a cross-functional team until such time as the kaizen to try this is agreed upon. This approach may be better in the long run in terms of organisational effectiveness, but in the short term it could be a slow path – too slow for the business to accept – to delivering shippable increments early and often and measuring progress with working software.
Being a framework, Scrum prescribes meetings and roles, but without them there is no guidance toward effective delivery of value early and often or the aim of breaking down complex problems by building an end-to-end shippable product in increments as a team – in other words, if you take these meetings and roles away it’s not really a framework, is it?! The meetings point out the importance of continuous business/customer feedback, prioritisation and trade-offs (as does the Product Backlog), just-in-time planning, correcting your course, team process improvements etc. The roles point out that there is conflict in the traditional Project Manager role between serving the team and serving the business, and that an iterative (Agile) approach to software development requires coaching at both the team and business level, hence the Scrum Master and Product Owner roles.
A product development framework without some semblance of structure renders it useless as a framework. If the framework is abused (as it often is, but this is not the fault of Scrum) then its effectiveness will be diminished or negated completely. But this does not mean that Kanban is better than Scrum for product development or that Scrum should not be used. In the right context and with the right mindset, Scrum can be extremely effective.
To be honest it all depends on context (as it always does) but, put simply, if an organisation wants to change in terms of improving software delivery, Scrum may well be more effective than Kanban. If an organisation recognises that it needs to embrace a kaizen culture, not just to be better at shipping software, then pure Kanban could be the way to go. But trashing Scrum because it is not always good as an enterprise solution (ironically it can be but doesn’t prescribe how to do this) or because it defines structure (which guides towards effective practices congruent with Agile) seems glib to me.
Scrum and Kanban are different approaches for different contexts but can work beautifully together in certain situations (generally product development in a team and company with the right mindset to be open to new, collaborative approaches to delivering value). One can evolve into the other, either way. They are both interesting and have noble principles. There is much to learn, and teach, in both.
What’s wrong with the Product Backlog?
Many companies and teams are using the idea of backlogs to help them evolve, visualise and order their portfolio of work. In terms of the work required to bring a particular product to fruition, the Product Backlog is often used in conjunction with an iterative development approach as an alternative to documenting a fixed set of requirements and a solution before development work is started.
However, the Product Backlog concept niggles me quite a bit and has actually proven in my experience to be a poisoned chalice in some respects. I actually now believe that constantly adding, removing and tailoring requirements (or stories, use cases, whatever) on the Product Backlog is (especially in the wrong hands) a fairly ineffective and costly approach to building software.
There are several reasons why I believe this to be so:
- It thwarts innovation
- It compromises the holistic vision of the product
- It creates a “requirements black hole”
- It causes a maintenance overhead (cost, inefficiency)
- Large queue = high cycle times
- It makes it difficult for the PO to understand dependencies
- It trivialises role of PO to one of ordering/prioritisation
A Product Backlog is supposed to be a list of things we might want in the product, ordered by value (value pertaining to importance, ROI or whatever the Product Owner deems to be worthy reasons to satisfy certain particular requirements as the next priority). However, what it often ends up becoming is a big long list of everything we (think we) need to build in the product. Aside from the fact it becomes increasingly difficult to maintain and make sense of this list, building the product becomes a ritual of ordering the backlog and the team building the top things from the backlog in iterations until the product is deemed ready to “go live”.
Thwarts innovation

A problem with this approach is the same problem that one has when building a product based on up-front specification documents – it does not promote innovation in the product’s evolution. If things are on the backlog then it seems a reasonable assumption that someone has put some thought and time into why that thing should be on the backlog, so there is a tendency (for the PO and team) to want to build the product “as is” and not upset the apple cart too much. In short, the backlog becomes nothing more than a list of up-front requirements which may as well be in a BRD.
A truly iterative approach to building software allows requirements, design and architectural improvements to emerge as we go along. This sometimes means scrapping the whole thing and starting again. If we simply “work from the backlog” we may not pay the necessary attention to determining how best to evolve the product and instead go for the easy option of simply churning out the stuff already on the backlog.
In Scrum, the Sprint Review is intended as a meeting to review the evolution of the product and how it should be taken forward in the next Sprint. Many companies instead have a “Showcase” to demonstrate what has been achieved in the last 2 weeks. This approach completely negates the importance of feedback and putting our heads together to determine the best bang for our buck over the next 2 weeks, i.e. “reviewing” the product.
Many companies plan 4, 5, 6 or more iterations in advance, lining up the “stories” to be done in those iterations and completely skip the innovation part.
Compromises holistic vision of product
For iterative development to work well we must continually evaluate the product as a whole, i.e. we must iterate and increment simultaneously. The Product Backlog does not promote this concept.
Again, there is a tendency when working with a list to just work through the list – to add purely incremental value rather than a holistic approach. This can lead to much re-work, delay and added cost, both from a product value and a technical/architectural point of view.
Creates a “requirements black hole”

The idea with the Product Backlog is that we can easily add new requirements to it and re-order things so that if a new opportunity emerges while we’re building the product we can easily prioritise that opportunity and deliver the value fast. In reality what happens is that stakeholders ask for features and the PO adds them to the backlog to keep them happy. This (rightly or wrongly) sets expectations. And with expectations come a whole barrage of politics. The problem here is that the PO can give no guarantees whatsoever that the feature being asked for will ever be built, i.e. the goal posts are moving. Thus the backlog becomes a “requirements black hole”. Do not under-estimate the negative effects of this in terms of trust among colleagues and meeting your goals.
A stakeholder once said to me “when I’m told my request is on the backlog I immediately know it will never be built”. This is often a reality, so is there a better way?
Maintenance overhead

Not only is the Product Backlog a potentially enormous list of stuff, it’s a list that needs to be constantly groomed, usually at least fortnightly, to ensure the highest value things are at the top. Whether you use a backlog management tool or index cards, this creates a significant maintenance overhead (inefficiency) for the PO and team (and potentially other stakeholders).
The backlog can quickly become the focus rather than the product itself, and as it continues to grow it becomes increasingly difficult to prioritise or focus on the highest value things to build.
Large queue = high cycle times
Every new requirement added to the Product Backlog increases the average cycle time to deliver functionality to the users. Having a large Product Backlog can add weeks, months or (dare I say it) years to cycle times. Is it particularly “agile” to tell a stakeholder that it will take 6 months to deliver a piece of functionality that in effort terms is only a 2-week piece of work? This situation can arise if you let your backlog get out of hand. All the dead wood requirements sitting down the bottom that everyone has forgotten about (but is afraid to delete) are preventing you from being responsive to the market or attacking new high value opportunities.
This potentially means those features that could give you competitive edge in the market will be scrapped for being deemed to take too long to deliver.
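The queueing effect here is just Little’s Law: average cycle time is queue size divided by throughput. A minimal sketch, with purely hypothetical numbers, makes the point concrete:

```python
# Little's Law applied to a Product Backlog: the average wait for a
# newly added item is queue size divided by throughput.
# All numbers here are hypothetical, purely for illustration.

def average_cycle_time_weeks(backlog_size, stories_per_sprint, sprint_weeks=2):
    """Average time (in weeks) before a newly added backlog item is delivered."""
    sprints_to_drain = backlog_size / stories_per_sprint
    return sprints_to_drain * sprint_weeks

# A 150-item backlog with a team delivering 5 stories per 2-week sprint:
# the "2-week piece of work" at the bottom waits about 60 weeks.
print(average_cycle_time_weeks(150, 5))  # → 60.0
```

Halve the backlog and you halve the average wait, without the team delivering any faster.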
A large Product Backlog inevitably creates dependencies among items. Innocently adding a requirement to the backlog can eventuate in a cascade of dependencies that can add months to a project. By glancing at the backlog, are these dependencies transparent? Generally, no. They are invisible to the naked eye and thus have far reaching implications for the PO when trying to effectively order the items on the backlog. It can be extremely frustrating for a PO when the highest value items – requirements that they have taken time, negotiation and effort to prioritise and move to the top of the list – move down the list because of technical or other dependencies.
Taking a more holistic approach to the product makes it easier to dissolve these dependencies.
Trivialises role of PO
The larger the Product Backlog, the more time the PO will need to spend ordering it. This means more prioritisation sessions, more cost-benefit analysis, more workshopping, more estimating in order to determine size for ROI purposes. Little wonder that Product Managers are reluctant to take on the Product Owner role.
The Product Backlog can potentially trivialise the PO role to one of ordering and prioritisation of work rather than concentrating on building the best possible product with which to penetrate the market or increase the value of the business.
So, what’s the alternative?
To my mind, and in my experience, the important things about a product rise to the surface if you are doing proper Just-In-Time planning. By using the Sprint Review and the Sprint Planning meetings properly, the team and PO can properly gauge the evolution of the product and what direction it needs to take next. Why is a Product Backlog required for this? If you can’t remember what needs to be done, it’s not important. If you can remember what needs to be done, you don’t need it on the backlog!
I have found an evolving product roadmap can much more effectively align stakeholder expectations with what’s actually being built. A roadmap is very clear, easily interpreted and gives interested parties the information they crave. In the Sprint Planning meetings, why not ask yourself “How should we take this product forward in the next 2 weeks, and what can we realistically achieve?”. This focuses everyone on what is achievable which helps with simplicity of design as well as focus on value. Then update the roadmap with the new or changed high level ideas emerging from this planning session, and the rough delivery timeframes. It is a mistake to just focus on the next increment of the product in each Sprint Planning meeting. Each iteration should be an opportunity to re-align everyone with the product vision and what the best approach for the next 2 weeks should be.
A Product Backlog done well should paint a picture of the product. It should tell the story of what you aim to achieve. You should be able to show the Product Backlog to someone completely uninvolved and they can gauge exactly what the purpose and vision of the product is. What innate user need it is meeting. The “why” of product development.
If your backlog is simply a long list of stuff that will most likely never be done, perhaps you can look at an alternative approach?
“Belief, a psychological state in which an individual holds a proposition or premise to be true”
I was pondering this morning about the difference between Learning, Believing and Knowing. The differences may seem obvious but I’d like to explore whether the following is true:
- Does learning lead to knowing or merely to believing?
- What constitutes knowing something?
- If a fact requires experience to confirm it, what if we have no experience of the subject of the fact?
We say things like “you learn something new every day!” but how much of the stuff that is absorbed into our brains on a daily basis is actually learning? Since I started using Twitter a couple of years ago I feel that I have learned very much from many people on many subjects. Similarly, as I read blogs, articles and books and talk to people I feel I am learning more and more. But what do we mean when we say we are learning? Do we mean that we are acquiring new facts (or believe we are) or are we merely merging what we are being told and what we have seen and read into our own opinions and views of what we know?
Does Peru exist?
This seems a silly question but I am using it to make an important distinction between knowledge and belief. Of course the answer to this should be a unanimous “yes”. But why am I so sure that Peru exists? I have never been there. I can’t remember talking to anyone who says they have been there. The reason I know it exists is that there is overwhelming evidence to its existence that I have observed. I have seen pictures (claiming to be) taken in Peru. I have seen video footage (supposedly) shot in Peru. I’ve seen (what I’m told is) Peru on satellite images of the Earth. It is a “fact”. Right?
“A fact (derived from the Latin factum, see below) is something that has really occurred or is actually the case. The usual test for a statement of fact is verifiability, that is whether it can be proven to correspond to experience.”
Hang on, so I can only verify that Peru’s existence is a fact if it has been proven to correspond to experience? Well I have no experience of Peru, other than the pictures, video, etc. that I’ve seen, so until I’ve actually got on a plane and gone to Peru can I be absolutely 100% sure it exists? If I’m really pushed, might my confidence level be only 99.9999999%? I’m relying on other people’s proof and experience to be so sure that Peru exists. Rather like we rely on scientific understanding of the world to establish facts that would be impossible for us individually to verify (like gravity) and reject information that is not established as fact (like the existence of a higher being, intelligent design, etc.).
I don’t remember the instant when I first heard there was a country called Peru. Let’s assume as a child I heard someone mention it and I asked my parents “What’s Peru?”, to which my Dad answered “It’s a country in South America”. Now, my question here is: at the point my Dad told me of Peru’s existence as a country in South America, did I learn that Peru exists or did I simply begin to believe that Peru exists? I was a child so I was also told of Santa Claus and the Tooth Fairy’s existence. What made Peru’s existence more real to me?
Do I know anything?
To give a current, grown up example, I follow a gentleman on Twitter called Bob Marshall (@flowchainsensei) who, among his other achievements, created the Marshall Model of Organisational Evolution. In Bob’s words:
“Simply put, the Model explains how the effectiveness of any knowledge-work organisation is a direct function of the kind of mindset shared collectively by all the folks working in the organisation – managers, executives and employees, all.
effectiveness = f(mindset)”
Since I first learned of the Marshall Model’s existence (I observed it personally, and so can you with the link above, so can verify as a fact that the Marshall Model exists), I have read more about it, interacted with Bob on Twitter and blog posts and from all this have gleaned a genuine interest in organisational effectiveness (thanks Bob, if you’re reading this).
What’s also interesting to me though is how I have embraced the rightshifting concept to a point that I tell others about it. I now know not only about its existence but also what it tells us about organisations. Or do I? Bob came up with the model and so obviously believes, knows it to be a true reflection of organisational effectiveness. But when I read more and talked to Bob about it, did I learn more about the model or did I merely start believing more in the model? Do I now know that effectiveness is a function of mindset, do I merely believe it, or have I simply learned that someone else believes or knows it?
Throughout my career I have always felt that there are certain types of organisation when it comes to culture and how they get things done, and that I certainly prosper more readily in, to use Bob’s model, the more rightshifted organisations. So is there a chance that when I saw the Marshall Model my cognitive bias leaned me towards its principles and helped me embrace it as observable and true? Or do I actually have evidence that the model is true, and thus have I learned the model’s effects as fact?
My cognitive bias also leaned me towards Agile because the values and principles align with me as a human being. One might call this “mindset“. I coach Agile principles and practices and have observed certain behaviours causing certain results, some repeatedly. But all of my experiences and what I constitute as knowledge is all based on my own view of the work and the world. Without continued learning on everything I think I know about, even things I consider myself an “expert” in, I cannot be sure that I actually know enough, or will ever. For all I know, everyone else I encounter might think I’m a complete duffer when it comes to product development even though I think I’m quite good at it!
Learn to learn
We all use our knowledge every day in our work and our personal lives. I do think though that it’s very important to acknowledge that much of what we think we know may actually just be things we believe and have never actually verified to be fact.
This is one of the many reasons why learning is the key word of the three used in the title of this post. We cannot know, or even believe in, something until we have learned about it. I learned about God as a child and started to believe in Him. I learned about Santa Claus and believed in Him too. But I never really knew that either existed. I certainly thought I knew (presents arrived on Christmas Day), but I didn’t. Unless we recognise that we must learn how to learn, then continue to learn daily, infinitely, we cannot purport to truly know anything.
What do you think you know?
After a year or two of “having a hunch” about this, and after many years of either estimating work or working to someone else’s estimates, I’ve now finally come to the conclusion that the use of estimation of any kind in a project is not only a waste of time but is actually destructive.
I am fully aware this is an extremely controversial statement, so I am going to be as thorough as I can in explaining how I came to this conclusion via experience, data and validation. Indeed, when I read Vasco Duarte’s post about this several months ago, I saw his “point” (no pun intended) but also argued the merits of using story point estimation for the purposes of:
- Up-front sizing of a project to determine its validity within a given budget or timeframe
- Increasing shared understanding and knowledge within the team based on the discussions that arise from a Planning Poker session
- Allowing the PO to make trade-off decisions between different sized stories (based on ROI)
- Measuring team velocity
- Continually validating the initial project sizing by predicting scope-fit within a given release date
- Allowing the team to measure and improve its performance
Why shouldn’t we estimate?
I have since come to the conclusion that some of these things do not need to be done at all, and the other things can be done without the need for estimating (guesswork) of any kind. I would now additionally argue that even if you acknowledge the shortcomings of estimation and use ranges, account for uncertainty, etc., the act of estimation in itself is destructive for the following reasons:
- “Fixed” scope project delivery expectations are often (always?) based on an up-front estimate of scope (guess) and how long that scope will take to be delivered (another guess), leading to the obvious dysfunctions like death-marches, low quality, etc.
If the budget is fixed, there is no way of going “over budget” in order to deliver the fixed scope. Yet “over budget” is a common term used when describing failed projects. If your budget is truly a constraint then you will only deliver what can be delivered. Agile methods ensure that what you deliver is of the highest value to the business.
I chatted to a team member earlier and he complained of feeling pressure to increase velocity. I asked him where this pressure was coming from and he said that it stemmed from the concern that the project will fail if the team isn’t able to deliver more stories more quickly. No one is actually specifically asking the team to deliver more, but there is an implied pressure to do so because they are aware the budget is running out. This mindset comes from years of poorly funded, gated projects, death marches, focus on productivity rather than quality and canned or failed projects.
- Asking teams to estimate how long their work will take (or how many points they will deliver in a Sprint or a Release, same thing) has connotations that their output is being measured by an external party (manager), creating an environment of fear and massaging figures to reflect what is desired rather than what is predicted
To increase velocity the team simply needs to over-estimate stories to give the illusion of delivering more. They may not consciously do this but it may happen sub-consciously. The project manager pats them on the back, but all that has happened is the same amount of “done” working software has been delivered.
It’s time to get real and use real data to reflect real progress, whether it’s good news or bad.
- We shouldn’t be defining all our scope up front, meaning we shouldn’t estimate all our scope up front, meaning we shouldn’t be defining our delivery date based on our scope
We should be fixing our Release 1 delivery date and aiming to build the best possible product by that date (variable scope).
As soon as we introduce the word “estimation”, the default mindset is to consider “how long will this project take?” (if this isn’t asked explicitly). This causes us to consider the complete scope and duration of the project (this is anti-Agile and I won’t go into why it’s a bad idea because enough has been written about that already elsewhere).
How do we size a project?
Short answer – you shouldn’t. If you don’t have a firm deadline for your project (e.g. day 1 of the Grand Prix for a Grand Prix app), you will have a budget for your project (set by the PMO or the external customer), from which you can derive a deadline. The smart thing to do is to then plan an interim release (say at the halfway point) where you can gauge how the project is going based on the working software measure.
For example, if your budget gives you enough cash for ten 2-week Sprints (given a fixed, 100% allocated team), clearly you need to assume that your go-live date is in 20 weeks time. But the aim should be to get working software in a production environment in 2 weeks time (after Sprint 1). You should then iterate over the product, allowing requirements (scope) to emerge and shape the direction the product takes, and take time to reassess after Sprint 5.
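Deriving the go-live date from the budget is simple arithmetic. A sketch of the example above, where the budget and per-sprint team cost are assumed figures for illustration:

```python
from datetime import date, timedelta

def derive_deadline(start, budget, cost_per_sprint, sprint_weeks=2):
    """Number of sprints the budget covers, and the resulting go-live date."""
    sprints = budget // cost_per_sprint        # whole sprints the money buys
    return sprints, start + timedelta(weeks=sprints * sprint_weeks)

# A hypothetical 200k budget at 20k per sprint buys ten 2-week sprints,
# so go-live is 20 weeks after the start date.
sprints, deadline = derive_deadline(date(2013, 1, 7), 200_000, 20_000)
print(sprints, deadline)  # → 10 2013-05-27
```

The deadline falls out of the budget; it is the scope, not the date, that stays variable.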
These things are not predictable up front – estimation will set you up with a load of scope (expectations) that will not get delivered and will only create unnecessary analysis time (money) and pressure.
How does the team get shared understanding of a story?
Simple. When a new item is added to the top of the product backlog, the team will discuss it in Sprint Planning and break it down if necessary. If it doesn’t need breaking down then it is likely already well understood. If it does then the act of breaking it down will necessitate conversations around the implementation detail that will facilitate shared understanding.
In short, the team does not need to be in an estimation session to discuss and break down a story.
How can the PO make trade-off decisions?
The PO probably needs to know the ROI of a story when introducing it to the team to be delivered. In order to calculate the ROI she needs to know how much it will cost to be delivered (how long).
Here a team would estimate the item using story points and then the PO, armed with the team’s velocity, can estimate the item’s ROI. But without story points how can this be done?
This is where the concept of “implicit estimation” comes into play. In order to create predictability in the flow of work, the team will break down stories just-in-time (in Sprint Planning) so that they are all roughly the same size. This is something that happens naturally throughout the course of the project. Over time the size of stories normalises because the team naturally wants bite-size chunks to work on in the short time period of the Sprint. They get used to delivering a certain number of stories, give or take, in a Sprint.
So for the PO to cost the item, she just needs to ask the team if it is understood or needs breaking down. If the PO considers it high enough priority she will want to introduce it in Sprint Planning so that it gets built right away, if it makes sense to do so. Sprint Planning is the place for the team to break down the story if required and decide if it can be delivered in the Sprint. If it can, the cost of the item is essentially 2 weeks of team wages (assuming production deployment is done at the end of the Sprint – a continuous delivery model can improve speed to market and ROI, but that’s a discussion for another day).
If the item can’t be delivered in the Sprint, the PO can simply look at how many stories have been spawned from the epic item and determine the likelihood of it being delivered in the next Sprint or the Sprint after, based on how many stories the team usually gets through. This leads me nicely on to the topic of how we measure velocity in the absence of story points.
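Under this “implicit estimation” approach, costing an epic reduces to counting the stories it breaks down into. A minimal sketch, where the story counts and wage figure are hypothetical:

```python
import math

def epic_cost(stories_in_epic, stories_per_sprint, team_cost_per_sprint):
    """Rough sprint count and cost for an epic broken into same-sized stories."""
    sprints_needed = math.ceil(stories_in_epic / stories_per_sprint)
    return sprints_needed, sprints_needed * team_cost_per_sprint

# An epic that spawns 8 stories, for a team that usually gets through 5
# stories a sprint, at a hypothetical 20k per sprint in wages:
print(epic_cost(8, 5, 20_000))  # → (2, 40000)
```

No points, no Planning Poker: just the story count and the team’s wage bill.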
How do we measure velocity?
Now I’m moving firmly into Duarte territory. The answer is we count stories rather than accumulate story points, hence negating the need to estimate. As I mentioned before, teams break stories down into roughly the same size, so counting how many stories are delivered in each Sprint makes for a satisfactory measure of velocity. If the team usually delivers 5 stories with zero defects and then one Sprint delivers 6 or 7 stories with zero defects, an improvement has been made (disregarding variance, which exists whatever unit you use to measure velocity).
Due to the hunch I mentioned earlier, I have been tracking velocity as both story count and points for my current team and making projections using both methods. As I suspected (and as Duarte points out with much supporting data), story count provides just as good, if not better a measure of progress and predictability as story points do. Therefore why spend all the time, cost and effort on estimation sessions and velocity calculations?
While story count works great for velocity, I would still warn against using this or any other velocity measure as a way of predicting when you can deliver. You should know when you are delivering and only be predicting what you can deliver at that date. Don’t leave your delivery date to chance, even if you are using historical data rather than guesswork to predict how many stories can be done.
What you can do, however, is use velocity to help the PO understand scoping trade-offs in the backlog (“the data tells me the team can deliver 20 more stories before the release date, so I’ll make sure the most important 20 are at the top of the backlog“).
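That trade-off calculation can be sketched directly from historical story counts, with no points involved. The history and sprint numbers here are illustrative:

```python
def stories_deliverable(history, sprints_remaining):
    """How many stories past throughput suggests the team can still deliver."""
    velocity = sum(history) / len(history)     # average stories per sprint
    return int(velocity * sprints_remaining)

# Five sprints of history and four sprints left before the release date:
# the PO should make sure the most important ~20 stories are at the top.
history = [5, 6, 4, 5, 5]
print(stories_deliverable(history, 4))  # → 20
```

The data does the predicting; the PO’s job stays focused on ordering the most valuable stories above the cut line.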
It’s taken me several years to come to this conclusion. But, if you think about it, people laugh and joke about estimates all the time. Everyone knows they’re a guess. Everyone knows they’re wrong. Yet we continue to do them. I believe it is time for us to acknowledge that it makes far more sense to eliminate the risk and cost of estimation completely and use only empirical data (as Agile and Scrum promote) to make predictions.
In a world without estimation overhead the team is likely to be happier and more productive, the inefficiency of spending time on estimating rather than delivering working software is eliminated, and the PO will have real data with which to make decisions rather than guesses made under pressure.
- Don’t estimate your delivery date – base it on your budget or a firm deadline
- Don’t estimate your scope – allow it to emerge in order to reap the benefits of building products with agility
- Don’t explicitly estimate product backlog items (stories)
- Use historical data (story count) to predict scope delivery on a given date
- Use just-in-time implicit estimation (story breakdown in Sprint Planning) and past data to estimate cost (ROI) of story delivery
I don’t like to guess, but I predict that not estimating your projects will make success far more probable!
I’ve just watched a presentation that’s made me so angry it’s prompted me to write my first blog post in ages! Sorry I’ve been away so long!
I’m not a fan of the “Scaled Agile Framework”, to say the least. Dean Leffingwell is in on this, a man with whom I generally find myself agreeing. However this framework is a horrible, money-making bastardisation of Scrum, Agile and Waterfall, a Frankenstein’s monster that is being sold to large companies who are too afraid to really change and just want to increase productivity, reduce defect counts, etc. and find a place in the “Agile” world for their managers.
The whole concept of iterating over a product rather than simply incrementing features is fundamental to Agile and Scrum but completely bypassed with this framework. Continuous delivery in order to tap into the market as early as possible and adapt the product is ignored (instead a 2-day release plan meeting is held in which all the features the PM wants done in the next 10 weeks are broken down into user stories and put into Sprints – yuk).
There is even a “hardening Sprint” which is a fancy term for a 2-week phase for bug-fixing and deployment activities because companies “really need it” (read it’s too hard to truly get things “done done” so we’ll leave time for it at the end – of course “the end” is a deadline date based on an estimation of how long all the features will take to build – i.e. guesswork around fixed requirements – ring any bells?). Yuk yuk yuk!
Scrum scales perfectly well without this framework, thank you very much! Each product has a backlog, which is derived from an overall program backlog at the portfolio level. Each product has 1 to many synchronised teams – done! Why synchronise the whole frigging organisation’s product development?! Yeah, like that will work. It means no one team can adapt its process because it’s locked in to the organisation’s “Agile” framework.
Scrum-at-scale is far better because it holds true to the founding principles of Agile and Scrum but also allows hundreds of people to work together towards a common goal. If the business needs to change program priorities then it can, because it is doing Scrum! Simply cease work (if required) on the product or work stream that is being moved down the backlog at the end of the next Sprint and start the team (or a different team) on the new product.
Rant over – for now! Be interested to hear what others think.