Life, Waterfalls and Plans



My family and I awoke to yet another beautiful morning. Lake Tahoe glistened an almost unrealistic blue across the way from our hotel room balcony.

We only had a few days to enjoy the serene beauty of this place, so we made a plan. There were particular places we wanted to visit, and activities we wanted to make sure we had time to do before we left, so we felt it was important to plan how we would spend each remaining morning and afternoon of our trip.

This particular afternoon’s plan was to visit Emerald Bay, a beautiful piece of beach a few miles round the Lake from our hotel, and to rent a couple of kayaks. Before that we wanted to spend some time swimming in the hotel pool with the kids in the morning, followed by watching the first semi final of the World Cup over lunch in a nearby restaurant (the game turned out to be Brazil’s unexpected mauling from soon-to-be-crowned-champions Germany).

After the game we jumped in the car and headed to Emerald Bay. We pulled up at the parking lot at around 4.10pm. For some reason our 6-year-old son Thomas was concerned that the kayak rentals would have closed at 4pm and there would be none available. We reassured him that there was no reason to worry. However, just to be sure we decided to check before we embarked on the one-mile walk down to the beach, so my wife Kathleen called the company.

Lo and behold, the kayak rentals closed at 4pm. My son was devastated, and Kathleen and I were a little disappointed too. We comforted Thomas, assured him we would come back in a couple of days, and started driving back to the hotel.

On the way back, we noticed a sign for Eagle Falls. Kathleen mentioned that she had read it was a nice little hike and might be worth doing, so we decided to stop. We started the rocky walk down towards the distant sound of gushing water. The warm sun beat down on us as the sound got gradually louder.

We eventually made it to the falls. Small by the standards of some of the world's more spectacular falls, but an extremely beautiful little spot. We perched on rocks and dangled our feet into the fresh, streaming water. I cupped my hands and drank from the pool. A perfect drink on this warm day, after a tricky walk/climb down through uneven boulders, rocks and terrain to where we now sat.

I gazed across at nature's pool. A mini-lake of sorts, with a waterfall delivering gallon upon gallon of crystal clear liquid into its heart. I thought to myself how wonderful it would be to take off my shirt and swim in this small piece of paradise, but talked myself out of it because there was a group of women sitting on other rocks close by. Embarking on this adventure would also require some nifty manoeuvring over rocks below the surface of the water to get to the deeper part of the pool, in which swimming would be possible.

I then started to ponder the circumstance I was in, and how I never would have come to this place had our day gone to plan. We would likely have had a beautiful kayak across the glistening lake, and a lovely afternoon. But we would not have been at these falls. At this natural pool. I would not have had the opportunity to swim in this tantalising water from which I had drunk, satisfying my parched lips and my thirst for a special moment.

I looked over at Kathleen and said “I’m going for a swim”. I realised that these opportunities do not come along very often. The weather was perfect, the water was perfect, and the moment was perfect. I seized it. Thomas and my 4-year-old daughter Anya wanted to come, so we started navigating the rocks toward the pool, holding hands. I had never seen Anya so confident on such a tricky surface. She looked like a natural in the stony environment.

We finally made it to somewhere we could stop and leave our hats and flip-flops, and I plunged into the pure, rippling water. The sunlight majestically twinkled on the perfect surface, and I glided across the pool, happy as a newborn duckling on its first swim. Thomas and Anya watched from the side. The group of women, and a few other people who had arrived since, smiled over at me and waved. Kathleen took some photos, and the kids giggled with delight. I was in a little piece of heaven for those 15-20 minutes. I felt cleansed, happy and alive.

John Lennon sang the words "Life is what happens to you while you're busy making other plans". Even so, wonderful things happen to us that were planned. Those fantastic holidays. Our wedding day. The birth of our children.

But we must also remember that amazing things happen that weren’t planned. In fact, some of the most memorable days in my life were not planned. The day at Eagle Falls. That day in Vanuatu when we stumbled accidentally across the most beautiful beach we’ve ever been to. That day I bumped into an old friend and ended up spending the day laughing with them rather than doing the things I was intending to do.


Let's make sure we allow plenty of time in our lives to head off the beaten track. To drive the open road. To immerse ourselves in the uncertain. Maybe, just maybe, something truly wonderful awaits us.


My Slicing Heuristic Concept Explained


This is a concept I devised a couple of years ago, and it seems there is a new #NoEstimates audience that would like to know more about it.

A Slicing Heuristic is essentially:

An explicit policy that describes how to "slice" work Just-In-Time to help us create consistency, a shared language for work and better predictability.

Crucially, the heuristic also describes success criteria to ensure it is achieving the level of predictability we require.

The Slicing Heuristic is intended to replace deterministic estimation rituals by incorporating empirical measurement of actual cycle times for the various types of work in your software delivery lifecycle. It is most effective when used for all levels of work, but can certainly be used for individual work types. For a team dabbling in #NoEstimates, a User Story heuristic can be an extremely effective way of providing empirical forecasts without the need for estimating how long individual stories will take.

However, if you are able to incorporate this concept from the portfolio level down, the idea is that you define each work type (e.g. Program, Project, Feature, User Story, etc.) along with a Slicing Heuristic, which forms part of that work type’s Definition of Ready.

For example,

"A feature ready to be worked on must consist of no more than 4 groomed user stories"

or

 “A user story ready to be worked on must have only one acceptance test”.

The success criteria will describe the appropriate level of granularity for the work type. For example, you might want user stories to take no more than 3 days, and features no more than 2 weeks.

Here is the really important part. The idea is not to slice work until you estimate it will take that long. You never explicitly estimate the work using the Slicing Heuristic. Instead, as the work gets completed across the various work types you use the heuristic(s) to measure the actual cycle times, and then inspect and adapt the heuristic(s) if required.
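
To make that concrete, here is a minimal sketch of the measurement side. The story identifiers, dates and the 3-day success criterion are hypothetical, chosen purely for illustration; nothing here is prescribed by the heuristic itself, it is just one way to check it:

```python
from datetime import date
from statistics import mean

# Hypothetical completed stories: (id, started, finished) -- made-up data
completed_stories = [
    ("S-101", date(2014, 7, 1), date(2014, 7, 3)),
    ("S-102", date(2014, 7, 2), date(2014, 7, 4)),
    ("S-103", date(2014, 7, 3), date(2014, 7, 10)),  # an outlier
]

# Assumed success criterion for the user story heuristic: 3 days or less
MAX_STORY_CYCLE_TIME_DAYS = 3

def cycle_time_days(started, finished):
    """Elapsed calendar days from starting to finishing a work item."""
    return (finished - started).days

cycle_times = [cycle_time_days(s, f) for _, s, f in completed_stories]
outliers = [sid for sid, s, f in completed_stories
            if cycle_time_days(s, f) > MAX_STORY_CYCLE_TIME_DAYS]

print(f"Average story cycle time: {mean(cycle_times):.1f} days")
print(f"Stories exceeding the {MAX_STORY_CYCLE_TIME_DAYS}-day criterion: {outliers}")

# If outliers (or the average) regularly breach the criterion, that is the
# signal to adapt the heuristic -- not to estimate harder.
```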

At the user story level, I've found the "1 acceptance test" heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less. However, there are alternatives. Instead of acceptance tests you could use, for example, the number of tasks:

 "A user story must have no more than 6 tasks".

Here is an example Slicing Heuristic scenario for a Scrum team using the feature and user story heuristics described above:

  • Product Owner prioritises a feature that she wants worked on in the next Sprint
  • PO slices feature into user stories
  • If the feature contains more than 4 stories, it is sliced into 2 or more features
  • PO keeps slicing until she has features consisting of no more than 4 user stories; they are now ready to be presented to the team
    Note: Unless this is the very first feature the team is developing, the PO now has an estimate of how long the feature(s) will take, based on historical cycle time data for the feature work type; no need to ask the team how long it will take (a rough forecasting sketch follows this list)
  • In Sprint Planning, team creates acceptance tests for each user story
  • If there is more than 1 acceptance test, the story is sliced into 2 or more stories
  • Team keeps slicing until all stories consist of only one acceptance test
    Note: The PO now has an even more reliable forecast of when the feature(s) will be delivered because she can now use the user story cycle time data in conjunction with the feature data
  • Team delivers each story, and records its cycle time in a control chart
  • If a story is taking longer than 3 days, it is flagged for conversation in Daily Standup
  • Multiple outliers are a sign that the heuristic should be adapted in the Sprint Retrospective
  • When the feature is delivered, its cycle time is measured also
  • Again, if features are taking longer than is acceptable for the heuristic, the heuristic should be adapted to improve predictability (e.g. reduce maximum number of user stories per feature to 3)
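
As a companion to the scenario above, here is a rough sketch of the kind of forecast the PO could produce from historical feature cycle times without asking the team to estimate. It assumes hypothetical numbers, sequential delivery of the remaining features, and a simple Monte Carlo resampling approach; it is one possible way to do this, not the only one:

```python
import random

# Historical feature cycle times in working days (hypothetical numbers,
# recorded as features of this work type were actually delivered)
historical_feature_cycle_times = [6, 8, 5, 9, 7, 12, 6, 8]

remaining_features = 4     # features still to deliver (hypothetical)
simulations = 10_000

def simulate_total_days():
    """One simulation: resample a historical cycle time for each remaining
    feature. Assumes the features are delivered one after another."""
    return sum(random.choice(historical_feature_cycle_times)
               for _ in range(remaining_features))

totals = sorted(simulate_total_days() for _ in range(simulations))

# Report a probabilistic range rather than a single-point estimate
p50 = totals[int(simulations * 0.50)]
p85 = totals[int(simulations * 0.85)]
print(f"50% likely within {p50} working days, 85% likely within {p85}")
```

The forecast comes with an explicit probability attached, and it improves on its own as more completed features feed the historical data.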

9 Ways Customer Value Is Not Killing Your Business


The title is somewhat parodic. I hope it is taken in the intended spirit :)

The reason for the choice of title is that this post is a response to Bob Marshall (@flowchainsensei)’s “Eight Ways Customer Value Is Killing Your Business“, which itself was a rejoinder (at least in part) to a tweet I made the other day:

"Everything starts w/ understanding our customers & their problems. Economic value for an org is created by focusing on this simple premise."

~ Neil Killick (@neil_killick)

I will address some of Bob’s comments directly:

"A blind faith in the idea of “customer value” can cause many more problems than it solves."

Not sure where this quote is from, but I see it as a straw man statement. I don’t think blind faith in an idea solves any problems at all, let alone fewer than those it causes. Blind faith in any idea is damaging, and silly. So let’s move on from this.

"Some folks seem to believe in “customer value” like it was the New Church. The idea appears to have transcended logical enquiry and consideration, and become some kind of sacred cow. So be it. I do not subscribe. I guess that makes me an apostate."

These comments seem needlessly disparaging towards people who dare to put the customer at the forefront of what they do in their work. Even if a focus on customer value were somehow economically unsound, or otherwise ineffective, it is at least a noble focus, given some of the alternatives (profit, efficiency, utilisation, world domination, etc.).

Bob continues:

My view? I see organisations that focus on customer value putting their business in jeopardy. Of course, there are numerous other ways to do that, too. But this particular path seems deeply ironic, given the number of self-styled experts who hail “customer value” as the salvation of business.

As does the scornful tone. Who are these jeopardous, ineffective companies focusing on customer value? Who are these “self-styled experts” teaching business owners how to save their business by turning their attention to, god forbid, their customers?

So, here are eight ways in which an incautious and credulous emphasis on “customer value” can undermine business success:
1. If you don't mean it

Well, yes, of course. An “incautious and credulous” emphasis on anything can undermine business success.

Further, if we say one thing and do another, a focus on the first thing isn't actually happening. Is Bob implying here that by not meaning the thing we say we're focusing on, we're focusing on the wrong thing?

I’m pretty sure this is an incorrect assertion.

2. What about everyone else?

OK, this is a fair question. If we focus purely on “customer value” then we may let down internal stakeholders, and create a shitty environment for our colleagues. That is, if we indeed focus purely on customer value.

But that’s not what I meant in my tweet. Let’s revisit it:

"Everything starts w/ understanding our customers & their problems. Economic value for an org is created by focusing on this simple premise."

I am asserting that there is a simple premise on which to focus. What is that premise? Not actually “customer value“, but instead that “everything starts with understanding our customers and their problems“.

Some may see these as the same thing, or the difference subtle. I see them as very different.

The work we do for an organisation, whatever kind of organisation it is, must deliver some kind of value for that organisation, if indeed we wish the organisation to be effective at meeting its goals.

This value can take many forms, and these forms will largely depend on how the business is currently performing and its corresponding strategic focus.

This is always true, whether we have customers or not.

If our organisation is one that makes money by having customers, we must provide something that our customers or potential customers do not already have. This could be a revolutionary product, or simply serving a need more completely than a competitor product does.

Either way, the only path to economic viability and being a sustainable, customer-facing business is to identify the problems customers and potential customers have achieving their goals, and which of these are valuable enough for our business to spend time and money solving.

Or, to put it another way:

There is no point delivering customer value if we are not delivering business value.

Focusing on understanding and solving customer problems is not the same as focusing only on customer value and ignoring everything else. The latter is plain silly. We can’t sustain a business if we are making customers happy at the expense of our employees’ welfare, or our stakeholders’ sanity.

Employee and stakeholder needs must always be considered when determining business value.

and further:

Business value must always be considered when determining customer value.

The concept of meeting the needs of stakeholders, employees and customers is not — or at least does not have to be — at odds with a focus on understanding customer problems.

On the contrary, I see understanding customer problems as the most effective driver for delivering business value, happy customers, employees and stakeholders.

Number 3 in Bob’s list:

3. Narrow Definition of “Customer” and “Value”
Who do you regard as your customer? How do you decide what is of value to them? Do you define customers as (just) those folks that sign the cheques? And do you define value in terms of simple hard cash? If so, what about all those other folks who suffer your goods and services without a voice? And what about their (non-cash) experiences?

Yup. If we have a narrow definition of “customer” and “value“, we can hurt our business. But having a customer-focus does not imply we have a narrow definition of customer and value. We might have that whether we focus on the customer or not.

4. Confusion of Value Disciplines
Michael Treacy and Fred Wiersema describe three generic value disciplines: operational excellence, customer intimacy and product leadership in their book The Discipline of Market Leaders (1997). They go on to make the case that any given business can and must focus on just one out of these three disciplines. Many organisations have yet to realise this.

I’m not sure how being customer-focused is at odds with this. Again, if you cannot discern what business value means for your business at any given time, you are in trouble.

We should only solve customer problems that deliver business value.

If we spend time determining and solving customer problems that do not deliver business value then we will not be as effective as we can be, or we might go out of business. That business value might be derived from, at any given time, operational excellence, customer intimacy or product leadership.

But each of these things only matters a jot if we have customers with problems to solve.

5. Unintended Consequences
In his book “Obliquity”, John Kay makes the case for approaching one's goals obliquely. Rushing headlong at “customer value” can often result in many unintended consequences. A more indirect approach, such as providing value to customers by building an organisation or workforce with the capability to do so “baked-in”, and evolving continuously, can avoid many of these unintended consequences.

“Rushing headlong” into anything is silly, even if that thing is “customer value”. When I talk about being customer-focused, I mean the systemic, “baked-in” and “evolving continuously” kind that Bob speaks of, not the oblique, vacuous way.

6. Choosing the Wrong Kind of Value

Bob argues that there is a “right kind of customer value“, and that it is unlikely that folks will deliver on that.

Well, firstly, yes. If we want to deliver customer value, we ought to mean the “right kind“. The holistic, long term, effective kind. Not the short-sighted, ineffective kind.

Secondly, it is very important, nay critical, that we work on the system such that folks understand the business goals, and how solving particular customer problems pertains to these goals. If we do not have aligned goals across our company, we will be ineffective, regardless of whether they are customer-focused goals.

Such a dysfunction is not caused by being focused on understanding and solving customer problems.

7. Conflating Means With Ends
In his book The Goal, Eliyahu Goldratt asks the fundamental question “Why are you in business? What's your goal?” Having happy customers is a means to a commercial organisation's goal, not an end in itself. Yes, even a necessary means (see: Necessary But Not Sufficient). But not sufficient.

Focusing on “happy customers” is not the same as “understanding customer problems“. Regardless, neither of these things is sufficient for a profitable, sustainable company. But there is no one single thing that is sufficient on its own, so this is another straw man statement.

The fact remains:

An organisation's commercial goal can only ever be reached by solving customer problems that deliver enough business value.

Our revenue, and consequently profit, is generated by our customers. If we have no customers, we have no revenue. No profit. The buck for our success starts and stops with having enough customers forking out enough of their cash in return for us continually solving their problems.

8. Strangles Innovation
Focusing blindly on customer value can drive short-termism in the organisation, because the connection between longer-term investment in e.g. innovation and the customer value of such proposed innovations is often hard to see.

This is an interesting one. The short-termism Bob speaks of is certainly a risk, but I would hardly cite customer-focus as a primary cause of this innovation strangulation.

If anything, what I see more frequently is an obsession among folks in senior management with solving the company's problems (e.g. cost-cutting, how to acquire customers, how to get people to come back) rather than the customer's problems.

I see this reverse tactic as far more damaging than focusing only on solving customer problems.

While, as I've described, a pure focus on customer value can be ineffective, it sure as hell stands a better chance of effectiveness than focusing on something that isn't about the customer.


Beyond #NoEstimates – Why the traditional software contract must die


IT'S TIME FOR A SEQUEL TO #NOESTIMATES

The #NoEstimates debate/movement is in its 3rd year. Many of the principles upon which it is founded were being used by practitioners many years earlier. The Agile Manifesto, along with its statement “Customer collaboration over contract negotiation”, is approaching 13 years old.

I feel it’s time to start addressing the “barriers to entry” of #NoEstimates. I (and others) have banged on enough about the whys and wherefores of not estimating. Let us now start discussing ways of solving the problems that force us to estimate, or at least give us the perception that we must.

I’d love it if the Agile community would join me!

THE DIFFICULT 2nd ALBUM

In order to be able to work this way, there are fundamental impediments that we must remove.

And one of those is the traditional software contract. It may be the most important of the impediments, particularly when we are talking about doing work for external customers.

The way such contracts are negotiated and executed remains largely driven by up-front thinking such as plans, schedules and estimates, and tainted by paranoid contingencies for failure to deliver things to the letter.

Both #NoEstimates and Agile promote continuous value delivery, i.e. to always be building the next most valuable thing for the customer, and delivering this in rapid vertical slices. With such a desirable working agreement in place, the need for a fixed price contract built around an up-front estimate of everything to be delivered is negated.

But in order to be able to work in such a world we need to build trusting, flexible and collaborative working relationships with customers. Some have made attempts to create more “Agile” contracts, but legal constraints coupled with a resistance to changing long-established practices often render these as not hugely different in essence from traditional contracts.

That is, most of the attempts I’ve seen at Agile contracts remain constrained by the legacy of analysis, design, development and testing phases, a focus on delivering agreed scope (output) rather than solving a problem (outcome), and a desire to try and eliminate uncertainty rather than embrace it.

SO, WHAT NOW?

I believe we need something more radical, and I would like to explore this with any interested folk. To my mind, the whole premise behind the traditional software development contract – nailing down the detail of what will be delivered, when and for what price, and covering both parties legally should things go wrong – needs to be addressed.

For customers to gain the benefits of Agile, and for suppliers to be able to deliver with Agile, working agreements and relationships that truly embrace an iterative, incremental and emergent approach to building software need to become the norm.

Working agreements that start from a position of trust rather than distrust. Working agreements that embrace “Here’s what we will build together” over “Here’s what you must deliver or we’ll sue you”. Working agreements that allow parties to use empirical process control to manage risk and uncertainty rather than up-front predictive models that make significant change impossible.

BEYOND CONTRACTS, TOWARD AGILE WORKING AGREEMENTS

How would I structure working agreements with customers that are compatible with Agile and #NoEstimates principles?

I think we need to move toward a model that embraces the following:

  • All work is custom work
  • Setting the expected level of quality for the customer's budget or desired timeframe, using an approach akin to a designer's portfolio
  • Focus on “build the right thing” rather than “build the thing we think is right”
  • Replacing fixed price with incremental pricing (including “cut the cord at any time”)
  • Replacing fixed scope with required outcomes (which we are happy to change)
  • Reducing risk by using true iterative development cycles (Build – Measure – Learn) and empirical process control to evaluate progress toward outcomes
  • The only thing nailed down up-front is an agreement that the supplier will continually deliver the most valuable small increment of product to the customer
  • Don’t just welcome change – embrace changing requirements, via a change of mind or emergent learning, for the customer’s competitive advantage

DON’T BE CONSTRAINED BY THE CONTRACT

I have been a part of teams delivering hugely successful outcomes in an iterative fashion, despite the apparent constraint of a traditional contract.
How?
By building a trusting, collaborative relationship with the customer and continuously delivering, adapting and delighting.
In such a situation, the contract becomes irrelevant, because the actual day-to-day working arrangement and relationship has trumped its significance.

WHAT NEXT?

I’d like to explore this topic by starting with the following assertions:

  • Traditional software contracts render attempts to deliver in an Agile way moot
  • Empiricism and iteration are extremely beneficial to delivering successful projects, but are absent in both traditional and (previous attempts at) “Agile” contracts
  • Agile working agreements must allow parties to benefit from the risk management and other advantages that empiricism and iteration provide over scope/price-driven contracts
  • Existing/previous ideas for Agile contracts don’t quite fit the bill

Who would like to join the discussion?


What if I can’t work with #NoEstimates?


Recently I've been helping out a friend with some questions about how he and his team can deliver value more quickly in his organisation, which works with a traditional SDLC.

He describes the process as “pretty much solid waterfall” and finds it hard to see how Agile methodologies would allow them to deliver benefit early. Most of their projects are, as he describes it, like an “iceberg”, with huge amounts of data integration and enrichment invisible below the surface before the internal customer finally sees the “tip” with a report or a dashboard/OLAP cube.

The requirements are fixed, and the customer is not interested in seeing partially finished solutions. They want it ALL, or it is not deemed valuable to them.

My friend envisages delivering multiple sprints with the customer getting decidedly impatient because they still can’t see anything useful. And then eventually in one of the later sprints the customer finally gets their dashboard. This doesn’t really solve their problem.

He is also interested in the #NoEstimates debate as the team seems to spend a lot of time estimating effort and, other than the estimates being used to raise a purchase order, the process doesn’t really make any sense to him. He wants to consider alternatives that can be used to give some scale to a project and enable the customer to raise the purchase order to get the work authorised. He wonders if this might be difficult to implement as the organisation is so cost (money) focused, but he at least wants to be able to float a few ideas.

He says there was a request from upper management for the delivery team to be more "agile" but, given the lack of desire for iterative/incremental delivery from anyone outside the team, this request seems to imply the old classic "we want our team to be agile" meaning "we want our team to deliver more quickly but don't want to bother ourselves with sprint reviews every 2 weeks to give feedback on what they're building".

I explained to my friend that these kinds of cultural issues are not easily or quickly resolved. It sounds clichéd, but Agile transitions are a journey and require buy-in across the organisation that things need to change, and a willingness to be open to doing so. I told him that he will find himself in a frustrating bind if he tries to take a bottom-up approach to agility. Those who are failing to see the value in an early and iterative approach to the delivery of features are the ones that need to understand the benefits before any lasting improvements to effectiveness can be made.

This feeds into estimation too. If he can't work iteratively, delivering in small chunks, he can't really empirically measure progress toward the goal, leaving him with no choice but to be predictive and deterministic. If he can't measure progress it's difficult to judge whether he's "on track", and he will find himself continually estimating remaining work in order to update the project's status.

So what can my friend do in this situation?

One thing he can do is to try and continuously deliver vertical slices regardless of whether the customer is interested or not. Put the system into a production-like demo environment. If the customer doesn’t want to look at it, no worries. It will at least allow the development team to iterate in their thinking over the design of the solution, and to measure their progress against a T-shirt sized backlog of work.

Something along the lines of:

  • Here’s the backlog of work that needs to be done (try to make this goal-based rather than solution-based; solution can be agreed Just-In-Time)
  • If absolutely required to approve the purchase order before the work starts, give an up-front estimate range based on how long other similar projects took and John Maynard Keynes's premise that "it's better to be roughly right than precisely wrong"
  • Split “stories” into S, M, L and XL by comparing each to one another (relative sizing) rather than trying to determine how long something will take, i.e. “A seems bigger than B, C even bigger, D is about the same as C“, etc.
  • Build one story from each size bucket (while delivering vertical slices if possible) to start building empirical data about how long stories within each bucket might take; from this a predicted end date can be extrapolated (see the sketch after this list)
  • Put every new story that emerges into one of the size buckets by comparing it to a story that has not yet been built
    • The “not yet been built” bit is important so that estimators don’t get influenced by how long a particular story took
    • There will be natural variation around how long things will take, so to reap the benefits of relative sizing and empirical process control it is important not to judge the size of a task based on the result of a statistical outlier
    • I wrote a blog post about this recently
  • The more stories that are delivered the better the data will become – it will take several stories before a settled delivery rate and thinner cone of uncertainty is achieved
  • Be transparent: update the data and the stakeholders regularly, and warn early if the data is showing a high risk of not delivering within the allocated budget so that the appropriate steps can be taken and there are no surprises late in the game
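
Here is a rough sketch of that extrapolation step. The bucket averages, backlog counts and the notion of parallel work streams are all hypothetical, made up purely to show the arithmetic; the point is that the numbers come from measured stories rather than up-front guesses:

```python
# Measured durations (working days) of the stories already built from each
# size bucket -- hypothetical numbers purely for illustration
measured_days = {
    "S": [1, 2],
    "M": [3, 4, 3],
    "L": [6],
    "XL": [11],
}

# How many stories of each size remain in the backlog (hypothetical)
remaining_backlog = {"S": 8, "M": 12, "L": 5, "XL": 2}

def average(values):
    return sum(values) / len(values)

# Extrapolate the remaining effort from the empirical averages so far
remaining_days = sum(
    average(measured_days[size]) * count
    for size, count in remaining_backlog.items()
)

parallel_streams = 2  # rough assumption about how much work runs concurrently
print(f"Roughly {remaining_days / parallel_streams:.0f} working days remaining")

# Re-run this as more stories get delivered: the averages settle down and the
# cone of uncertainty narrows, as described in the list above.
```

As the list notes, the numbers only become trustworthy after several stories from each bucket have actually been delivered.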

I’ve written previously about the “barriers to entry” for working with #NoEstimates, and my friend is certainly encountering some of these barriers. However, there are always steps we can take to improve the way we do things, and sometimes these improvements influence others in a positive way.

It’s easy to just accept the perception that “they don’t want to change“, but mindful action in baby steps toward better can make a huge difference.


How much will it cost and when will I get it?


Here is an excerpt from an email I received from a nice chap recently:

Finally I took the time to read all your posts on #NoEstimates and I must say it was really good :-) 

One thing that I didn't understand was how you handle the start of project with no time or budget constraints? Say we have a customer who wants to build a flight booking system and are now wondering how much it costs and when it will be done? They have no desired delivery date but asap is good and they need to know how much money it will cost so they can give a budget proposal to their boss. Here it seems like we have no real constraints to use so how would you handle this situation with #NoEstimates?

Also another situation that the customer maybe want this and say they have a budget for $10,000 then we "know" it's too little to get anything useful. Is it "ok" to say that "estimate" based on our experience?

I thought it would be interesting for my other #NoEstimates readers to turn my answers into a short blog post :)

To the 1st question: I would be surprised if the customer genuinely doesn’t have a budget constraint. If we have built flight booking systems before, I would tell them how much those ones cost, to give them options. I would be wanting to know from the customer how much they want to spend.

If we haven’t built flight booking systems before, I would be wondering why the customer has chosen us over someone who builds flight booking systems! If they really want to work with us then I would be explaining the way we work with our customers, i.e. an iterative, collaborative approach with flexible pricing, but I would insist on knowing their maximum budget.

If we want to be in the business of building flight booking systems then I might take a hit on the first customer, i.e. if after working with the customer closely for, say, 2 weeks we realise that we can’t possibly build a flight booking system for their budget then they can look elsewhere, at no cost to them. It will be valuable learning for us because now we’ll know how much we *can’t* build a flight booking system for, given our capacity.

Obviously there’s no one right answer to this, and there are so many “it depends” variables, but this is one approach. We need to learn our capabilities by doing stuff and making empirical judgements, not guess how much we can do and by when.

To the second question, I’ve probably already answered it. If we’ve never built a flight booking system for anywhere near $10k then we “know” we can’t do it, or at least can’t guarantee it, given our current capacity. Call this an estimate if you like. If we’ve never built one at all then my answer above applies.

I always find it interesting that in software we tend to take the reverse approach to budgeting from the one we take with our purchasing decisions in life. Instead of deciding our budget based on how much we have available, or are willing to spend, we decide it using the supplier's estimate of how much the software we want will cost.

Or at least we give the illusion that’s how we are deciding it. In reality, I believe we typically know how much we want to spend. We are simply looking to the cheapest bidder in order to try and save money or squeeze as much as we can for our real budget.

My view is that we will come to much better outcomes for our customers, and ourselves, if we actually work iteratively and collaboratively with the customers’ real budgetary and/or time constraints.


Context is no longer King


One of my frustrations as a software practitioner is our seemingly programmed human bias toward keeping the status quo.

I guess it wouldn’t be so bad if the status quo was actually something approaching effective, inspiring or at least motivating. But unfortunately the reality for many (most) people making their living in the crazy (in a bad way) world of software development remains one of boredom, dysfunction, wasting time on unimportant things, going along with stupid decisions (or lack of them), stress, hatred of Mondays, being put in our place by our “superiors”, et cetera, et cetera.

23,858 tweets and counting. Worthwhile or a colossal waste of time?

I tweeted this yesterday. Often I wonder why I stay in an industry that suffers from the afflictions listed above. My work mood swings from utter dejection to tremendous elation. Like the software we create, my mental state is subject to wild fluctuations.

Here’s the thing. The reason I do this; the reason I stay in the industry, tweet opinions, tips and debate; the reason I write these blog posts; the reason I give a significant portion of my time freely, mostly at my own cost, to talk at meetup groups, conferences and company brown-bag lunches; is…

Because I want to play a small part in creating a better world of work for those involved in software development.

Particularly developers, who I believe have been treated for years like some kind of underclass in organisations of all sizes and industries. Crammed like sardines into some dark, dingy corner of the building, given to-the-letter specifications of some crappy software system that will keep them busy for a few months and then will never be used by a soul. Forced to commit to an estimate of how long this will all take (minus whatever needs to be trimmed off because the estimate doesn’t fit into the already agreed timelines). Constantly being micro-managed and asked “why is this taking so long?” and “why is this so hard?”.

Yes, I’m angry about this. And I want things to change. So I’m trying to do that in my own little way.

I want us to start treating smart, motivated people with the respect they deserve – right from the moment we hire them. Why on earth companies put engineers through 3 or 4 rounds of interviews and then fail to actually trust them once they get the job is beyond me. Managers continue to spoon feed solutions to their subordinates because they “can’t be trusted” to solve business problems quickly and efficiently enough.

This is why I am challenging the status quo in our industry. Sometimes what I write or say is found provocative by some. One dimensional. Context-less. “It depends on the context”, people say. “There’s no one right way. No advice is universal.”

I get disappointed (sometimes annoyed) when people who have never met me and know nothing about my professional reputation and abilities confuse what I tweet as “professional advice”, and then start questioning my integrity and ability as a consultant. It is hypocritical and way off the mark.

The reason why people write blog posts with provocative titles, and tweet with controversial hashtags, is because it is interesting. It invites conversation and debate. It stirs things up a bit. God knows (and so should the rest of us) that this industry is in dire need of some stirring up.

I was questioned by a couple of people about a tweet I wrote recently:

In fact my tip is NEVER do a MoSCoW prioritisation. The implied fixing of scope makes change very difficult. Order things instead. #Agile

A tweet, I might add, that was retweeted dozens of times, so obviously resonated with many.

I was told that my opinion was “unjustified”. That I shouldn’t make “categorical statements”. That “never is a long time”. That some poor soul may take my advice (assuming a tweet constitutes professional advice?!) and destroy a project because I am uninformed about their “context”.

I am constantly told the same kind of things about the #NoEstimates debate. That I can’t tell people not to estimate because I don’t know their context. Their boss might need estimates. Sometimes we need them, sometimes we don’t. Et cetera, et cetera.

With all due respect to these people, they are completely missing the point. For a start, I think it's ridiculous to suggest that people would read a tweet from little old me and that would somehow create a chain of events that would destroy a project. Even if I were someone with anywhere near the influence and expertise of the great Ron Jeffries or Kent Beck, I don't think I would wield that kind of power over people.

I do not use Twitter to dish out free professional advice. It is a forum for opinion, conversation and debate. Well written tweets resonate with people in some way, such that they retweet them, favourite them or, preferably, start conversations about them.

Perhaps reading a tweet like the one above will encourage someone to think a bit more about a practice that they have always done without question. To look into alternative ways of organising and prioritising work. To completely reject what I’m saying. Good tweets create a reaction, and whether this reaction is an angry disagreement or a nodding of the head, it has done its job.

Twitter is not to be taken too seriously, but the conversations it can create are serious and, I believe, are helping us as an industry to increasingly question long established practices. This can help us improve the way we work. The way we think. It is vitally important for us to have our world view challenged on a regular basis. This is how we learn and evolve.

I don’t just want to read tweets saying that “it depends on context”. Stuff that confirms my world view. Stuff that I agree with all the time. If every piece of advice or opinion “depends on context” then we might as well just give up trying to improve things.

Depending on your context, you might want to consider alternatives to MoSCoW prioritisation. However, if it works for you then fine, just keep on doing it.

Politically correct, perhaps, but it’s not exactly going to give me a reaction. I’ll probably not even notice that tweet on my timeline. “Be happy”. Ooh, can’t say that, it depends on context.

Moving away from social media for a second and into the real world of professional coaching and consulting: as Agile coaches, I believe we can do much, much more for our clients. If someone tells me that I'm being unprofessional for suggesting better alternatives than MoSCoW then we are on different planes, I'm afraid. I know that there are certain principles and practices that have proved effective for me time and time again.

I’m not alone on this. I believe some statements are universally applicable, regardless of context. Questioning the way we do things doesn’t depend on context. Respecting each other and striving to work more collaboratively doesn’t depend on context. Adopting good engineering practices will help you to deliver incrementally and iteratively at a constant pace over time – this is universally applicable also.

Of course context is important – to me that’s so obvious that I can’t believe people keep saying it. We know that. It goes without saying.

But it’s not the point. The point is that many, many companies are still struggling to grasp the principles and practices that we in the Agile and Lean community know can increase effectiveness. Our clients deserve better advice from us than “well, if that’s working for you then keep on doing it”. We all know that something “working” is a perception and may actually be destroying the morale of the employees, or even putting the business as a whole at risk.

It is not “professional” for us to keep playing the context card. We need to be bold in our decisions and advice giving. Take risks. Challenge the status quo. Encourage innovation, not just of products but of process also. Be a true change agent, not just blend into the environment.

If you like what I tweet and blog, that’s wonderful so please do keep following! If you don’t like it, please unfollow. Twitter is wonderful because it is the ultimate pull system. If we don’t like what we see we can block and unfollow. We can filter out content that doesn’t interest us. It’s brilliant. And I shall continue to use it to challenge, provoke and generate conversation and debate. I cannot begin to measure how much I have learned and evolved my thinking thanks to conversations on, or starting on, Twitter. I’m pretty sure others will say the same.

And I will continue to help clients, in their context, get better whilst trying to create happy and humane workplaces. I want to live in a world where people enjoy going to work. It’s time away from our family and friends, and we spend most of our time there, so for God’s sake if we’re not enjoying it then what are we doing?

I don’t get it right all the time. Probably not even most of the time. But I do this because I care. I will continue to risk getting lambasted by people and losing the respect of gurus and experts. Like the rest of us, I don’t know it all – far from it. But I do not learn by being uncontroversial and not pushing the boundaries of what I believe or how I think things should work.

Thanks for listening :)

Note: I will write a follow-up post about  MoSCoW prioritisation itself. Aside from the fact that it perpetuates the myth of “requirements” (if something is not a “must-have” then how can it be a requirement?), I’m not including my further ideas on the topic here because it’s not really what this post is about.

Many have already written about the damage it can do and some better alternatives to set you on the road to delivering a successful project (read building a successful product). For starters, Joakim Holm wrote a great post about it the other day. And there’s lots more to investigate using our friend Google!


False Gods, Silver Bullets and the Myth of Innovation


I am privileged to publish this guest post from the brilliant and lovely Michael Rembach (@mrembach).


In October I stumbled across a blog article about product development using Scrum and the hindering effect that Scrum can have on the innovation process, especially if the organisation is fully 'agile immersed'. The blog was written by Brian de Haaff (@bdehaaff), co-founder of Aha!, a product management software company. While the article was well written and brought up many salient points about innovation, I disagree with the overall premise that Scrum may have innovation-limiting behaviours. You can read the original article here: zite.to/17HnE4S.

The first thing I'd like to point out is that I agree with the points about innovation in the article. Innovation practices, such as having a shared vision, engendering trust in your organisation and having a strategic direction, are all vital ingredients for success, even more so in technology companies. The thing about innovation is that it is cultural, and no framework/methodology/philosophy in the world is going to make your company innovative without the desire (or need) to be. Having a myopic view of your product because you're 'Agile' misses the point of the delivery focus and discounts the innovation-enabling practices that Agile encourages.

Scrum, and other Agile methodologies, are essentially delivery focussed, which is why there is a requirement for product owners to focus strongly on the Sprint cycle and the short-term delivery timeline that it brings. However, this does not and should not excuse the product owner from checking that what is being delivered is aligned to the strategic goals for the product or, in fact, the organisation. The two aren't mutually exclusive, and a product owner is responsible for communicating that vision to the project team so that they are aware of the purpose of the product. Constantly checking in with the vision by all the team should ensure that what is being built doesn't deviate from the intention of the product's purpose. The product owner is simply not performing her role properly if she suffers from a myopic concern with delivery cycles without also ensuring that the product is meeting its intended strategic objectives.

Rather than inhibiting innovation, I posit that Agile has a number of practices that encourage innovative behaviour:

    1. MVP – the primary reason for creating a minimum viable product is to determine that what you're trying to produce is viable, but it also serves a couple of other important purposes. The first is prototyping: you have the opportunity to experiment with your solution, try something small and novel, and see if it works. The second is that it gives you the opportunity to solicit feedback from your clients, the product ecosystem and anywhere else. This is a primary source of knowledge for decision-making.
    2. Fast-failure – Agile methodologies allow you to fail quickly and learn some valuable lessons before it costs you too much.  Innovation is all about finding out new ways to do things and failing fast and safely is one of the best ways to forge new paths.
    3. Continuous learning through retrospectives – a learning organisation is an innovative organisation and retrospectives provide an excellent opportunity to improve not only what we are producing (again, you can look at the strategic alignment at the end of every sprint or release cycle), but also how we work together.
    4. Embracing change – if making changes to your product is painful then your ability to be innovative will be too.  Agile methodologies accept that change is inevitable from the get go and therefore provide less resistance to innovating during the development of a product.

Innovation is difficult at the best of times. As Clayton Christensen illustrates in his famous The Innovator's Dilemma, history is filled with the burnt-out shells of successful companies that died as a result of not being able to change. To succeed, innovation needs to be part of the organisation's culture. The premise that progressive, change-embracing frameworks like Scrum inhibit innovation does not recognise these aforementioned practices. Agile won't make you innovative, but it sure can help encourage it.


#NoEstimates puzzle experiment in Melbourne


Craig Brown and I ran a variant of Chris Chapman’s now famous jigsaw puzzle experiment at a meetup of the Melbourne Agile and Scrum User Group on Wednesday evening.

Everyone had fun and was intensely engaged throughout. There were loads of interesting dynamics emerging from the teams, which was perhaps surprising given the contrived nature of the experiment.

Set up

  • We set up three same-sized (10-12 people) teams, each with:
    • an identical jigsaw puzzle (way too big to be completed)
    • a Product Owner (to provide the vision and direction) and
    • a Scrum Master (to help the team achieve the PO’s vision)
  • We opted for 3 * 15-minute iterations, with 3 minutes for a Retro in between
  • Each team was told to use a different method - one was a Scrum team, one was a “mob team” and one was a “no rules” team. Here’s what that meant:
Scrum team
  • Must have Planning (including estimation), Review and Retro in each iteration
    • We provided Planning Poker cards for the estimation but the team was free to choose whatever estimation method they liked
  • Must only work on “stories” agreed in Planning – new stories can’t be introduced mid-iteration
  • Stories are only “done” when PO accepts them (in Review or before)
“Mob” team
  • No formal ceremonies required
  • Team all works on one story at a time until “done” (single-piece flow approach)
  • No estimation
  • Retro encouraged but not “enforced”
“No Rules” team
  • Can work like the Scrum team, the Mob team, any combination of the two, or any other way they like

Outcome

  • Scrum team delivered most stories (3; the other teams delivered 2 each)
  • Whole group was asked to vote on which they thought was the best outcome
    • “No rules” team won (emphatically)
    • Scrum team lost

Interesting Observations

Here are some empirical observations of the evening’s events and outcomes, along with my interpretation of what they indicate in an Agile/#NoEstimates context (==> in bold italics underneath the observation).

Scrum team
  • Delivered most in terms of stories but least in terms of value, both for their Product Owner and as voted for by the wider group
    ==> Output ≠ Value
    ==> Comparing teams in a useful way would require consistent measures of both effort and value velocity across teams
     
  • Spent far too large a proportion of time (particularly the first iteration) in planning, and needed to be alerted to this fact
    ==> Consistent timeboxing is important to ensure there is time to do all that is required, and for less variability of outcomes 
  • A member of the team openly admitted that he inflated an estimate because he did not agree with the value of the story that the PO wanted to do next
    ==> Estimates are often gamed, and for various reasons
“No rules” team
  • Implicitly chose not to estimate, but instead to maximise the time they had for building
  • Eventually delighted their Product Owner (and wider group), but during the game the PO felt like:
    • The approach to delivery was too ad-hoc, even chaotic, especially at the beginning
      ==> Teams must collaborate in order to be co-ordinated, improve and deliver the right outcomes 
    • Stories were too large (epic) so delivery all happened near the end rather than incrementally
      ==> Smaller stories have lower variability and can help with early and frequent delivery, creating better predictability for PO/customer and lessening the need for estimates
      ==> Larger, higher variability stories rely on estimates of time, or at least relative size, to provide the illusion of predictability
  • Started with no process at all but this was deemed unproductive (with such a big team), so they split into smaller teams with focused goals
    ==> Smaller teams are more effective because it is easier to collaborate, change direction, gain consensus, etc.
General
  • Scrum and Mob team both delivered purely incrementally (concentrating on edges) rather than iteratively (identifying a recognisable area of interest and building upon it), although stories were clearly too big
    ==> An iterative approach is crucial for risk management, predictability and delivering the right thing (value), i.e. without such an approach you have no choice but to estimate
  • Product Owners all felt like they weren’t being listened to – this had particularly bad consequences for the Scrum and Mob teams, perhaps due to their purely incremental approach
    ==> Important for all team voices to be heard, especially given the PO is driving what should be built in order to deliver on the vision

Stand Up and Shut Up



As with many simple and now commonplace “Agile practices”, debates still rage on about the Daily Standup (Scrum) meeting, a meeting which has somehow become a ritualistic signal that a team is “Agile” but is often an equally conspicuous signal of the exact opposite.

I’ve been in many organisations where God forbid anyone asks whether we should get rid of the meeting, or even change it, despite the fact that no one is getting any value out of it every single goddamn day*.

*Except some managers. A daily status update meeting? Terrific! The Daily Standup is an opportunity to micro-manage people every single day without having to approach their desks!

I digress. The point is, people still question the value of the Daily Standup and, if it is indeed valuable, how we might make it more effective.

I share the view of the Scrum Guide on this – at least in what the spirit of an effective Daily Standup meeting is, if not necessarily the prescribed format.

An effective Daily Standup meeting, for me, is one in which the team inspects and adapts both product and process.

That is to say it is an alignment meeting. A daily planning meeting. An opportunity to change our path if there is a better one. We do not have to (and should not) wait for the Sprint Review (product) and Retrospective (process) for this. Continuous improvement is about daily inspection and adaptation.

Here are some of the more effective questions that can be used in a Daily Standup meeting:

  • How will we work together today to move toward our goal?
  • What should we focus on today?
  • What should we not do that we originally thought we would do?
  • How will we remove this impediment right now?
  • Given we are a little behind, how might we simplify this product increment?

It is about purposeful intent for the day. It is certainly not intended as a status meeting. If managers and others outside of the core team are not getting the information they require from conversations or the team wall then it will surely pay dividends to improve visibility and transparency in the way people interact while doing their work rather than have a daily status update meeting.

In fact, I would go as far as saying that the ritual of an unchanging Daily Standup meeting is usually a smell of poor collaboration in and between teams on the actual work to be done. Some companies mistake this meeting as a way of actually getting people to collaborate. It’s almost as if they think that the benefits of collaboration, as Agile promotes, can be gleaned simply by having this meeting.

Unfortunately it is not that simple. Standing (or sitting) people together does not make them collaborate.

Collaboration is an organic thing and only comes if the “way the work works” is designed to encourage it.

I sometimes see or hear the argument that, “because we’re Agile we should make the meeting fit with the way we currently work“, and that doing this will intrinsically make it more valuable. So, the argument continues, it’s OK if it becomes a status update meeting because that’s what the environment demands.

The issue with this approach is that the environment in which you currently operate is likely one of managers wanting status updates. One of traditional ways of doing things.

But in order to be effective with an Agile approach we have to do things differently. To think differently.

Agile does not mean “make compromises”. It is about mindful changes in the way we work to move toward improved effectiveness. If something feels a bit different and uncomfortable then it may well be a sign you are on the right track.

As coaches, we ought to let the team decide how they can get most value from a Daily Standup meeting. Then, rather than focusing all our attention on how to improve the meeting, we should instead be helping the managers create an environment in which actual collaboration (working together effectively toward common goals) is encouraged and starts to feel natural.

Where excellence, rather than dogma, can prevail.

P.S. Standing up is not mandatory! If the meeting is timeboxed to 15 minutes then it will be quick regardless of whether you’re sitting down, standing up or doing the cha-cha.


Revisiting the Three Amigos


Next week I am speaking at a SIGiST (Specialist Group in Software Testing) event in Melbourne. Having to prepare my presentation has encouraged (OK, forced) me over the past couple of weeks to re-immerse myself in the world of quality, testing and BDD (Behaviour Driven Development).

We have learned a lot about the value of conversations when deciding what to build into our software. We have learned the value of automating as much of our testing as possible, both to shorten the feedback loop between things breaking and us finding out, and to give stakeholders and the team confidence that we can rapidly add new features without breaking existing ones. We have learned the value of taking a test driven approach based on real user behaviour rather than code behaviour, which enforces good design practices and ensures the software does what it is supposed to do. Despite all of this, I still constantly see and hear of teams struggling with their approach to quality.

Some are struggling to find time to improve due to a combination of legacy systems with brittle or no automated test coverage and looming deadlines for new products or features. Some are struggling to create a short enough feedback loop for testing software increments as they are built, so that problems can be addressed before code is deployed, or before developers have moved on to the next feature or even the one after that.

There is no denying that it is crucial to get the technical practices right from the start. Enough has been written about this. BDD at all layers, continuous integration and automated acceptance and regression tests.

However, when you find yourself in a situation where you are adopting a legacy system or process – i.e. you or your predecessors haven’t got your technical practices right from the start – then your only viable option will usually be to improve things gradually. Have developers learn how to write automated acceptance tests, and then implement them. Chuck out flaky record-and-playback UI tests and replace them with robust unit, integration and browser tests using best-of-breed tools. Embed testers in the development team. Gradually start to do all the things that ideally would have been done from the start.

It seems like a desperate situation, but all is not lost. Far from it. I feel that a common mistake teams and businesses make is to place too much focus too early on the necessary technical improvements.

In my experience, the most important thing to improve is the conversations between the business people, customers and the development team.

One effective technique for doing this is the Three Amigos approach, where the customer / Product Owner / BA has a chat with a developer and a tester from the team to agree on the acceptance criteria for a new feature or story before it is undertaken. From this conversation the team can decide exactly what tests are needed, and where they should be implemented, in order to prove that the completed functionality will do what it is supposed to do.

A mature Agile team would now write the necessary tests in their tool of choice (e.g. JBehave for Java), the developers would write just enough code for the tests to pass, then refactor. When all the acceptance tests pass, the story is considered “done” from a functional perspective.
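
To make this concrete, here is a minimal sketch of what an executable acceptance test might look like once the Three Amigos have agreed the criteria. The article mentions tools such as JBehave for Java teams; this sketch simply uses plain Python with pytest, and the feature, function name and discount rule are invented for illustration rather than taken from the article.

```python
# A minimal sketch: an acceptance criterion agreed in a Three Amigos conversation,
# expressed as an executable test (run with `pytest`). All names and the business
# rule below are hypothetical examples.

def price_order(total: float, returning_customer: bool) -> float:
    """Stand-in for the production code the developers would actually write."""
    if returning_customer and total > 100:
        return round(total * 0.9, 2)
    return total


def test_returning_customer_gets_10_percent_discount_over_100():
    # Agreed criterion: a returning customer ordering over $100 pays 10% less.
    assert price_order(150.00, returning_customer=True) == 135.00


def test_new_customer_pays_full_price():
    # Agreed criterion: a first-time customer pays the full amount.
    assert price_order(150.00, returning_customer=False) == 150.00
```

When all such tests pass, the story can be considered functionally “done”, which is exactly the hand-off point described above.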

But what if the tester and/or developers have little or no experience with an automated testing approach? I have worked with teams in this situation and it cannot be fixed right away (or even at all if there is no willingness from the business to invest in training and slack time to address the problem).

Let’s say the tester is traditional in his approach, and would typically create test cases which he will use to manually test the code when it comes to him from the developer. What tends to happen here is that the developer writes the code for the story, then hands it off to the tester, who then hands it back because the code doesn’t do what the tester expects it to do. This to-ing and fro-ing can happen once, twice, three times. It’s time consuming and frustrating for everyone, and makes it very difficult to complete product increments in a timely fashion.

However, if the tester and the developer have a conversation before the developer starts coding (with the PO/BA in the Three Amigos meeting, or just-in-time in a story kick-off), the tester can take the developer through his test cases (derived from the acceptance criteria) so that the developer understands everything that the tester expects to work when he is handed the code.

Over time in these conversations the developer will start making suggestions, so the test cases become more collaborative and thus effective. He will also want to make sure the story does not bounce back to him from the tester when he’s coded it, so he may do some more manual testing of the functionality or even write some (more) unit tests before handing the story to the tester. His confidence in his code is likely to have improved, and the bounce-backs become the exception rather than the rule.

The key to building in quality is first and foremost in the conversations, because they create improvements in the way we work together, whatever situation we are in technically. The good technical practices will emerge from the better conversations. Agile is largely about focusing on technical excellence but, as the first line of the Manifesto tells us, more important still are the interactions between the people doing the work. Continuous improvement allows us to start where we are and take one step at a time.

These up front and ongoing conversations, such as the Three Amigos, can have a massive impact on your effectiveness both individually and as a team, and on the quality and maintainability of your product, increasing your agility to adapt and innovate. Adding such conversations to your process is a great sign of continuous improvement and embracing the first and most important line of the Agile Manifesto.


Scrum Basics Part 2 – Monitoring Progress Toward a Goal



"Various projective practices upon trending have been used to forecast progress, like burndowns, burn-ups, or cumulative flows. These have proven useful. However, these do not replace the importance of empiricism. In complex environments, what will happen is unknown. Only what has happened may be used for forward-looking decision-making."

-- Scrum Guide

Agile/Scrum teams are often asked to estimate how long a release might take. Or an entire project. Sometimes this is done under the guise of relative size estimates like T-shirt sizes – or, perhaps more commonly, story points – coupled with an estimated (or guessed) velocity. This is sometimes done even with new teams that have no velocity history.

Scrum, as defined in the Scrum Guide, places a large emphasis on the use of empiricism. Aside from the quote above, the following nuggets can also be found:

"Scrum is founded on empirical process control theory, or empiricism. Empiricism asserts that knowledge comes from experience and making decisions based on what is known. Scrum employs an iterative, incremental approach to optimize predictability and control risk."
"[The Scrum Master helps the Product Owner with] Understanding product planning in an empirical environment"

My interpretation of Scrum is that, while the Development Team are expected to estimate each PBI (Product Backlog Item), they are neither asked nor expected to determine delivery dates, or how much work will be completed by a delivery date.

At Sprint Review:

"The Product Owner discusses the Product Backlog as it stands. He or she projects likely completion dates based on progress to date (if needed)"

So, the Product Owner uses the estimates on the PBIs combined with the empirical knowledge gained from what has actually been done to determine completion dates of a set of PBIs (e.g. a release). At no point does the Product Owner ask the team what will get done (beyond the current Sprint).

This use of empiricism is often neglected by Scrum teams. Teams are asked to project release dates, sometimes several months out, without any velocity history. This is not making projections based on what has actually happened. It is not empirical, and does not work in a complex, ever changing environment.

"A Product Backlog is never complete. The earliest development of it only lays out the initially known and best-understood requirements. The Product Backlog evolves as the product and the environment in which it will be used evolves. The Product Backlog is dynamic; it constantly changes to identify what the product needs to be appropriate, competitive, and useful. As long as a product exists, its Product Backlog also exists."

If you are using estimates, it is important that you use probabilistic estimates based on real, empirical data. Scrum suggests this. Practitioners suggest this also. Don’t ask the team to forecast any further out than the current Sprint. As the Product Owner, use real data to make forecasts and decisions. Asking the team to make longer term projections is not respecting the data showing what is actually getting done.
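
To make the “use real data to make forecasts” point concrete, here is a minimal sketch of one probabilistic approach a Product Owner might use: repeatedly re-sampling the team’s actual Sprint history to see a range of plausible completion times. The velocity figures and backlog size are invented for illustration; this is one possible technique, not a prescription from the Scrum Guide.

```python
import random

# A minimal sketch of a probabilistic, empirical projection. The velocity history
# and remaining backlog below are invented; in practice, use the team's real data.
velocity_history = [18, 22, 15, 20, 19, 24]   # points (or stories) done per past Sprint
remaining_work = 120                          # points (or stories) left in the release

def sprints_to_finish() -> int:
    """One simulated future: sample past Sprints until the backlog is exhausted."""
    done, sprints = 0, 0
    while done < remaining_work:
        done += random.choice(velocity_history)
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% likely within {p50} Sprints, 85% likely within {p85} Sprints")
```

Crucially, nothing here asks the team to forecast beyond the current Sprint; the projection only uses what has actually been delivered.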

Monitor progress, don’t try and predict it.


Systems thinking and the naughty dogs




Envisage this situation. I go to bed and forget to let the dogs out. When I wake in the morning, and go into the living room, the dogs have crapped on the rug.

Who is to blame?

My first reaction is likely to be to blame the dogs. “Sammy! Jake! You dirty dogs!”

My wife will likely blame me (once she finds out I didn’t let Sammy and Jake out).

What’s next?

Well, I don’t want that happening again. How can I make sure I don’t forget to let the dogs out again? Another foul up (forgive the pun) will be difficult to take.

Perhaps I could put a sign up on the wall in the landing, on the way to my bedroom: “DON’T FORGET TO LET THE DOGS OUT!” Won’t be foolproof, but it might help. My wife might decide she can’t trust me to let the dogs out every evening, so she will start reminding me every night, or coming into the living room to check.

Of course she might forget to do this one night. If that happens to coincide with a night on which I also forget, the same outcome may occur.

Now who’s to blame?

This kind of scenario might sound oddly familiar if you work in an IT department or work for a software development company. An innocent mistake (like releasing an obscure but potentially damaging bug), leading to blame of the individual, leading to more control of releases (processes and procedures) and a “don’t fuck up” culture.

Of course we don’t want the dogs to crap on the rug. Blaming me for this incident, imposing more control (the sign on the wall) and reducing trust in me (my wife checking I’ve put the dogs out) *may* solve the problem. But in reality there is still a chance that it will happen again. People make mistakes. People repeat mistakes.

Problem dissolution

By employing a systems thinking approach to this scenario, we can look to *dissolve* the problem. That is, the problem of “the dogs might crap on the rug during the night” is actually removed rather than its probability reduced.

If I install a doggy door, the dogs can get in and out whenever they need to, so they will never be stuck inside when they need to crap. My wife will never have to worry about me messing up again, and blaming me for my stupidity. We won’t need signs up on the wall, serving as a constant reminder to myself and my family that I messed up.

Sometimes buggy software will be released, no matter how high the quality of our code or the stringency of our release procedures. Because people miss things. People make mistakes. People repeat mistakes.

If we make releasing really quick and easy, we can update our tests and release bug fixes before there is any time for blame and increased control to become necessary.

Do you look to merely solve problems in your organisation, or to dissolve them?


Scrum Basics Part 1 – Activities, not Roles



This is the first in a series of small posts aimed at new Scrum teams, organisations newly adopting Scrum and people who have been doing Scrum for a while but are struggling to get the results they crave.

Activities

This post is based on a response I gave to a question in a LinkedIn forum:

“The BA role is an integral and implicit part of Product Owner Role in Scrum. What is your take on this?”

This is a very common question among those new to Scrum and Agile. It’s an interesting one and a classic example of why, in my opinion, companies the world over are failing to do well with Scrum.

To begin to answer it, I will let the Scrum Guide do the talking:

  • The Scrum Team consists of a Product Owner, the Development Team, and a Scrum Master.
  • Scrum Teams are self-organizing and cross-functional.
  • The Product Owner is the sole person responsible for managing the Product Backlog.
  • The Product Owner is one person, not a committee.
  • Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
  • Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
  • Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
  • Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.

Departmental silos are entrenched in the way companies typically do things. They are part of the system. The culture. As a result, the urge to maintain departmental silos is strong.

I would suggest this is a key reason why Scrum implementations might (and do) fail.

Straight off the bat, certain elements of the Scrum Guide are typically ignored or deliberately rejected. These elements may or may not turn out to be key in your organisation, but the fact is they are in there for very good reason. It is a mistake to assume from the outset that your context requires removal of these elements.

Scrum is not asking companies to remove departmental silos, but it is asking that these silos are ignored such that they do not exist within the Scrum team. In the Scrum team, everyone building the product increment is part of the Development team. There are only 2 other people in the team – the Product Owner and the Scrum Master. That’s it. That’s the Scrum team model. Period.

There is absolutely no prescription as to who should be in the Development Team, only that the team has all of the skills and capabilities required within it to build a product increment, and that the team jointly owns all of the work, activities and decisions. In order for effective teamwork to flourish, Scrum says that roles should be left at the door.

That does not mean that our individual expertise and experience is left at the door along with our job titles. On the contrary, the best self-organising teams decide how best to leverage the expertise within the team.


If the question asked in the LinkedIn discussion was actually:

“Are the typical activities undertaken as a BA part of the Product Owner’s responsibilities in Scrum?”

then my answer would be that these, and any other activities involved in building and managing a product’s development lifecycle end-to-end, are shared between the Scrum Master, Product Owner and Development Team. This is made very clear in the Scrum Guide.

To that end, there is no “BA role” in Scrum, much like there is no “tester”, “QA” or “UX designer” role. Roles are part of traditional siloed thinking; Scrum (and Agile) deliberately focus on cross-functional teams instead. Roles are a function of the particular company, not of the activities that need to be done as part of product development.

To get the best results from Scrum it is a good idea to stop thinking about what roles you need in the team, and instead think about what activities are required to build your product. A good self-organising Scrum team will share these activities regardless of whether they have a specialist, designated BA or not.

Personally I like to encourage “collaborative analysis”, where all of the “what” and “why” for every decision, every story, is talked about by the whole Scrum team. Then the “how” is handled by the Development Team.

The popular model of having BAs “writing stories” and handing them off to the developers in the team is highly ineffective, bears none of the hallmarks of a collaborative, self-organising team, and is about as far from both Scrum and Agile as you can get.


To build products effectively with Scrum, it’s a good idea to map out all of the activities that are required to build the product. Forget current roles and responsibilities for now. Once you’ve listed the activities, gather a team that can execute those activities in their entirety. If your company has BAs and you need one of them for your Scrum team then by all means have them in the team.

But please remember to ask yourself this key question:

“Is the BA part of the Development Team or are they the Product Owner?”

In Scrum, they can’t be both. And they can’t be neither.


Stop using time in relative size estimates!




The SAFe approach to normalised story points makes a classic mistake that everyone seems to make with story points. It is not “relative sizing” to compare stories to a reference story that has been estimated in time (in this case “about a day”).

As soon as you introduce time as a basis for your reference story, and use, say, story points on a Fibonacci sequence, all of the comparisons you make are based on time, i.e. a 2-point story equates to 2 days, 5 points to 5 days, etc.

Even if you are not doing this consciously you will do it unconsciously. So all you have done is estimated “how long” the stories will take to deliver. This is not relative sizing!

The whole point of using relative sizing instead of time-based estimation is that humans are better at comparing the size of things than at making absolute judgements of size, e.g. we are good at correctly saying that building A is bigger than building B, but not so good at correctly saying that building A is about 200 metres high and building B is 150 metres.

Unfortunately when it comes to tasks that we perform, our natural tendency is to use absolute terms because the “size” of a task essentially equates in our brains to “how long”. The fact that story points are numbers doesn’t help with this. Where story points completely lose their value is when we start deliberately equating a point value with a length of time.

True relative sizing of a backlog is to pick a low-value story (one that you are unlikely to implement for some time) and not estimate it at all. What you now do is compare other stories to that story, i.e. I think story C will take longer than story B, story D will take longer than story C, story E is about the same size as story C, etc. At no point do we actually predict how long something will take. We are simply saying which stories will take longer than others, by our estimation.

When a new story emerges you then do the same thing – decide whether it will take longer than the reference story, less time or about the same (and because you have not yet implemented the reference story, you cannot be influenced by the actual time it took).

You can now measure progress against the total backlog as you deliver the stories.
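
As a rough illustration of measuring that progress without ever converting sizes back into time, here is a small sketch that simply tracks how much of the relatively sized backlog has been delivered. The stories and sizes are invented for the example.

```python
# A minimal sketch of tracking progress against a relatively sized backlog.
# Sizes are purely ordinal buckets relative to the un-estimated reference story;
# at no point are they mapped to days or hours.
backlog_sizes = {
    "story A": "S", "story B": "M", "story C": "M",
    "story D": "L", "story E": "S", "story F": "M",
}
done = {"story A", "story C", "story E"}

print(f"{len(done)} of {len(backlog_sizes)} stories delivered "
      f"({len(done) / len(backlog_sizes):.0%} of the backlog by count)")

# Optionally, see how the remaining work skews by size bucket:
remaining = [size for story, size in backlog_sizes.items() if story not in done]
print("Remaining by size:", {s: remaining.count(s) for s in ("S", "M", "L")})
```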

One thing I do agree with in the SAFe approach is that you should not do any re-calibration/estimation. As soon as you start re-estimating stories based on how long things are actually taking you are being influenced by time. This can not only throw off the relative calibration of the backlog but also ignores the inherent variability of software increments; i.e. there will be outliers within size groups that take significantly longer (or shorter) than the modal average.

P.S. If you’ve read my other #NoEstimates stuff on this blog you will know I do not advocate the use of story point estimations at all, especially due to the way they are typically misused and abused. However, there may be some potential value in doing relative size estimates (e.g. T-shirt sizes), if done right, for one or more teams working from the same initial product backlog in order to give some indication of the overall viability of the initiative and to provoke discussion within the team(s) about the value and possible approaches for undertaking individual pieces of work, aka “what shall we do next”.


#NoEstimates – An Alternative Means of Risk Management


Introduction

A continuing theme of counter-arguments posed at the #NoEstimates ideas is that development cost estimates are required in order both to manage risk and to derive value.

This blog post intends to give further insights into how risk can be effectively managed, and how we might determine the value of our initiatives, without the need for making up front and deterministic development cost estimates.

Risk

“Risk is the probability of an unfavorable impact to the project” - Glen Alleman (@galleman).

From the risk angle, the argument goes along the lines that the built-in “risk management” in Agile approaches is not aligned with conventional definitions of risk management in software development.

I’ll go along with this. Agile (and #NoEstimates) does not take the conventional approach to software risk management, which sees project success as “on time, on budget” and thus requires an up front estimate of total scope, cost and duration.

Agile/#NoEstimates offers an alternative way to manage risk on projects (and, no, I’m not talking about Agile Estimation, the spin-off brand of traditional estimation promoted by Mike Cohn). I’ll explain more about this later.

Value

The argument regarding value is that estimated cost is required to determine value, given that value is related both to the timing of when things are released and to how much it costs to develop the things that will (potentially) generate value. That the worth of something to someone can only be evaluated if we know how much that thing costs.

Again I agree to an extent, but there are two key sticking points for me here. One is that we only know how much software development costs after the fact. People say “we need to estimate because we need to know the cost”. Estimating, however accurately we think it is being done, does not allow us to know the cost.

Before the event we can only estimate what will be done and how much it will cost. In addition, the further out we are estimating cost and value, the riskier (and potentially costlier) our estimates become.

By estimating, rather than fixing, cost we have no greater insight into the value, which is also estimated. Essentially we are increasing our risk by estimating both cost and value rather than just value, which is what #NoEstimates promotes. More on this later.

The other sticking point is that value is often highly subjective and personal. I know how valuable a particular brand new Ferrari is, partly because I know how much it costs. That said, if you gave me two different Ferraris to test drive and didn’t tell me how much they cost, I would tell you which one I prefer. Which one was more valuable to me. This has nothing to do with the cost. The one I prefer might be significantly cheaper, but its value to me is higher because it’s more fun to drive and I prefer the look of it.

The same applies with software. There is so much to consider when we try and measure value. Aside from the empirical measure of monetary returns, we have to consider the needs of the customers, the stakeholders and our corporate strategy (to name but a few), not to mention the fact that all of these things change over time.

Agile is about delivering value early, not trying to predict how to maximise value over a given timeframe or a product’s lifecycle. It is the early delivery of value that allows us to tune and adjust our course for maximum longer term benefit.

This is why it is an alternative, and completely viable, approach and should be considered as such.

Agile Risk Management

The key aspects of Agile that help us manage risk effectively are:

  • Iteration
  • Continuous selection of highest value work (i.e. making decisions)
  • Fixed, cross-functional teams with 100% focus on current project
  • Early and frequent delivery of end-to-end working software increments and
  • Empirical measures of progress toward goals

With Waterfall projects, the need for conventional risk management is clear. We have no way of measuring progress from day one in terms of working software because we are carrying out requirements analysis, specification and design phases before we write a line of code. People are often working on multiple projects and so we must allocate a percentage of their time to the project at hand.

The only way to measure percentage progress toward project completion is to have a breakdown of the SDLC phases and tasks within each, estimated in days/weeks, and tick them off as we go along. If we don’t complete all the necessary tasks for a given phase in the estimated timeframes, we are “off track” and we need to take corrective action.

With a phased delivery approach, the only way to manage risk is to have an estimate of the total scope, cost and duration of the project.

But if we are working in an Agile way, we are not taking a phased approach to project delivery. We are delivering full end-to-end working solutions in an iterative manner, early and frequently. We are working in fixed, cross-functional teams so teams costs are known and consistent.

This approach allows us to manage risk and measure progress toward project completion (meeting of stakeholder goals within a given budget) from the get-go.

Progress

If we are truly iterating by delivering vertical slices through the system, after our first iteration we will be able to measure progress toward the project goals. We will have delivered a working, albeit perhaps low quality, solution to the problem. We may even have actually met the project goals.

Either way, we can inspect what we have done and decide if we are on the right track. If we are, we can iterate over our solution, improving quality in the desired areas and incrementing new features. If we are not, or we see a better way of solving the problem, we can throw away what we’ve done and start again. We may even decide to scale up our efforts and add more teams, if there is emergent estimated value in doing so.

Given in Agile we are delivering end-to-end working software from the get-go, we are not burdened with the problems we faced in our Waterfall projects for measuring progress. We have the ability to empirically measure progress because we are delivering “done” functionality, as opposed to hitting pre-determined “milestones” which are not based on what we have actually delivered in terms of a working product.

In Waterfall, so long as we are hitting our milestones then the project status is “green”. For software product development projects, this means that we are deferring our risk management until we actually start writing code. We don’t know that the scope of what we want to build is achievable, and we can’t reduce scope until we actually realise it’s too much (well into the development phase, deep into the project).

In Agile we can manage scope right from the beginning, because we are continually focusing on building the most valuable thin, vertical slices which represent iterations over an end-to-end solution to the problem. We can empirically measure how much we got done and how much is left to do. We can regularly take proactive decisions to cut scope or switch to an alternative approach to improve our chances of delivering a successful outcome. What should we do next for maximum value and maximum impact in meeting our goals? What should we not do? What is the simplest approach for our next iteration?

This is risk management.

These kinds of conversations enable us to focus on doing the simplest thing, for maximum impact, given the budget that we have available. To not wait 9 months to deliver a solution but to deliver a solution in 1 month, then make it better.

Most “Agile” projects are not managing risk

If we decide up front in a project inception on the requirements (product backlog) and solution we will be sticking to, and estimate it will take, say, 9 months, all we will do is incrementally build the solution, usually in horizontal slices, components or modules.

After each “iteration” we will not have a holistic view of what we’re building.

This is a very common approach by “Agile” teams. In this situation we are deferring the management of risk until we actually have a system that can meet (some of) the needs of the project stakeholders, usually late in the game when the deadline is getting close.

This is not risk management. If we work in this way we cannot work with #NoEstimates.

How do we estimate value without estimating development cost?

OK, so assuming we have the capability and will to deliver vertical slices through a solution early and rapidly, and we have a fixed cross-functional team, 100% committed to the project at hand, we can focus on the potential value of the ideas we want to build while controlling cost using small “drips”.

When we use ROI to decide whether a project is worth pursuing, or which of 2 or more potentially valuable projects we should choose given limited people and resources, we base the “investment” measure on the estimated cost, of which the development costs are part, and the “return” is the value we expect to generate, measured on the same scale as the investment (usually money).

There is a flaw with this approach.

6 months, 2 years, it’s all the same!

Let’s say we estimate a project will take 6 months of development time, costing $500k. We expect that when the product is complete it will generate $2m in revenue. The timing of when that revenue gets generated is key. Will we get anything at all before the product is built in its entirety? Will there be a few months of marketing required after all the features are done before we will start seeing the cash rolling in?

The implication of the timing of value generation is that the actual ROI of what we’re building in a 6-month project might still be negative after 6 months of development time, even if we get everything done that we originally wanted done (and estimated).

Now compare that to, say, a project with an estimated duration of 2 years. After 6 months, the ROI of the two projects will be identical. Our net loss in both cases is $500k, so our ROI is -100%; we have spent half a million bucks with nothing (yet) to show for it.
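
A quick worked check of those numbers, treating the team cost as fixed and using the same figures as the example above:

```python
def roi(revenue: float, cost: float) -> float:
    """Return on investment expressed as a fraction of the cost."""
    return (revenue - cost) / cost

# After 6 months, both projects have spent $500k and released nothing, so:
print(roi(revenue=0, cost=500_000))   # -1.0, i.e. -100% for the 6-month project
print(roi(revenue=0, cost=500_000))   # -1.0, i.e. -100% for the 2-year project
```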

So, given the erratic, inconsistent and numerous ways we can measure value in software, is the traditional ROI approach an ideal decision making model in this domain?

Agile is about early delivery of value, not trying to predict maximum value

The upshot of this is that the less risky approach to generating a positive “ROI” is to work on options that will potentially generate value early, i.e. with relatively small and simple effort. Put simply, if we prioritise initiatives by virtue of which ones we expect to generate value early rather than how much value they will generate over the product’s lifecycle then we do not need to batch these initiatives up into “projects” and estimate how long the project will take.

This can easily be reverse engineered. If our starting point is a “project”, with a list of requirements, the best thing we can do to manage risk (keep our decisions within the bounds of the near, more certain, future) and ensure we deliver value early is to pick the most valuable requirement/problem to solve and come up with a simple, creative approach to fulfilling that requirement in a very short timeframe.

What’s next? One at a time…

The team can go away for, say, 1 month, after which time we holistically assess where we’re at in terms of fulfilling that requirement. What have we learned? Is this requirement still the most valuable one to work on (ignoring sunk costs)? Are we better off ditching what we’ve done and investing in attacking another requirement?

Our measure of what is valuable must reset after each iteration. It’s irrelevant how much we’ve already spent (sunk cost fallacy).

We need to constantly concern ourselves with what is the most valuable thing to do next. This is Agile. This is #NoEstimates.

And this is risk management. Yes, it’s an approach that requires a different way of thinking about how we choose what work to invest in, how much to invest and the decisions we make along the way. But it is risk management nonetheless.

But we can’t do this when $200m is at stake!

The #NoEstimates debate has hit a point where the main remaining arguments are around its application in big money projects. Most of the original dissenters – who have now spent time reading more about the ideas put forward by myself and the other #NoEstimates crew – are now in agreement with us that, at least for small scale projects, we can get away with not doing “micro-estimates”, and indeed it may be preferable to work this way.

But when it comes to “macro-estimates” – i.e. how much of the customer’s money are we going to spend – it is argued that a #NoEstimates approach is not viable. That when “you are spending someone else’s money” you need a plan (estimated schedule) to ensure you deliver what is required for the money, with some deterministic level of confidence.

The irony of this argument is that when the big number guys come out swinging with their big numbers, these numbers are estimates! When we call a project that we haven’t yet completed, or even started, a “$200m project”, what we are actually saying is “our customer has a $200m budget and we have to deliver what they want for their money”. In other words, the decision has been made to go ahead, and the budget is $200m. There is no go/no-go decision to be made – it’s already been decided that the project is going ahead, and they want a result for $200m.

For me, with such large sums and timeframes at play, there is all the more reason to manage risk by drip funding small amounts and iterating over a solution in the way I’ve described. Scaling up where required. Tuning and adjusting.

The alternative is to manage risk by using probabilistic estimation techniques based on past projects such as Monte Carlo simulations to derive a total estimated cost with a confidence interval, and then constantly adjust these calculations as the project progresses. But I maintain that the Agile way, where we start from a budget or fixed deadline and then actively build and manage scope along the way, is preferable because it harnesses the creativity of designing and building great software and allows us to welcome and embrace change every step of the way.

Create the future rather than predict it

Instead of trying to nail down a plan and predict outcomes, we are forging our own future based on current market conditions at any given time, and the way we feel about what we’ve built so far. We are controlling our costs by working with fixed teams in short timeboxes, and we are constantly assessing the value of what we’re building.

If we work this way we do not need to estimate things up front. Empirical data is being generated as we go along, and we can look at the market with fresh eyes after each iteration. We can see what we’re getting done and what we’re not. We can change our mind on whether we care that we didn’t get the things done that we wanted to get done. We can see which of our assumptions were true and which were false. We can steer our ship in whichever direction we need to avoid the iceberg ahead, while remaining focused on the destination.

This is at the heart of #NoEstimates from my point of view. It is possible to work this way. It is not easy to get to a position where you are able to, but if you can get to that place it is, as Ron Jeffries describes it, “the best known way to work”.


The Ethical Man Month


Systems Thinking tells us that we are products of the system in which we operate. That we will perform based upon the ways we are being measured.

Personally, I am acutely aware when the way I am being measured is also a target. I know that such a measure is not an effective way of helping me contribute to reaching the organisation’s goals.

But the thing I struggle to understand is that if we are gaming the system, and know we are doing so, at what point do our ethics kick in? What is our tipping point?

I once worked with a team that was battling against technical debt. Regression bugs were appearing with increasing frequency due to a lack of automated integration test coverage with legacy systems. My team wanted to do the right thing and fix the bugs that they found, despite the fact that it was not them who created the bugs, but were concerned that they were falling behind with their own work.

They assigned no blame to the unfortunate soul who checked in the code that caused the regression. In fact, they didn’t even find out who the culprit was until after time had already been spent determining the cause of the bug. There was much complexity in the interactions between components and a gaping lack of integration tests across them. The team just wanted to fix the problem, add some appropriate tests to prevent the problem from happening again, and move on.

The problem for me was that this was impacting on our project schedule. The team were supposed to be working on stories for my project but instead were taking time working on bugs created by other teams. I was being measured on the delivery of the agreed scope in the agreed timeframe, not on our software delivery effectiveness across the portfolio. Surely it was in my best interest to ask the team not to work on other people’s bugs? My delivery schedule was being jeopardised. I would be held accountable for this. I would be asked tough questions. Why didn’t I deliver everything I said I would?

But here’s the thing. Despite how I am measured, I am passionate about creating good outcomes for the stakeholders, the customer and the company, not my specific project. I do not see the work to be done as a set of easily definable story cards. In this and other similar situations I wanted my team, and other teams, to spend time reducing technical debt across the board, improving code quality, collaborating with each other to find ways of making everyone’s lives easier, etc.

I can choose to let the system define me. To be a product of the system. Or I can choose to question things. To think holistically about how we can improve.

The system will reject this. But at least I can go to sleep at night knowing that I am doing what I believe is right.

How much do your ethics influence the decisions you make or don’t make in the workplace?


Chris Chapman “interviews” me about #NoEstimates :)

1.       You’ve mentioned on Twitter that in your opinion, #NoEstimates = Agile + Real Options. For the curious newbie, what does this mean?

The approach I talk about is very much underpinned in Agile principles. In fact it’s what I believe Agile was intended to be at its core (although I’ve had some disagreement from the likes of Ron Jeffries and Alistair Cockburn on this point).

To summarise #NoEstimates from my point of view:

Constraints breed creativity

  • Use real constraints to drive decisions, e.g. “this is how much we want to spend” or “we need something by June in time for Wimbledon”
  • Arbitrary constraints (such as deadlines based on zero to low knowledge “estimates”) cause dysfunctional and ineffective behaviour
  • Create mini-constraints (i.e. drip funded iterations) to promote a creative approach to what we are going to build to address the problem at hand

Build awesome teams

  • Create fixed, capable teams so we know how much our time costs
  • Scale up team capacity if enough positive value has emerged (by adding teams, not people to teams)
  • Empower our teams to be bold and free in making solution choices, with focus on “building the right thing” and “delighting customers and stakeholders”

Keep our options open

  • Cover multiple, potentially valuable options with small experiments rather than committing to one option per team for long periods
  • Reassess options frequently to ensure the initiative is still valuable (ignoring sunk cost) and is more valuable than other options to which we could divert our team capacity
  • Anything we haven’t yet built (e.g. our product backlog) is only an option – we shouldn’t assume we’ll build it and shouldn’t worry how “big” it is unless we actually want to do it now, or very soon

Put the “iterate” back into “iterations”!

  • Truly iterate over the solution (holistic determination of where to take the product next) rather than just incrementing pre-determined backlog items
  • Deliver early and frequently, with very small (even daily) feedback loops – this makes us predictable

Create collaborative working agreements

  • Create flexible, collaborative working agreements with our customers which allow us to truly embrace change and deliver to customers’ present needs rather than their needs when we started
  • Allow customer to cut the cord early if they are happy with what they have (or not happy with progress)
  • Start from a position of trust rather than paranoia (which traditional contracts are based on)

Favour empiricism over guesswork

  • Keep work items small and simple, and limit WIP to create a predictable system
  • Slice features into simple, unambiguous stories using a heuristic rather than estimation rituals
  • Price work per feature if appropriate, using empirical average cost of features to guide price rather than a deterministic estimate of individual features
  • Use cycle time and throughput to make near-term prioritisation calls, not to determine release dates (there are no big releases in this approach anyway)

Shift focus away from estimation

  • Create a culture of honesty by removing negative estimation culture (i.e. get rid of story points and the notion of estimates as promises or deadlines)
  • Make work and project success about creative delivery of value (i.e. “what shall we do next?”) rather than “on time, on budget”, schedules, deadlines, etc.

2.       Describe what you mean by a “slicing heuristic”

Essentially it’s a policy for how we break up our work. For example, “A user story must have only one acceptance test”. Rather than breaking features into stories and then estimating the stories, we can use the heuristic, measure our cycle times and then inspect and adapt the heuristic if required.

I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less.
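
As an illustration, here is a minimal sketch of how a team might inspect that heuristic empirically rather than estimating. The cycle times are invented; in practice they would come straight off the team’s board.

```python
from statistics import mean

# Invented cycle times (working days per story) gathered after adopting a
# "one acceptance test per story" slicing heuristic.
cycle_times_days = [2, 4, 1, 3, 2, 5, 2, 3]

average = mean(cycle_times_days)
print(f"Average story cycle time: {average:.1f} days")

# If the average drifts above the ~3 days the heuristic aims for, adapt the
# heuristic (slice differently) rather than starting to estimate each story.
if average > 3:
    print("Stories are coming out too big - inspect and adapt the heuristic")
```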

3.       How does your approach differ from that of Woody Zuill? Or, are there more similarities than differences?

I can’t speak for Woody but I feel that Woody’s approach is simpler than mine. He believes that if you follow the Agile Manifesto properly then the need for estimates dissipates.

I agree with him in principle but see systemic issues, particularly in analytic/mechanistic organisations, that I feel need to be addressed in order for #NoEstimates to strike a chord with more traditional managers and executives. At its core though, #NoEstimates is about exploring various approaches to delivering software without the use of estimates, and the commonality between our approaches seems to be the continuous delivery of small increments of high quality, valuable software.

4.       Do you think any team can work without estimates? What’s the minimum “barrier to entry” ?

Any team (with the right coaching and knowledge) can embrace the slicing of work, limiting of WIP and measurement of throughput/cycle times, even if they are being asked to estimate with story points or time. #NoEstimates is not about refusing to estimate.

If you’re talking more about the overall approach from the portfolio level down, I’d say there is a minimum barrier to entry:

  • Fixed team (cost)
  • Allowance of variable (emerging) requirements/scope
  • Small batches of stories/features
  • Slicing heuristic to create roughly consistent actual/measured work unit size (“a few days”)
  • Incremental & iterative delivery capability of shippable code
  • Mini constraints such as weekly demo/review with customer (small, early and frequent releases)

This looks very much like any typical “Agile” team to me :)

5.       What advantages does working without estimates provide your team over, say, a team that is using longer cadences, eg. Scrum?

My approach is entirely compatible with Scrum. In some ways I think that it’s what Scrum was intended to be (or at least, in my opinion, should be).

If a Scrum team is working in 2-week Sprints, truly iterating, delivering working software every Sprint, inspecting and adapting the product etc. then this looks very much like the approach I am advocating.

6.       A common criticism of #NoEstimates is that when you slice off functionality to deliver (the “heuristic” approach) you are, in effect, estimating. Is this a correct interpretation? Why/why not?

Well arguably if you create a heuristic for creating “small” work then I can understand why it is interpreted that way. However, I don’t believe it is estimating. The point is to create simple and unambiguous story cards. The “smallness” is a by-product of doing this.

If we don’t get the smallness we’re looking for (after measuring the result) then we inspect and adapt the heuristic. At no point do we actually look at a card and say “I estimate that this is small”. We trust in the heuristic.

7.       You’ve been a really vocal advocate for working without estimates, standing up to some tough questions from established agile practitioners. Why do you think this topic has so many people so roused?

Because the way software projects are typically governed is largely driven by estimates, so it touches almost everyone in the industry. It’s an established way of doing things so it is deemed controversial.

8.       What would your advice be to a team considering working without estimates? What should their first steps be?

Don’t simply stop estimating. Try and get better at creating simple, unambiguous slices of functionality. Measure your throughput. Compare story count data with your story point data. Discover for yourselves if a #NoEstimates approach is right for you and a good fit for your organisational culture.
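
For the “compare story count data with your story point data” step, here is a minimal sketch of what that comparison might look like; the per-Sprint figures are invented for illustration.

```python
# Invented per-Sprint history: how many stories were finished and how many points
# they added up to. In practice, pull this from your own tracking tool.
sprints = [
    {"stories": 9,  "points": 21},
    {"stories": 11, "points": 24},
    {"stories": 8,  "points": 19},
    {"stories": 10, "points": 23},
]

for i, s in enumerate(sprints, start=1):
    print(f"Sprint {i}: {s['stories']} stories, {s['points']} points, "
          f"{s['points'] / s['stories']:.1f} points per story")

# If points per story stays roughly constant, simply counting stories forecasts
# as well as summing points - a sign the estimation ritual is adding little.
```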


People Need Estimates


People need estimates. So they can predict how much software will cost and how long it will take.

People need umbrellas. So they don’t get wet when it rains.

Although, some people don’t need umbrellas. They have awesome waterproof jackets with hoods. They have solved the problem of “how do I stop getting wet?” with a different solution to the humble umbrella.

People need to know what time the trains are running so they can plan their trip to work. Some people do not need to know this because they take the London Underground, where trains typically arrive every 2 or 3 minutes.

What’s your point, Neil, you might be asking? My point is that when people are debating against the #NoEstimates movement, they always seem to gravitate toward the same two arguments:

  • People need estimates, so we should provide them
  • We cannot simply start building software without having an idea how long it will take or how much it will cost

To the first point, people only need estimates if we determine that the only solution to the problem of wanting to know “how long and how much” is to make a guess. People who have found other solutions to that problem do not need estimates.

I now wonder: just because the people who still need estimates have not discovered any alternative solutions, does that mean they need estimates, or that they think they need them? Or simply prefer to use them over other solutions?

People do not need umbrellas. They need a way to stay dry on a rainy day.

To the second point, I categorically want to put an end to the myth that #NoEstimates equates to #NoPrice or #NoDate. If you read my previous blog posts on the subject or read my tweets you will hopefully understand that my point is the absolute opposite. We DO need a price and/or a date. The only difference is how we arrive at those things.

With estimation, you guess one or both of them (and, in doing so, have a stab at scope too – otherwise what are you estimating?)

With #NoEstimates you set the price and/or date, either through experience and choice (e.g. setting a price/date for the kind of work you do regularly, with a fixed team and cost) or through a real budgetary or time constraint (e.g. “I’ve only got $100k, what can we build for that?” or “The Australian Open starts in 3 months so the Aus Open app needs to be ready to go live the day before”).

You then incrementally and iteratively deliver, setting mini-constraints within the wider constraint that breed creativity, innovation and predictability of delivery, and have a flexible working and payment arrangement with the customer.

People need certainty about what they will get and how much they have to spend. Unfortunately there is no certainty in software design and development. However, I would argue that #NoEstimates gives greater certainty than estimating does.

When estimating a date or cost you are creating uncertainty around those things, because you are guessing. You are saying “we’ll deliver somewhere between here and here”. However, if your delivery date and/or cost is set by a real constraint, as advocated by the #NoEstimates approach, you have created certainty around those things.

Yes, you may decide to shift the date/cost as you get closer to the initial figures, or once the customer decides they are happy with what they have. You have been delivering frequently and learning about what you are building. You have been creating data, such as throughput and cycle times, and using heuristics and slicing to reduce work increment size, so informed decisions can be made along the way. But you will only go beyond those initial figures if the emergent value of what has been built, and other data you have gathered, suggests that you should. Scope remains uncertain whether you estimate or not.

People still need 500-page business requirement documents. People still need separate test teams and development teams. But there are alternative solutions which may render these needs unnecessary. The alternatives to estimation are real, both at the project and the portfolio level, and are being used by many people across the globe in varying sized businesses.

All I ask is that we consider those alternatives and do not stop searching due to need.


#NoEstimates Part 3 – The Palm Off



It is no secret to my Twitter followers, and perhaps beyond the Twitter-sphere, that I am on a crusade of sorts to get people considering other ways besides estimating when it comes to costing software development projects and tasks. Such a view remains controversial, even among Agile practitioners. People argue that there is no alternative; customers want estimates, so we must provide. Stakeholders need to know when things will get done. Estimation is seemingly one of the few remaining immutable practices hanging over from the Waterfall era.

One of the common criticisms of my view is that it is unduly dismissive. When asked by our boss or a customer for an estimate, we can’t simply palm them off and say “I don’t estimate! Talk to the hand, sir!”

Of course this is true. But I should point out that I actually see nothing wrong with being asked for an estimate of how long something will take. What I object to is being asked to carry out (or ask my team to carry out) estimation rituals whose results will then be used for making important business decisions.

We cannot palm people off, but what we can do is offer alternative, empirical approaches to traditional and “Agile” forms of estimating, explain exactly how we will provide the required information and why such approaches offer advantages over guessing “how long” or “how big”.

First off, I would suggest that there are many problems with the “how long/big” approach, the biggest of which is that such an estimate does not take into account the:

  • Inherent unpredictability of building software
  • Current work in progress (i.e. the team/dev may not be able to start the work “now”, or even for a few days, weeks or longer)
  • Capacity to do the work (i.e. the team/dev may make the estimate based on assumptions about team size which turn out to be false, or on a colleague being available who ends up not being), nor
  • Any upcoming changes in priorities (i.e. something may jump above the piece of work in priority).

From a task point of view, what is estimated as a “10 minute job” may end up taking a day or longer due to one or more of the above. I’m sure you have seen this situation many times over. From a project point of view, this situation is magnified and can be hugely costly, even catastrophically so. 3 month projects become 6 months. 1 year projects become 3 years.

In a situation where there are small tasks flowing through from the customer to the development team that are unpredictable in their timing (e.g. BAU work queues, feature development, etc.), a far better, probabilistic approach to get some semblance of predictability is to do the following:

  • Measure actual lead times of every piece of work and plot them in a Lead Time Distribution graph
  • Measure throughput (you can start by simply counting the number of cards in the “done” column at the end of every week)
  • Use a fixed WIP limit on cards in progress (start, if you like, with the natural limit of team size)
  • You can now use Little’s Law to calculate average lead time for a card at position n in the queue, i.e. (WIP + n) / throughput:
    • e.g. Number of cards done in 1 week = 20, therefore throughput = 4 cards/day
    • Team size = 2, therefore WIP = 2
    • Lead time = (2+1)/4 = 0.75 days (i.e. on average it will take three quarters of a day for a card at the top of the queue to be delivered)

With the same formula you can predict where a card 2nd, 3rd or xth in the queue will get done, which is very helpful for guiding your prioritisation:

e.g. Using the same example above, a card 2nd in the queue will likely be done in 4/4 = 1 day, while a card 6th in the queue will likely be done in 8/4 = 2 days

Bear in mind the only way this formula can provide useful numbers is by having a WIP limit that is fixed (as far as possible). There will of course be variability in how long each card takes, but the law of large numbers will even this out to an acceptable average and it’s certainly far more scientific than asking people to estimate each card.
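
To make the arithmetic above concrete, here is a minimal sketch in Python of the same calculation. The numbers are the ones from the example; the function and variable names are mine.

```python
# Minimal sketch of the Little's Law forecast described above. It assumes a
# stable, fixed WIP limit and a throughput measured from actual "done" counts.

def average_lead_time(queue_position, wip, throughput_per_day):
    """Average days until a card at the given queue position is delivered.

    Formula from the post: (WIP + n) / throughput, where n is the card's
    position in the queue (1 = top of the queue).
    """
    return (wip + queue_position) / throughput_per_day

# Worked example from the post: 20 cards done in a 5-day week -> 4 cards/day,
# team of 2 -> WIP limit of 2.
throughput = 20 / 5   # cards per day
wip = 2

for n in (1, 2, 6):
    print(f"Card {n} in the queue: ~{average_lead_time(n, wip, throughput):.2f} days")

# Card 1 in the queue: ~0.75 days
# Card 2 in the queue: ~1.00 days
# Card 6 in the queue: ~2.00 days
```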

Note that if you use Scrum, and thus the team breaks down features into small tasks just-in-time at the beginning of every Sprint, you can use the same principles as above to determine when a new feature might be delivered (Scrum has a WIP limit over the Sprint length of the number of tasks in the Sprint Backlog, throughput is the number of “done” stories/tasks divided by the Sprint length, etc.).
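
As a purely illustrative sketch of that Scrum variant (the Sprint length, task counts and feature size below are invented numbers, not a recommendation):

```python
# Hypothetical numbers: WIP is taken as the number of tasks in the current Sprint
# Backlog, and throughput as "done" tasks per day measured from the previous Sprint.

sprint_length_days = 10                  # a 2-week Sprint
done_tasks_last_sprint = 30              # count taken from the board
throughput = done_tasks_last_sprint / sprint_length_days   # 3 tasks/day

sprint_backlog_tasks = 25                # current WIP over the Sprint
new_feature_tasks = 5                    # a new feature just split into tasks

# Same (WIP + n) / throughput idea: everything already in progress plus the new feature.
days_until_feature_done = (sprint_backlog_tasks + new_feature_tasks) / throughput
print(f"New feature likely delivered in ~{days_until_feature_done:.0f} working days")  # ~10
```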

Over time you can achieve a higher level of confidence with the predictions as you start to identify and split out different work types, determine probability of delivery times using your Lead Time Distribution graph, etc.
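
For example, once you have a list of measured lead times, percentile-based confidence levels fall straight out of the data – a minimal sketch, with hypothetical measurements:

```python
# Hypothetical lead times (in days) recorded for finished cards. The percentiles
# give probability-of-delivery statements without estimating any individual card.
import statistics

lead_times_days = [1, 2, 2, 3, 3, 3, 4, 5, 6, 9]

cuts = statistics.quantiles(lead_times_days, n=100, method="inclusive")
p50, p85, p95 = cuts[49], cuts[84], cuts[94]

print(f"50% of cards finish within ~{p50:.1f} days")
print(f"85% of cards finish within ~{p85:.1f} days")
print(f"95% of cards finish within ~{p95:.1f} days")
```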

What about “how long will this project take?” !! Warning !! You can scale this approach up to the portfolio level. But… do bear in mind that building an entire software product rarely has a finite end point or a repeatable result because it is not possible (nor desirable) to define all of the scope required to deliver a delightful, valuable outcome. Use such predictions with extreme caution. There is no substitute in software product development for creating certainty around costs and delivery times via fixed agile teams delivering working software early and often, short feedback loops with the customer, etc.

So, next time you’re asked “how long” or “how big” about a software project or task, don’t palm off your boss or your customer with simply “I don’t estimate!”. Perhaps you might consider answering: “I don’t estimate! But… here is how we can save ourselves the cost of estimation meetings and make empirical predictions going forward to answer these questions with more confidence.”


#NoEstimates Part 2 – Contract Negotiation and the Old Banger


This is the second in a series of blogs about why I believe we should not be estimating software projects. The first post talked about estimating at the team level, whereas here I talk about the contractual level and how to arrive at more Agile, iterative working arrangements.

Agile team, same old contract

Traditional software contracts, particularly with external parties, are based on:

  • Establishment of scope
  • Estimated time to deliver that scope
  • A price derived from that time + associated costs + profit margin

Many, if not most, of today’s software contracts are based on similar premises, even in supposedly “Agile” projects. In order to mitigate the risk of their deliverable running late and bumping up the cost, many customers demand fixed price contracts. Others demand that the supplier contractually fixes the delivery date to ensure meeting some obligation around the date and shy away from time-and-material engagements. Suppliers often like the fixed time approach as well because it creates predictability around cost. Fixed price contracts provide certainty around the project’s ROI, assuming it can be delivered at a low enough cost, and customers like to know how much they are spending.

There is nothing inherently wrong with any of these approaches or the reasons behind doing them. The problem lies in how we arrive at delivery dates and prices. In order for a contractual engagement between a supplier and customer to be worthwhile to the supplier it must deliver a positive return on investment. Usually this means that the money received from the customer for the supply of the product or service must exceed the money spent by the supplier providing it. So how do we balance that equation? Customers want certainty they will get what they want in the agreed timeframe and/or for the agreed price, while suppliers want to make sure they make a profit on the engagement. Seems simple enough. But what is missing from these scenarios? Even if both parties accept the well-understood iron triangle of time/cost, scope and quality, and that at least one of the three must be variable, is this enough on which to base a low risk and mutually valuable contract? I believe the answer is no, and not just because scope needs to be movable.

Quality is variable, not fixed

What?! Sounds controversial but I believe it to be true. In addition to the need for scope being variable, Agile folk also tend to talk about quality being fixed and uncompromising, meaning that time and cost can also be variable to deliver the best possible outcomes. Aside from the fact that leaving the cost and/or completion time of a project open is generally deemed an unacceptable way to conduct business, and likely why many businesses shy away from “Agile” contracts or working arrangements, I actually think it is un-Agile to fix quality. By this I’m not talking about code quality (the debate about what are bugs and acceptable levels of bugs in minimum viable and evolving products is for another blog post, another day). I mean quality in terms of what the customer defines as quality, and for me they are the only ones qualified to do so. IMO quality is an ever-changing variable in a project, just like scope. The difference is that the customer defines quality, either explicitly or implicitly, consciously or unconsciously. Scope, however, is defined by the supplier. Personally I think of quality in the context of products and services as:

“A subjective meeting of a need or requirement to the satisfaction or delight of the customer.”

If it is fair to say that what might delight a particular customer one day might not do so in 6 months’ time, and that what delights that customer right now may horrify another customer right now, I believe it is also fair to posit that quality ought not be fixed. Quality is what we should try to achieve, and it is what customers want, but we cannot fix what it means to achieve it. We will fail if we concentrate on time/cost and/or scope without making sure we are adjusting our delivery behaviour to suit the customer’s perception of quality. When we talk about projects being either “on track” or “off track” we always base it on our own interpretation of whether we are meeting the customer requirements. I believe the only way we can know if we are on or off track is by asking the customer. They are the ones who know what they want. And this will most likely change. And this is fine! Great, in fact! That’s why we’re being Agile, and why they signed an Agile contract, right?

Don’t deliver the requirements, deliver what the customer wants

Delivering all the scope the customer wants may not actually delight them. It may even annoy them. Or cost them big time. They’ve hired you because you’re an awesome web design company with a great track record. They love your previous creative, innovative designs. And now you have done exactly what your customer has told you to do and it looks crap because your customer does not have a flair for web design. They are the customer, you are the supplier. You are the expert in what you do. You should be telling the customer the scope that will meet their requirement, not the other way round. And they should be telling you whether you are meeting their requirements or not. I believe you can never be “on track” in a truly Agile project, at least in a Gantt chart or velocity-based-Agile-release-plan sense, because the entire fabric of what you are building can change at any moment. If the contractual arrangement is done right then change is absolutely fine, to be expected and welcomed.

Agile contracts – the reality

So what really is an Agile contract?

Fixed price contracts are fine. Fixed time contracts are fine. But here are the caveats:

  • Do not fix time based on an estimate of cost because that inherently means you are agreeing to up-front scope detail that will likely bite you on the arse later and restrict the customer’s ability to request changes (and yours to welcome them) for their competitive advantage
  • If the customer does not fully understand and embrace the inherent unpredictable, creative and innovative nature of quality software solutions then work with them at your peril
  • If you don’t want to turn away work so you try and agree scope with the customer because “they insist”, and then base dates and times on estimates, do not pretend this is an Agile contract and make sure all parties understand the implications of this
  • Know your costs by having a fixed team and determine a “final” delivery date, or allow the customer to determine it
  • If the delivery date is acceptable to both supplier and customer then you now have a certain delivery date, no guesswork required; if the customer wants delivery sooner, reduce the price AND the expectation of quality
  • When you purchase something more cheaply outside of software, e.g. a cheap old banger of a car, you can assume you will likely receive a lower level of quality – why is software any different?
  • Negotiate a flexible, iterative, drip-funded contract that allows the customer to retreat early (either because they’re already happy with their product or because they’re not happy with the progress; if it’s the latter learn from their feedback, improve and move on)
  • The aim is to delight the customer and make a profit so do not simply do what they ask you to do; they are buying your expertise and guidance for meeting their need, so don’t take this responsibility lightly and think you’re serving the customer simply by “delivering customer requirements”
  • Deliver early and often (duh!); iterate, don’t just increment, and make this part of the working agreement
  • If possible, give the customer a sense of the kind of outcome they can expect for varying prices and/or delivery times (based on previous work done by your company) and give them options to “upgrade” or “downgrade”

Remember we’re supposed to “welcome” change?

Yes, don’t try and fix scope. But be prepared to move around on quality also. Allow the customer to accept an earlier version of your product because it does the job and they’re delighted they don’t need to spend any more cash on achieving their desired outcome. Or to love their product so much that they now want to spend more enhancing it. This is variable quality, in my book. Variable scope refers to the cost-side of building software; the amount of work we need to do to reach a specified outcome. Variable quality refers to the value the customer feels they are getting. It’s subjective, dependent on the customer and their particular circumstances. Delivering high value outcomes to the customer may cost more than lower value outcomes or they may not, depending on what the customer feels about the iterative outcomes. That “old banger” that you bought for $1000 may actually provide very high value and quality to you personally. Or it may be housing a classic engine that you didn’t previously know about, giving it emergent value. To someone else it’s a worthless piece of junk.

In the same way, software solutions, products and services are entirely subjective in their quality. Some people think Microsoft Word is awesome and feature-packed and they base their entire business operations around it. Some think it is terrible, buggy and doesn’t do anything they want it to do. Let’s not pretend that delivering “quality” software is any more a predictable outcome than fixed scope is.

Variable quality pertains to the wonderful opportunities we ought to have with Agile software development for correcting the course and building the right thing; truly welcoming and embracing change for the customer’s (and our) benefit. This is what Agile contracts should be about IMO. Remove the uncertainty of time and cost by making them certain, and celebrate with your customers or suppliers the uncertainty around exactly what will be built. Why not consider basing your contracts on a mantra more along the lines of:

“We guarantee we will work with our customers’ time and budget constraints to iteratively build and evolve a delightful outcome to an agreed level of expectation.”

And for everyone’s sake, we should not be estimating in order to do it.

#NoEstimates Part 1 – Doing Scrum Without Estimates


Introduction

This is the first in a series of essays exploring the huge topic of estimation within software development projects.

There are many different contexts in which estimates are given, and I am going to try and cover off as many as I can think of in these blogs, but the pattern of my argument will remain consistent: I believe we ought not make decisions in software projects based on estimates and that there are better alternatives for both the suppliers of software products (financially and ethically) and their customers (internal and external). Many of these alternatives are being used in real companies delivering to real customers with great effect.

Given the vastness of the topic, this post focuses purely on the scenario of one Scrum (or other method of iterative product development) team delivering a software product without estimating. Issues of scaling up or down capacity (adding or removing teams) will be covered in a later post about estimating at the portfolio level.

Will we deliver on time?

This is a question that often gets asked of a software development team at the beginning and throughout a project, and is a key reason why many believe we need to estimate. However, the ironic twist of seeking predictability by making predictions based on guesses is not lost on most people. We all know, or at least suspect, that we’re plucking numbers out of thin air. That we don’t yet know or understand the solution. Or the domain. We comfort ourselves by calling our guesses “educated” or “quick and dirty” to justify using them to make important business decisions.

Building software is by its very nature unpredictable and unrepetitive. While building software we cannot easily break down the work into same-sized, repeatable widgets like we can when manufacturing car parts. Unlike car production, the exact product we are building is unknown until we’ve built it, so how can we break the work down into smaller parts up front? One increment of software is not like the next. Software development is a creative, variable pursuit, and solutions are often revealed as we go along. For this reason, fixing scope in software projects is not really possible. Even if it were, it is becoming widely accepted that attempting to do so is undesirable because such an approach does not allow for (or, at least, does not embrace) emergent design, requirements, change and innovation. If we accept that scope is always variable, we must also accept that the delivery date may end up as a moving goalpost while we scamper to deliver what we think is fixed scope “on time” and “on budget”.

So, if it is true to say the concepts of “on time” and “on budget” are usually based on an estimate of how long it will take (and how much it will cost) to build software to meet a fixed set of requirements, rather than a concrete time or budget constraint, it is likely fair to say that we may take longer to deliver the software than we initially estimated. Yes, we may also be quicker than we thought. Or we may get our estimate just right. But, regardless of the outcome, does it actually matter how “correct” our estimates were? Does the act of estimating our work have any impact at all, positive or negative, on the delivery of great software or its return on investment?

Vision is key

To build software we need a clear vision and shared purpose of what success looks like. When commencing a potentially valuable software initiative we need well understood high level goals, not the detail of how we will achieve those goals. In true iterative fashion we can then align our just-in-time decisions about how we will improve the product in the next iteration (i.e. what we will build next, aka the top items in the Product Backlog) with these goals. I posit that trying to estimate how long it will take to deliver software to achieve one or more high level goals, and then basing real decisions on this estimate, is a questionable approach. Don’t we want our solution and architecture to emerge? Don’t we want to welcome and embrace changes for the customer’s competitive advantage as the product evolves and becomes more real to the users? These are key principles in the Agile Manifesto and I believe they lie at the heart of a truly Agile approach to building software.

Remove the unknowns

Instead of depending on an accurate estimate for predictability we can take away the unknowns of cost and delivery date by making them… well, known. The Product Owner can fix the delivery date based on a concrete budgetary and/or time constraint (e.g. 3 days before the Australian Open starts is a concrete time constraint for the Australian Open app, and “we have to build something for $30,000” is a concrete budgetary constraint). Within that constraint the team can then fix incremental delivery dates (e.g. the end of every Sprint) to allow focused effort on iterative product evolution (it’s not good to have priorities changing every day on a whim) and provide the opportunity to deliver early and/or under budget. This approach is also useful where there is no concrete budget or delivery date, although the need for interim release dates diminishes if the team (and organisation) is mature enough to have a continuous delivery model.

Estimating sprint velocity is waste

Rather than fix the solution up front (which is required in order to give a “how long” estimate), or make forecasts every Sprint about how many points or stories will get done, I believe teams ought to commit at the outset to building and delivering the best possible product by a given date and/or for a given amount of money. For me, release planning using, e.g., velocity (“how many points can we deliver by the release date?”, or “what is our release date given our remaining scope and velocity?”) is contrary to an iterative approach (holistic, evolutionary improvement of the product) and is more in line with a purely incremental approach (delivering a pre-defined Product Backlog feature by feature).
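
For clarity, the velocity-based release-plan arithmetic I am questioning looks roughly like this (illustrative numbers only) – note that it only produces an answer if the whole remaining backlog has been estimated up front:

```python
# A rough sketch of velocity-based release planning, shown here only to
# illustrate the approach being critiqued. All numbers are hypothetical.

remaining_story_points = 240     # requires every remaining backlog item to carry an estimate
average_velocity = 30            # story points per 2-week Sprint
sprint_length_weeks = 2

sprints_remaining = remaining_story_points / average_velocity
print(f"Projected release in {sprints_remaining:.0f} Sprints "
      f"(~{sprints_remaining * sprint_length_weeks:.0f} weeks)")   # 8 Sprints, ~16 weeks
```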

When we estimate and use velocity as a planning tool we are making an assumption of how much can get done in a time period. For that information to be useful and meaningful we need to have an amount of stuff in mind that we want to deliver (i.e. a fully estimated Product Backlog). I don’t think it would be too controversial to suggest that all the time (and therefore $$$) spent on estimating backlog items that do not end up getting delivered is waste (at least in the Lean sense).

But what about all the time and $$$ spent on estimating backlog items that do get delivered? To answer that question, I will ask one more question: “Did the PO ever prioritise one story over another based on it having a lower estimated cost (story point size)?” If the answer to this question is “No” then I conclude that all estimating in this context was waste because no decision was made based on the estimates that were given (instead the PO simply prioritised the highest value stories). If, however, the answer is “Yes” then estimates controlled what I believe should be value-based decisions. Estimating a backlog up-front and then release planning using velocity is a cost-based approach. While costs are obviously important in running a software project and, indeed, a business, if decisions are made purely on cost then some of the great software we use and rely upon today (e.g. much of what is made by Google, Facebook, Apple, Yahoo, Spotify, etc.) would never have been built and we would have one explanation as to why there is so much crap, expensive, bloated software in the world.

Iterate, don’t estimate

I believe iterative (Agile) development is 100% about making decisions based on customer and/or business value, using empiricism over guesswork and fixing cost by having a fixed team (a la the Spotify “squad” model) with known timeframes (frequent, predictable release dates as opposed to “deadlines”, which are release dates for “fixed” scope based on imaginary constraints). Knowing our costs and delivery dates gives us certainty, which allows us to embrace the delicious uncertainty of building great software.

btw – Having a fixed delivery date doesn’t mean that we will necessarily stop building our product on the delivery date. We may have already stopped or we may choose to continue. What it does mean is that we will continually make go/no-go decisions based on the emergent or potential value of what we are building rather than estimating the cost of a particular solution.

Shift focus to “small”

From the team’s point of view, I believe it is far more valuable to get better at breaking down stories JIT (and only JIT – any earlier is potentially wasteful) to be as small as possible (or, at least, as small as is practically possible) than to “increase velocity”. For me, a high-performing team has the ability to deliver frequent “done” increments to the product that can derive immediate feedback and/or potential value for those using it. Clearly, the smaller the increments the more frequently delivery can happen, which leads to shorter feedback loops, increased learning, and the flexibility for the PO to prioritise emergent features over features she originally thought she wanted/needed that have diminished in value, or even to take a complete change in direction. This, in my opinion, is far more in tune with true business agility.

The importance of how many stories or points get delivered in a Sprint becomes truly insignificant when the team is delivering frequent changes to the product and putting them in the hands of users. This, for me, is the crux of why software teams try to embrace an Agile approach. But until the estimation stops I believe we’re being held back from the true high performance which can deliver awesome outcomes for customers.

Further Reading



Splitting User Stories By The Quality Of Solution


I love this approach to splitting up user story value by considering vertical slices through the technical solution.

Iterative and incremental development is a tricky art to master. Delivering very small increments of value takes some practice. With iterative development we must be happy to frequently revisit areas of the system that we are building as we learn more about them, which is quite different from the traditional approach (broad and shallow engineering versus narrow and deep).

This is where I believe the Agile Manifesto authors were coming from when they spoke about “Simplicity, the art of maximising the amount of work not done“. Implementing the simplest technical solution in order to deliver value quickly. It does not necessarily constitute the final solution, and it certainly does not mean “quick and dirty”. We still need code quality (unit/integration/acceptance tests), and the goal is to have a usable system. Something we ourselves would be happy to use and would be able to provide feedback on.

But for an individual user story we are simply trying to meet the goal of that story in the quickest and simplest way possible while providing an acceptable technical solution to meet that purpose. If the code is simple and maintainable we can easily build upon it if required, and the required architecture will evolve as we both iterate and increment.

So we want stories as small as possible (no more than a couple of days of work) and with the simplest acceptable solution under the covers. A good way of looking at it is “what’s the minimum amount of code I need to write to pass the acceptance tests?” (this approach of course leads naturally into the worlds of TDD and BDD, which I encourage you to read more about).
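
As a deliberately tiny illustration of working outside-in from an acceptance test, here is a hypothetical story (“as a shopper I can see my order total including GST”) with the simplest implementation that could pass it. The names and the 10% rate are invented for the sketch:

```python
# Simplest implementation that meets the story's goal: sum the items and add GST.
def order_total_including_gst(line_item_prices, gst_rate=0.10):
    return round(sum(line_item_prices) * (1 + gst_rate), 2)

# Acceptance test written first (pytest style): given a cart of $20 + $30,
# the shopper sees $55 including 10% GST.
def test_order_total_includes_gst():
    assert order_total_including_gst([20.00, 30.00]) == 55.00
```

If a later story demands, say, per-item tax exemptions, the code can be evolved then – the point is not to build that flexibility speculatively now.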

Working this way enables us to get early feedback on the feature and decide whether to invest more effort (via more stories) for that feature, thus allowing the flexibility for the product owner to prioritise a different area of the system if (s)he so wishes.

Some further reading about splitting user stories:

 

Have a great weekend everyone. Perhaps consider making the goal of your Sprint Planning meeting on Monday to split your stories down even smaller using some of the excellent techniques available. The benefits are numerous.


Sustainable Pace – The Fastest Way To Deliver Software



So you want your team to deliver software faster?

To demonstrate why this request is nonsensical, first imagine a mature, high-performing Agile team that delivers, on average, 10 stories of roughly the same size in every 2-week Sprint (i.e. 1 story per working day).

Now imagine we asked the team to take on just ONE story every Sprint. Their capacity is 10 stories, but we ask them to only deliver 1. What might happen?

Well, we can’t be sure but it is fairly safe to assume that the 1 story is guaranteed to be delivered. We can also be pretty sure that it will be of an extremely high quality, given that the team are working well under capacity and so have plenty of time to dedicate to ensuring a bug-free and pleasant user experience. They may also spend extra time on exploratory testing, ensuring that the whole product, of which this story is a small part, is not hiding some ugly buggy behaviour. If they do find some bugs, they may fix them and add some tests to their regression suite to ensure the bugs don’t recur, increasing the holistic quality and maintainability of the system.

Given that the team knows they are an awesome, high performing team and they have plenty of time to spare in the Sprint, they will likely spend a large portion of their time not working at all. Having fun. Slacking off a little. Giving their brains time to breathe, to reset. Enhancing their team culture and spirit.

From a planning point of view, we may not have speed but we sure have predictability. We know that the team delivers 1 story every Sprint so we can very easily figure out when our product will be delivered with close to (if not exactly) 100% confidence.

OK, now let’s instead imagine we ask the team to deliver 2 stories per Sprint. It’s not too much of a stretch to assume we would get a similar result to the above, except this time some (albeit small) sacrifices will be made. Perhaps some of the extra, luxury activities will be left out. Perhaps all of the aforementioned activities will be done, but with less time spent on them. So a little less story and product quality. A little less fun and recuperation time. A little less team building. While the team will still almost certainly deliver the 2 stories, the probability is slightly lower than when we asked them to deliver 1 story. So we have a little less predictability.

What about if we extend this scenario to 5 stories? Then 8? Now imagine we’re struggling to hit a contractual deadline so we feel the need to “speed up”. So we ask the team that predictably delivers 10 stories to deliver 12 (now we’re over capacity). Or even 14?

Hopefully you can see where I’m going with this. The more stories we ask the team to deliver, the less time they can spend on quality, the more likely shortcuts will be taken, the more likely technical debt will be incurred, the more likely team culture and effectiveness will suffer, the less fun will be had, the more fried the team’s brains will be and the less predictable we will become at delivering software.

Read that again – the “faster” we ask (or worse, tell) our teams to go, the less predictable at delivering software we become, and that software is more likely to be of a lower quality. Allowing our teams to deliver at a constant, sustainable pace ensures quality, predictable software delivery, a higher chance of happy teams and happy customers, which leads to higher business value (e.g. profit).

In short, by allowing the team to find the right balance and deliver high quality software at their capacity, a cycle of success is created.

So, managers, please think twice before asking your teams to speed up, i.e. deliver more stories (or story points) than usual in a Sprint or sequence of Sprints. It’s like asking a marathon runner to start running faster after 32k for the final 10k – you’re increasing the chances of long-term failure (not completing the marathon at all due to fatigue) for a potential short-term gain (running some quicker kilometres).


What Price Estimation?


If I want someone to, say, build me a website, in most cases there are two possible constraints I have. I either have a maximum amount I want (or have available) to spend, or I need my website delivered by a particular date. In a truly Agile project, both of these are the same for the supplier because there is a fixed team, i.e. time constraint = budgetary constraint.

Back to my requirements. Let’s say I have $5000 available. If I engage a web design company, I can choose to not tell them my constraint, perhaps because I want to save money and get the “best/cheapest quote”. I can simply ask “how much will my website cost, given that I want x, y and z?”

This is the predicament many software companies have – how do we determine a price for the customer? The answer is invariably to take the customer’s requirements, devise a solution and estimate how long that solution will take. This will then derive the cost to the company, which will determine the price to the customer.

As customers, let’s stop and think about this. Is this the approach I want the web design company to take? Does this provide the best possible value for me? When I engage the web company, would I rather the following:

A: Stay shy about my $5000 budget, and the company comes back and tells me they can build my site for $4500, having based that decision on a fixed design/solution and guess of how long that design will take to build. Perhaps they’ve actually shaved time from the team estimates in order to under-cut a competitor. Perhaps they’ve added on time as a “buffer”, increasing the price for me. We will sign a contract based on a SoW detailing what I will get for my money. If I want to change any of the detail as I start to see the website built I will need to pay extra or I will need to drop out some of the originally agreed features. These small increments will need to be costed accordingly, again based on a guess of how long the new feature will take compared to the original feature.

B: Reveal my budget. They come back and say that my $5000 buys 5 weeks of work, and the team will build the best possible website they can for that price. They might show me examples of other clients’ websites that cost around $5000 to give me an idea of the quality my website will be. They will work with me in weekly iterations to ensure I’m happy with the progress, that I can change things as we go along and that the key things that are important to me are always being built first. They will deploy my site to a demo URL daily so I can see the site evolve and provide feedback at any time. If after a week, or two weeks, or three weeks, I’m not happy with what is being produced I can choose to end the relationship. This makes it clear to me that the web company is absorbing much of my risk and that they are very confident they will do a great job for me. I am the one gauging the progress against my requirements rather than them estimating that they are “on track”. They want to form a working relationship with me in order to build the right thing, in the hope that they might get my repeat business and that I might recommend them to my friends and colleagues. Their mantra is to delight their customers.

Option A requires estimation (guessing/risk/uncertainty), upfront design and makes change hard. Option B requires no estimation, design can change and emerge as we go along, embraces changes as I see the site evolve and shows a company wanting to work closely with me to achieve a result I am delighted with. One that is prepared to front extra risk (of losing money on the contract) because they are so confident in the quality of work they do and of the relationships they form with their customers.

I know which I’d choose. How about you?


Scrum & Kanban – Different, But Compatible, Strokes For Different Folks


I find it curious when people criticise Scrum as if it is competing with Kanban. I don’t believe it is, and I don’t believe it is particularly worthwhile debating Scrum vs Kanban as two Agile methods because that’s not really the case. Kanban and Scrum have quite different purposes (although they do perhaps have similar intentions).

Put simply, the purpose of Kanban is to create a kaizen culture, one whose primary concern is that of learning, improvement and process evolution using “the scientific method”. Conversely, despite Scrum having lofty yet admirable aims of “changing the world of work”, the purpose of Scrum is to enable teams to develop products effectively. Scrum is generally a bottom-up, team-based approach and so, as the Kanban brigade rightly point out, it is not particularly (if at all) effective at instilling a kaizen culture (fortnightly team retrospectives, even done well, do not create a culture of continuous improvement in an organisation). It’s also not great as an enterprise solution to perceived effectiveness problems unless the organisation really understands the cultural implications of moving to Scrum across the board and has a collective mindset that can buy-in and adapt.

But here’s the rub. To me it’s not about whether an organisation should choose Scrum or Kanban – both are frameworks or methods for different contexts and different intended outcomes. Many companies have identified that they are crap at delivering software and want to get better at it. Rightly or wrongly, these companies are not seeking a kaizen culture. They simply want to deliver software better (by their terms), not improve their effectiveness overall. I am not saying this is a good thing but at least by choosing Scrum to (try and) improve their software delivery it might just get them thinking about the importance of learning and improvement to overall organisational effectiveness. I know from personal experience of coaching new Scrum teams (imposed or not) that they begin to get curious about Scrum and Agile, and then the curiosity spreads to Lean and Kanban. A good coach will introduce teams and their managers to Lean and Kanban concepts and techniques within Scrum (or evolving away from it as the team grows in confidence) as part of a drive for true self-management, measuring, learning and improvement. I have seen, and been part of, many Scrum-ban implementations. They may not have changed their companies for the better as a whole but they certainly helped those companies deliver software better, which is what Scrum ultimately is intended for.

As for the argument about Scrum prescribing roles, meetings and processes, I believe this is down to mindset. If rather than describing the Scrum framework by what it “prescribes” (I prefer the word “recommends” but I will continue to use the word “prescribes” because I see no harm in prescribing something within a framework that one chooses to use) we instead describe it by what it intends, Scrum is a framework for enabling teams to iterate over a product until the business or customer deems it valuable enough to ship. So, if you’re in a position where you want to develop a product iteratively (or at least incrementally) and want to put a team together to do that, Scrum is (potentially) an excellent choice. If you were to choose just Kanban for developing a product, which of course you could, then by default you will not be changing anything about the way you currently work. This is not necessarily a good thing.

For example, Kanban does not prescribe iterations but often Kanban implementations use some kind of iterative process (even if it’s just having a fortnightly review of the product) and teams do this for good reason. Sure, having iterations (Sprints in Scrum) doesn’t guarantee an iterative and incremental approach to building the product but it at least hints it might be a good idea. Even if you don’t fix your scope within the timebox it still makes sense to have (say) fortnightly demos and a chance for everyone to review and evaluate the product holistically. This is a sound and effective approach to software delivery, as borne out by the Agile Manifesto’s recommendation of measuring progress via working software and delivering value early and often.

Similarly, Kanban doesn’t prescribe cross-functional teams, so if you happen to have silos of developers, testers, designers, etc. working in a Waterfall fashion with hand-offs then you will continue to work in that way and not reap the benefits (at least early in the game) of forging collaborative relationships and working as a cross-functional team until such time as the kaizen to try this is agreed upon. This approach may be better in the long run in terms of organisational effectiveness, but in the short term it could be a slow path – too slow for the business to accept – to delivering shippable increments early and often and measuring progress with working software.

Being a framework, Scrum prescribes meetings and roles, but without them there is no guidance toward effective delivery of value early and often, or toward the aim of breaking down complex problems by building an end-to-end shippable product in increments as a team – in other words, if you take these meetings and roles away it’s not really a framework, is it?! The meetings point out the importance of continuous business/customer feedback, prioritisation and trade-offs (as does the Product Backlog), just-in-time planning, correcting your course, team process improvements, etc. The roles point out that there is conflict in the traditional Project Manager role between serving the team and serving the business, and that an iterative (Agile) approach to software development requires coaching at both the team and business level, hence the Scrum Master and Product Owner roles.

A product development framework without some semblance of structure renders it useless as a framework. If the framework is abused (as it often is, but this is not the fault of Scrum) then its effectiveness will be diminished or negated completely. But this does not mean that Kanban is better than Scrum for product development or that Scrum should not be used. In the right context and with the right mindset, Scrum can be extremely effective.

To be honest it all depends on context (as it always does) but, put simply, if an organisation wants to change in terms of improving software delivery, Scrum may well be more effective than Kanban. If an organisation recognises that it needs to embrace a kaizen culture, not just to be better at shipping software, then pure Kanban could be the way to go. But trashing Scrum because it is not always good as an enterprise solution (ironically it can be but doesn’t prescribe how to do this) or because it defines structure (which guides towards effective practices congruent with Agile) seems glib to me.

Scrum and Kanban are different approaches for different contexts but can work beautifully together in certain situations (generally product development in a team and company with the right mindset to be open to new, collaborative approaches to delivering value). One can evolve into the other, either way. They are both interesting and have noble principles. There is much to learn, and teach, in both.


Should We Get Rid Of The Product Backlog?


What’s wrong with the Product Backlog?

Many companies and teams are using the idea of backlogs to help them evolve, visualise and order their portfolio of work. In terms of the work required to bring a particular product to fruition, the Product Backlog is often used in conjunction with an iterative development approach as an alternative to documenting a fixed set of requirements and a solution before development work is started.

However, the Product Backlog concept niggles me quite a bit and has actually proven in my experience to be a poisoned chalice in some respects. I actually now believe that constantly adding, removing and tailoring requirements (or stories, use cases, whatever) on the Product Backlog is (especially in the wrong hands) a fairly ineffective and costly approach to building software.

There are several reasons why I believe this to be so:

  • It thwarts innovation
  • It compromises the holistic vision of the product
  • It creates a “requirements black hole”
  • It causes a maintenance overhead (cost, inefficiency)
  • Large queue = high cycle times
  • It makes it difficult for the PO to understand dependencies
  • It trivialises the role of the PO to one of ordering/prioritisation

Thwarts innovation

A Product Backlog is supposed to be a list of things we might want in the product, ordered by value (value pertaining to importance, ROI or whatever else the Product Owner deems a worthy reason to satisfy certain requirements as the next priority). However, what it often ends up becoming is a big long list of everything we (think we) need to build in the product. Aside from the fact that it becomes increasingly difficult to maintain and make sense of this list, building the product becomes a ritual of ordering the backlog and the team building the top things from the backlog in iterations until the product is deemed ready to “go live”.

A problem with this approach is the same problem that one has when building a product based on up-front specification documents – it is not promoting innovation in the product’s evolution. If things are on the backlog then it seems a reasonable assumption that someone has put some thought and time into why that thing should be on the backlog, so there is a tendency (for the PO and team) to want to build the product “as is” and not upset the apple cart too much. In short, the backlog becomes nothing more than a list of up-front requirements which may as well be in a BRD.

A truly iterative approach to building software allows requirements, design and architectural improvements to emerge as we go along. This sometimes means scrapping the whole thing and starting again. If we simply “work from the backlog” we may not pay the necessary attention to determining how best to evolve the product and instead go for the easy option of simply churning out the stuff already on the backlog.

In Scrum, the Sprint Review is intended as a meeting to review the evolution of the product and how it should be taken forward in the next Sprint. Many companies instead have a “Showcase” to demonstrate what has been achieved in the last 2 weeks. This approach completely negates the importance of feedback and putting our heads together to determine the best bang for our buck over the next 2 weeks, i.e. “reviewing” the product.

Many companies plan 4, 5, 6 or more iterations in advance, lining up the “stories” to be done in those iterations and completely skip the innovation part.

Compromises holistic vision of product

For iterative development to work well we must continually evaluate the product as a whole, i.e. we must iterate and increment simultaneously. The Product Backlog does not promote this concept.

Again, there is a tendency when working with a list to just work through the list – to add purely incremental value rather than take a holistic approach. This can lead to much re-work, delay and added cost, both from a product value and a technical/architectural point of view.

Requirements black-hole

The idea with the Product Backlog is that we can easily add new requirements to it and re-order things so that if a new opportunity emerges while we’re building the product we can easily prioritise that opportunity and deliver the value fast. In reality what happens is that stakeholders ask for features and the PO adds them to the backlog to keep them happy. This (rightly or wrongly) sets expectations. And with expectations come a whole barrage of politics. The problem here is that the PO can give no guarantees whatsoever that the feature being asked for will ever be built, i.e. the goal posts are moving. Thus the backlog becomes a “requirements black hole”. Do not under-estimate the negative effects of this in terms of trust among colleagues and meeting your goals.

A stakeholder once said to me “when I’m told my request is on the backlog I immediately know it will never be built”. This is often a reality, so is there a better way?

Maintenance overhead

Not only is the Product Backlog a potentially enormous list of stuff, it’s a list that needs to be constantly groomed, usually at least fortnightly, to ensure the highest value things are at the top. Whether you use a backlog management tool or index cards, this creates a significant maintenance overhead (inefficiency) for the PO and team (and potentially other stakeholders).

The backlog can quickly become the focus rather than the product itself, and as it continues to grow it becomes increasingly difficult to prioritise or focus on the highest value things to build.

Large queue = high cycle times

Every new requirement added to the Product Backlog increases the average cycle time to deliver functionality to the users. Having a large Product Backlog can add weeks, months or (dare I say it) years to cycle times. Is it particularly “agile” to tell a stakeholder that it will take 6 months to deliver a piece of functionality that in effort terms is only a 2-week piece of work? This situation can arise if you let your backlog get out of hand. All the dead wood requirements sitting down the bottom that everyone has forgotten about (but is afraid to delete) are preventing you from being responsive to the market or attacking new high value opportunities.

This potentially means those features that could give you competitive edge in the market will be scrapped for being deemed to take too long to deliver.
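
A minimal sketch of the queueing arithmetic behind this point, with hypothetical numbers: treating the backlog as a first-in, first-out queue, the average wait for a newly added item grows directly with the backlog’s length.

```python
# Little's Law view of the backlog: wait time = queue length / throughput.
# All numbers are hypothetical.

throughput_per_week = 5   # items the team actually finishes per week

for backlog_size in (20, 100, 500):
    avg_wait_weeks = backlog_size / throughput_per_week
    print(f"Backlog of {backlog_size} items -> roughly {avg_wait_weeks:.0f} weeks "
          f"before an item added at the bottom is delivered")
```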

Dependencies

A large Product Backlog inevitably creates dependencies among items. Innocently adding a requirement to the backlog can eventuate in a cascade of dependencies that can add months to a project. By glancing at the backlog, are these dependencies transparent? Generally, no. They are invisible to the naked eye and thus have far reaching implications for the PO when trying to effectively order the items on the backlog. It can be extremely frustrating for a PO when the highest value items – requirements that they have taken time, negotiation and effort to prioritise and move to the top of the list – move down the list because of technical or other dependencies.

Taking a more holistic approach to the product makes it easier to dissolve these dependencies.

Trivialises role of PO

The larger the Product Backlog, the more time the PO will need to spend ordering it. This means more prioritisation sessions, more cost-benefit analysis, more workshopping, more estimating in order to determine size for ROI purposes. Little wonder that Product Managers are reluctant to take on the Product Owner role.

The Product Backlog can potentially trivialise the PO role to one of ordering and prioritisation of work rather than concentrating on building the best possible product with which to penetrate the market or increase the value of the business.

So, what’s the alternative?

To my mind, and in my experience, the important things about a product rise to the surface if you are doing proper Just-In-Time planning. By using the Sprint Review and the Sprint Planning meetings properly, the team and PO can properly gauge the evolution of the product and what direction it needs to take next. Why is a Product Backlog required for this? If you can’t remember what needs to be done, it’s not important. If you can remember what needs to be done, you don’t need it on the backlog!

I have found an evolving product roadmap can much more effectively align stakeholder expectations with what’s actually being built. A roadmap is very clear, easily interpreted and gives interested parties the information they crave. In the Sprint Planning meetings, why not ask yourself “How should we take this product forward in the next 2 weeks, and what can we realistically achieve?”. This focuses everyone on what is achievable which helps with simplicity of design as well as focus on value. Then update the roadmap with the new or changed high level ideas emerging from this planning session, and the rough delivery timeframes. It is a mistake to just focus on the next increment of the product in each Sprint Planning meeting. Each iteration should be an opportunity to re-align everyone with the product vision and what the best approach for the next 2 weeks should be.

Conclusion

A Product Backlog done well should paint a picture of the product. It should tell the story of what you aim to achieve. You should be able to show the Product Backlog to someone completely uninvolved and they can gauge exactly what the purpose and vision of the product is. What innate user need it is meeting. The “why” of product development.

If your backlog is simply a long list of stuff that will most likely never be done, perhaps you can look at an alternative approach?


Learning, Believing, Knowing – Does Peru Exist?



“Learning is acquiring new, or modifying existing, knowledge, behaviors, skills, values, or preferences and may involve synthesizing different types of information.”

 ”Belief, a psychological state in which an individual holds a proposition or premise to be true”

“Knowledge is a familiarity with someone or something, which can include facts, information, descriptions, or skills acquired through experience or education.”

I was pondering this morning about the difference between Learning, Believing and Knowing. The differences may seem obvious, but I’d like to explore a few questions:

  • Does learning lead to knowing or merely to believing?
  • What constitutes knowing something?
  • If a fact requires experience to confirm it, what if we have no experience of the subject of the fact?

We say things like “you learn something new every day!” but how much of the stuff that is absorbed into our brains on a daily basis is actually learning? Since I started using Twitter a couple of years ago I feel that I have learned very much from many people on many subjects. Similarly, as I read blogs, articles and books and talk to people I feel I am learning more and more. But what do we mean when we say we are learning? Do we mean that we are acquiring new facts (or believe we are) or are we merely merging what we are being told and what we have seen and read into our own opinions and views of what we know?

Does Peru exist?

This seems a silly question but I am using it to make an important distinction between knowledge and belief. Of course the answer to this should be a unanimous “yes”. But why am I so sure that Peru exists? I have never been there. I can’t remember talking to anyone who says they have been there. The reason I know it exists is that there is overwhelming evidence to its existence that I have observed. I have seen pictures (claiming to be) taken in Peru. I have seen video footage (supposedly) shot in Peru. I’ve seen (what I’m told is) Peru on satellite images of the Earth. It is a “fact”. Right?

“A fact (derived from the Latin factum, see below) is something that has really occurred or is actually the case. The usual test for a statement of fact is verifiability, that is whether it can be proven to correspond to experience.”

Hang on, so I can only verify that Peru’s existence is a fact if it has been proven to correspond to experience? Well, I have no experience of Peru, other than the pictures, video, etc. that I’ve seen, so until I’ve actually got on a plane and gone to Peru can I be absolutely 100% sure it exists? If I’m really pushed, might my confidence level only be 99.9999999%? I’m relying on other people’s proof and experience to be so sure that Peru exists. Rather like we rely on scientific understanding of the world to establish facts that would be impossible for us individually to verify (like gravity), and reject information that is not established as fact (like the existence of a higher being, intelligent design, etc.).

I don’t remember the instant when I first heard there was a country called Peru. Let’s assume that as a child I heard someone mention it and asked my parents “What’s Peru?”, to which my Dad answered “It’s a country in South America”. Now, my question here is: at the point my Dad told me of Peru’s existence as a country in South America, did I learn that Peru exists or did I simply begin to believe that Peru exists? I was a child, so I was also told of Santa Claus and the Tooth Fairy’s existence. What made Peru’s existence more real to me?

Do I know anything?

To give a current, grown up example, I follow a gentleman on Twitter called Bob Marshall (@flowchainsensei) who, among his other achievements, created the Marshall Model of Organisational Evolution. In Bob’s words:

“Simply put, the Model explains how the effectiveness of any knowledge-work organisation is a direct function of the kind of mindset shared collectively by all the folks working in the organisation – managers, executives and employees, all.

effectiveness = f(mindset)”

Since I first learned of the Marshall Model’s existence (I observed it personally, and so can you with the link above, so I can verify as a fact that the Marshall Model exists), I have read more about it, interacted with Bob on Twitter and blog posts, and from all this have gleaned a genuine interest in organisational effectiveness (thanks Bob, if you’re reading this).

What’s also interesting to me, though, is how I have embraced the rightshifting concept to a point where I tell others about it. I now know not only about its existence but also what it tells us about organisations. Or do I? Bob came up with the model and so obviously believes, knows, it to be a true reflection of organisational effectiveness. But when I read more and talked to Bob about it, did I learn more about the model or did I merely start believing more in the model? Do I now know that effectiveness is a function of mindset, do I merely believe it, or have I simply learned that someone else believes or knows it?

Throughout my career I have always felt that there are certain types of organisation when it comes to culture and how they get things done, and that I certainly prosper more readily in what Bob’s model would call the more rightshifted organisations. So is there a chance that when I saw the Marshall Model my cognitive bias leaned me towards its principles and helped me embrace it as observable and true? Or do I actually have evidence that the model is true, and thus I have learned the model’s effects as fact?

My cognitive bias also leaned me towards Agile, because its values and principles align with me as a human being. One might call this “mindset”. I coach Agile principles and practices and have observed certain behaviours causing certain results, some repeatedly. But all of my experience, and what I count as knowledge, is based on my own view of the work and the world. Without continued learning on everything I think I know about, even things I consider myself an “expert” in, I cannot be sure that I actually know enough, or ever will. For all I know, everyone else I encounter might think I’m a complete duffer when it comes to product development, even though I think I’m quite good at it!

Learn to learn

We all use our knowledge every day in our work and our personal lives. I do think, though, that it’s very important to acknowledge that much of what we think we know may actually just be things we believe and have never actually verified to be fact.

This is one of the many reasons why learning is the key word of the three used in the title of this post. We cannot know, or even believe in, something until we have learned about it. I learned about God as a child and started to believe in Him. I learned about Santa Claus and believed in Him too. But I never really knew that either existed. I certainly thought I knew (presents arrived on Christmas Day), but I didn’t. Unless we recognise that we must learn how to learn, and then continue to learn daily, infinitely, we cannot purport to truly know anything.

What do you think you know?


Should We Estimate Software Projects… At All?


Introduction

After a year or two of “having a hunch” about this, and after many years of either estimating work or working to someone else’s estimates, I’ve now finally come to the conclusion that the use of estimation of any kind in a project is not only a waste of time but is actually destructive.

I am fully aware this is an extremely controversial statement, so I am going to be as thorough as I can in explaining how I came to this conclusion via experience, data and validation. Indeed, when I read Vasco Duarte’s post about this several months ago, I saw his “point” (no pun intended) but also argued the merits of using story point estimation for the purposes of:

  • Up-front sizing of a project to determine its validity within a given budget or timeframe
  • Increasing shared understanding and knowledge within the team based on the discussions that arise from a Planning Poker session
  • Allowing the PO to make trade-off decisions between different sized stories (based on ROI)
  • Measuring team velocity
    • To continually validate the initial project sizing by predicting scope-fit within a given release date
    • To allow the team to measure and improve its performance

Why shouldn’t we estimate?

I have since come to the conclusion that some of these things do not need to be done at all, and the other things can be done without the need for estimating (guesswork) of any kind. I would now additionally argue that even if you acknowledge the shortcomings of estimation and use ranges, account for uncertainty, etc., the act of estimation in itself is destructive for the following reasons:

  • “Fixed” scope project delivery expectations are often (always?) based on an up-front estimate of scope (guess) and how long that scope will take to be delivered (another guess), leading to the obvious dysfunctions like death-marches, low quality, etc.

If the budget is fixed, there is no way of going “over budget” in order to deliver the fixed scope. Yet “over budget” is a common term used when describing failed projects. If your budget is truly a constraint then you will only deliver what can be delivered. Agile methods ensure that what you deliver is of the highest value to the business.

I chatted to a team member earlier and he complained of feeling pressure to increase velocity. I asked him where this pressure was coming from and he said that it stemmed from the concern that the project will fail if the team isn’t able to deliver more stories more quickly. No one is actually specifically asking the team to deliver more, but there is an implied pressure to do so because they are aware the budget is running out. This mindset comes from years of poorly funded, gated projects, death marches, focus on productivity rather than quality and canned or failed projects.

  • Asking teams to estimate how long their work will take (or how many points they will deliver in a Sprint or a Release, same thing) has connotations that their output is being measured by an external party (manager), creating an environment of fear and massaging figures to reflect what is desired rather than what is predicted

To increase velocity the team simply needs to over-estimate stories to give the illusion of delivering more. They may not consciously do this but it may happen sub-consciously. The project manager pats them on the back, but all that has happened is the same amount of “done” working software has been delivered.

It’s time to get real and use real data to reflect real progress, whether it’s good news or bad.

  • We shouldn’t be defining all our scope up front, meaning we shouldn’t estimate all our scope up front, meaning we shouldn’t be defining our delivery date based on our scope

We should be fixing our Release 1 delivery date and aiming to build the best possible product by that date (variable scope).

As soon as we introduce the word “estimation”, the default mindset is to consider “how long will this project take?” (even if this isn’t asked explicitly). This causes us to consider the complete scope and duration of the project (this is anti-Agile, and I won’t go into why it’s a bad idea because enough has been written about that already elsewhere).

How do we size a project?

Short answer – you shouldn’t. If you don’t have a firm deadline for your project (e.g. day 1 of the Grand Prix for a Grand Prix app), you will have a budget for your project (set by the PMO or the external customer), from which you can derive a deadline. The smart thing to do is to then plan an interim release (say at the halfway point) where you can gauge how the project is going based on the working software measure.

For example, if your budget gives you enough cash for ten 2-week Sprints (given a fixed, 100% allocated team), clearly you need to assume that your go-live date is in 20 weeks time. But the aim should be to get working software in a production environment in 2 weeks time (after Sprint 1). You should then iterate over the product, allowing requirements (scope) to emerge and shape the direction the product takes, and take time to reassess after Sprint 5.
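As a minimal sketch of that arithmetic (the budget, team cost and start date below are hypothetical, not from any real project), deriving the go-live date and the mid-point reassessment from the budget might look like this:

```python
from datetime import date, timedelta

# Hypothetical figures: a fixed budget and a fixed, 100% allocated team.
budget = 200_000           # total project budget
cost_per_sprint = 20_000   # fully loaded team cost for one 2-week Sprint
sprint_length = timedelta(weeks=2)
start = date(2014, 1, 6)   # Sprint 1 starts here (made-up date)

sprints_funded = budget // cost_per_sprint                 # 10 Sprints
go_live = start + sprints_funded * sprint_length           # 20 weeks out
first_release = start + sprint_length                      # working software after Sprint 1
reassess = start + (sprints_funded // 2) * sprint_length   # pause and reassess after Sprint 5

print(f"Budget funds {sprints_funded} Sprints; assume go-live on {go_live}")
print(f"Aim for production after Sprint 1 ({first_release}); reassess around {reassess}")
```

The point is that the delivery date falls out of the budget, not out of an estimate of the scope.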

These things are not predictable up front – estimation will set you up with a load of scope (expectations) that will not get delivered and will only create unnecessary analysis time (money) and pressure.

How does the team get shared understanding of a story?

Simple. When a new item is added to the top of the product backlog, the team will discuss it in Sprint Planning and break it down if necessary. If it doesn’t need breaking down then it is likely already well understood. If it does then the act of breaking it down will necessitate conversations around the implementation detail that will facilitate shared understanding.

In short, the team does not need to be in an estimation session to discuss and break down a story.

How can the PO make trade-off decisions?

The PO probably needs to know the ROI of a story when introducing it to the team to be delivered. In order to calculate the ROI she needs to know how much it will cost to be delivered (how long).

Traditionally, a team would estimate the item using story points, and then the PO, armed with the team’s velocity, could estimate the item’s ROI. But without story points, how can this be done?

This is where the concept of “implicit estimation” comes into play. In order to create predictability in the flow of work, the team will break down stories just-in-time (in Sprint Planning) so that they are all roughly the same size. This is something that happens naturally throughout the course of the project. Over time the size of stories normalises because the team naturally wants bite-size chunks to work on in the short time period of the Sprint. They get used to delivering a certain number of stories, give or take, in a Sprint.

So for the PO to cost the item, she just needs to ask the team if it is understood or needs breaking down. If the PO considers it high enough priority she will want to introduce it in Sprint Planning so that it gets built right away, if it makes sense to do so. Sprint Planning is the place for the team to break down the story if required and decide if it can be delivered in the Sprint. If it can, the cost of the item is essentially 2 weeks of team wages (assuming production deployment is done at the end of the Sprint – a continuous delivery model can improve speed to market and ROI, but that’s a discussion for another day).

If the item can’t be delivered in the Sprint, the PO can simply look at how many stories have been spawned from the epic item and determine the likelihood of it being delivered in the next Sprint or the Sprint after, based on how many stories the team usually gets through. This leads me nicely on to the topic of how we measure velocity in the absence of story points.
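A rough sketch of that costing logic, with entirely illustrative numbers for story counts and team wages:

```python
import math

# Implicit estimation: stories are broken down to roughly the same size, so an
# epic's likely cost falls out of how many stories it spawns and how many
# stories the team usually gets through in a Sprint. Figures are made up.
stories_in_epic = 8        # stories spawned when the epic was broken down
usual_throughput = 5       # stories the team typically delivers per Sprint
cost_per_sprint = 20_000   # roughly two weeks of team wages

sprints_needed = math.ceil(stories_in_epic / usual_throughput)   # 2 Sprints
likely_cost = sprints_needed * cost_per_sprint

print(f"The epic will likely span {sprints_needed} Sprint(s), costing around {likely_cost}")
```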

How do we measure velocity?

Now I’m moving firmly into Duarte territory. The answer is we count stories rather than accumulate story points, hence negating the need to estimate. As I mentioned before, teams break stories down into roughly the same size, so counting how many stories are delivered in each Sprint makes for a satisfactory measure of velocity. If the team usually delivers 5 stories with zero defects and then one Sprint delivers 6 or 7 stories with zero defects, an improvement has been made (disregarding variance, which exists whatever unit you use to measure velocity).

Due to the hunch I mentioned earlier, I have been tracking velocity as both story count and points for my current team and making projections using both methods. As I suspected (and as Duarte points out with much supporting data), story count provides just as good, if not better a measure of progress and predictability as story points do. Therefore why spend all the time, cost and effort on estimation sessions and velocity calculations?

While story count works great for velocity, I would still warn against using this or any other velocity measure as a way of predicting when you can deliver. You should know when you are delivering and only be predicting what you can deliver at that date. Don’t leave your delivery date to chance, even if you are using historical data rather than guesswork to predict how many stories can be done.

What you can do, however, is use velocity to help the PO understand scoping trade-offs in the backlog (“the data tells me the team can deliver 20 more stories before the release date, so I’ll make sure the most important 20 are at the top of the backlog“).
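A sketch of that trade-off conversation, again with made-up numbers, using nothing more than historical story counts:

```python
from statistics import mean

# Forecast from real data rather than estimates: how many stories will the
# team plausibly deliver before the fixed release date? Numbers are illustrative.
stories_per_sprint = [4, 6, 5, 5, 6]   # stories delivered in recent Sprints
sprints_remaining = 4                  # Sprints left before the release date

forecast = round(mean(stories_per_sprint) * sprints_remaining)
print(f"The data suggests roughly {forecast} more stories before the release date,")
print("so the PO makes sure the most important ones sit at the top of the backlog.")
```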

Conclusion

It’s taken me several years to come to this conclusion. But, if you think about it, people laugh and joke about estimates all the time. Everyone knows they’re a guess. Everyone knows they’re wrong. Yet we continue to do them. I believe it is time for us to acknowledge that it makes far more sense to eliminate the risk and cost of estimation completely and use only empirical data (as Agile and Scrum promote) to make predictions.

In a world without estimation overhead the team is likely to be happier and more productive, the inefficiency of spending time estimating rather than delivering working software is eliminated, and the PO will have real data with which to make decisions rather than guesses made under pressure.

To summarise:

  • Don’t estimate your delivery date – base it on your budget or a firm deadline
  • Don’t estimate your scope – allow it to emerge in order to reap the benefits of building products with agility
  • Don’t explicitly estimate product backlog items (stories)
  • Use historical data (story count) to predict scope delivery on a given date
  • Use just-in-time implicit estimation (story breakdown in Sprint Planning) and past data to estimate cost (ROI) of story delivery

I don’t like to guess, but I predict that not estimating your projects will make success far more probable :)


The Horror Of The Scaled Agile Framework



I’ve just watched a presentation that’s made me so angry it’s prompted me to write my first blog post in ages! Sorry I’ve been away so long :)

I’m not a fan of the “Scaled Agile Framework”, to say the least. Dean Leffingwell, a man with whom I generally find myself agreeing, is in on this. However, this framework is a horrible, money-making bastardisation and Frankenstein’s monster of Scrum, Agile and Waterfall that is being sold to large companies who are too afraid to really change and just want to increase productivity, reduce defect counts, etc. and find a place in the “Agile” world for their managers.

The whole concept of iterating over a product rather than simply incrementing features is fundamental to Agile and Scrum but completely bypassed with this framework. Continuous delivery in order to tap into the market as early as possible and adapt the product is ignored (instead a 2-day release plan meeting is held in which all the features the PM wants done in the next 10 weeks are broken down into user stories and put into Sprints – yuk).

There is even a “hardening Sprint”, which is a fancy term for a 2-week phase for bug-fixing and deployment activities, because companies “really need it” (read: it’s too hard to truly get things “done done”, so we’ll leave time for it at the end – and of course “the end” is a deadline based on an estimate of how long all the features will take to build, i.e. guesswork around fixed requirements – ring any bells?). Yuk yuk yuk!

Scrum scales perfectly well without this framework, thank you very much! Each product has a backlog, which is derived from an overall program backlog at the portfolio level. Each product has one to many synchronised teams – done! Why synchronise the whole frigging organisation’s product development?! Yeah, like that will work. It means no single team can adapt its process, because it’s locked in to the organisation’s “Agile” framework.

Scrum-at-scale is far better because it holds true to the founding principles of Agile and Scrum, but also allows hundreds of people to work together towards a common goal. If the business needs to change program priorities then it can, because it is doing Scrum! Simply cease work (if required) on the product or work stream that is being moved down the backlog at the end of the next Sprint, and start the team (or a different team) on the new product.

Rant over – for now! I’d be interested to hear what others think.
