A Twitter conversation and some reading has triggered some new, and probably some old, thoughts on estimation and the #noEstimates tag and meme.
The Name It’s Called
Many people object to the tag #noEstimates on the grounds that it is too binary, too contentious, impossible to attain, and so on. My plan is to set that objection aside quickly.
Let’s agree, or pretend to, that the name is bad. OK, but in truth I both agree and disagree.
- I agree because
- The name has generated a lot of unproductive discussion and, from a few notable corners, some visibly unreasoned name-calling. That’s not good. (It may also be The Way of Twitter, and little to do with the tag and much to do with the holding of a differing view.)
- I disagree because
- The name has also generated a lot of productive discussion and experimentation, with good results reported almost universally. (I can recall no tweet or article saying that someone tried “no estimates” in a fashion focused on being productive with it, and got in deep trouble. Maybe they’re out there. But by and large the people who’ve tried it have reported interesting, valuable, and generally harmless results.)
I’ll refer to the ideas as “noE” here, and if you want to think of some better name as you read along, or just go “lalalala” in your head, that’ll be fine.
The Names That Are Called
There are a few people on Twitter who have made it their purpose to attack not just the #noEstimates idea, but the people who support it, whether strongly, as Woody and Vasco seem to do, or a bit more flexibly and philosophically, as I try to do.
Continued ad hominem arguments, or blatant mis-characterizations of the opponent’s position, have no place in this kind of discourse. My personal practice has been to block individuals who cannot stay open minded, productive, and at least somewhat impersonal.
But we’re here to talk about the ideas a bit. Let’s get to it.
“Estimation” is not well-defined
Folks who don’t understand, and in particular those who want to argue against noE, often wind up taking the position that any approach that predicts the future and attempts to deal with it is estimation.
Looking at a proposed project and guessing that it could be done between January and June is estimation. Guessing that it would require around 10 people and cost around a million dollars: again, estimation. I agree with that.
A baseball fielder catching a fly ball observes the ball, runs right and left, back and forth, to be in position to catch it when it comes down. This, too, according to some participants, is estimation. I don’t agree with that.
As you probably know, there is a clever heuristic for catching the ball that doesn’t involve a lot of calculation and numbers. You hold your glove up ahead of you, roughly where you’d like the ball to land. You observe the ball and your glove. If the ball is to the left of the glove, move left. If it’s to the right, move right. If it’s above, move back. If it’s below, move forward.
If you repeat this iteration rapidly enough, the ball will hit your glove. (In reality, of course, the player doesn’t even need to do the glove positioning: their experience lets them see right, left, up, and down more directly.)
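The iterative heuristic above can be sketched in a few lines of code. This is my own toy illustration, not anything from the fielding literature: a "fielder" at an arbitrary starting point makes repeated small corrections toward the ball's landing spot, with no trajectory math anywhere.

```python
# Toy sketch of the "move toward the ball" heuristic: no physics, no
# calculation of the trajectory, just repeated small corrections.

def catch(ball_x, ball_y, fielder_x=0.0, fielder_y=0.0, step=1.0, ticks=200):
    """Each tick, step left/right and forward/back toward the ball's spot."""
    for _ in range(ticks):
        # Ball left or right of the glove? Move that way, without overshooting.
        if fielder_x < ball_x:
            fielder_x = min(fielder_x + step, ball_x)
        elif fielder_x > ball_x:
            fielder_x = max(fielder_x - step, ball_x)
        # Ball deeper or shallower? Same correction, in depth.
        if fielder_y < ball_y:
            fielder_y = min(fielder_y + step, ball_y)
        elif fielder_y > ball_y:
            fielder_y = max(fielder_y - step, ball_y)
        if (fielder_x, fielder_y) == (ball_x, ball_y):
            return True  # in position when the ball comes down
    return False

print(catch(30.0, -12.5))  # iterated correction gets there: True
```

The point of the sketch is that the loop never computes where the ball will land from first principles; it just keeps reducing the error, which is the same spirit as steering work by frequent small observations rather than one big up-front prediction.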
The point is, there are different ways of predicting the future, and while we haven’t carefully defined what they are, some of them are like writing down a lot of numbers and adding them up, and some of them are pretty different from that.
In the noE discussions, things go better when we’re careful what kinds of estimates we’re talking about. That certainly hasn’t happened often enough. However, observing that fact doesn’t change the fact that there are good ideas in, and good reasons for, the overall movement.
But certainly there is a general “question all estimates” tone, and some differences of personal style, that have led some discussants not to be too specific, sometimes to the detriment of the discussion.
If we get to the level of details here, I’ll try to be fairly specific. We may not go that deep, I’m not sure yet.
I hope we can all agree that there can be, and often are, abuses of software estimates.
“Negotiating” From Power
One common one, which can occur at levels from an entire product down to a few days’ work, is to “ask” developers for an estimate, and then, from a position of power, “negotiate” it downward.
This can be done very gently: “Come on, folks, it’s not that hard, and we’ll give you all the help you need, surely you can get it by May rather than October. We’re all on the same team here.”
Of course, even the above, which is about as gentle as I could write it, isn’t all that gentle. “We’re all on the same team here” often (always?) really means “You’d better get with it if you want to be on the team come June”.
Or it can be done very directly: “We’ve promised them May. We have to get it done by May. Tell us what you need, but get it done.”
Or even worse: “I’ll have it by May, or heads will roll”.
And I have to mention the story I heard of some managers talking about an effort that was running behind their desires. One said to the other: “I guess we’ll have to whip the ponies harder”.
This kind of “negotiating” stance is too often really about power and pressure. While it may be true that a little pressure helps, I have my doubts, and there’s no doubt at all that rushed software doesn’t work well, isn’t very maintainable, and is usually later than it would have been had it not been rushed.
Estimates Become Promises
Too often, ranged estimates get pinned to the lower end of the range. “Between June and October” turns into June in management’s mind, and when it starts looking more like September, management sometimes claims “You promised this by June!”
Abuse is Common
Abuses like these are, I’m sure, one of the concerns behind the noE movement. They are common in software development. I’ve been doing software development for half a century and I’ve seen them more often than not. Maybe I’m just unlucky, but no, I think these abuses are quite common.
A natural (but simplistic) reaction is to think “Estimates caused this abuse, let’s not do estimates”. Yes, that is simplistic, but it has some merit.
Another apparently natural (and also simplistic) reaction is to think “Inaccurate estimates that aren’t frequently revised caused this abuse, you should learn to better create and use estimates”. This, too, is simplistic, but with some merit.
Note, however, that there is a pleasing simplicity to “let’s not do estimates” and a world of complexity and learning behind “better create and use estimates”. And not all that learning is in the hands of the developers: the managers in our abuse stories are using estimates poorly. We’d have to get them to change, while meanwhile we are learning the deep techniques of better estimation. That’s not easy, especially if you’re trying to get the damn thing done by June when it’s a September job at best.
The “no estimates” idea is simplistic but simple, while “better estimates” is also simplistic but quite complex, wearing a more “businesslike” outfit.
Let’s touch on the notion of waste: specifically work that isn’t directly producing product value but is instead ancillary to the work. Some waste may be truly necessary, though often it can be addressed. Sometimes it’s worth addressing, sometimes not.
Unless you are in the business of producing estimates, estimates are clearly waste. As such, they should be minimized, made as inexpensive as possible, and eliminated where possible.
That’s not to say there aren’t good reasons behind the request for estimates, and sometimes they are the only way to meet the needs reflected in those reasons. But sometimes there are better ways.
I think the strict “no estimates” people are saying that the odds are good that we can meet the needs without (bothersome) estimation, and they are using their stance to trigger exploration of the real needs and discovery of better ways to meet them.
A thing that I find odd, in some of the vocal opponents of these ideas, is that they feel the strict noE people are being recalcitrant and unbusinesslike and rude. My own view, and I know most of the strict noE people, is that they create quite successful teams. They don’t behave unproductively in the actual moment. They use their firm noE stance to trigger more productive ways of working. The strictness is a mental discipline, not a call to be a jerk. (See above re: The Names That Are Called.)
But I digress.
Before I discuss management’s requirement for estimates, need for estimates, and need for answers, let’s look for a moment at a bit of a systems viewpoint.
The problems of abuse and waste are systemic: the system needs improvement. We could argue very strongly that we should fix the system’s problems, improving how we create estimates and how we use them, and training our people, including managers, to use them well.
Ideally, this is exactly right. The world would surely be a better place if we all knew how to use estimates perfectly, minimizing their use, maximizing their benefit, and so on.
Exactly right, but a very big problem, especially from the viewpoint of a development team at the bottom of a corporate hierarchy. “You’re using these estimates as weapons, let us show you how to use estimates well” probably isn’t going to gain much traction a few levels up the hierarchy. To do this right, we’d wind up having to revise the OKRs (Objectives and Key Results) all the way up to the top.
A seeming refusal to do low-level estimates in a team, done artfully, will cause that team to figure out other ways to give management what they need.
I emphasize “seeming” and “artfully”. No one is recommending being a jerk about no estimates. The good idea at the core is to use the resistance to estimates to trigger improvement.
Let’s think about that.
Management Requires Estimates
Some writers objecting to noE ideas, and I’m over-simplifying here, seem to take the view that management requires estimates and therefore you’ve got to do them, and given that you’ve got to do them, you should learn to do them well. And if you can’t learn on your own, you’re probably lazy and/or incompetent, and you should hire an expensive consultant, or at least read his expensive book on estimation.
OK, that was too sarcastic. But there really is a strong flavor among some estimation proponents that reads like “management requires estimates, therefore we must do them”.
Well, if management really does require estimates, you really had better do them, and if they misuse them (as they likely do), you really had better help them learn new ways of using them. And, honestly, since they are waste, you should be working to minimize them, reduce their cost, and eliminate them where possible.
But you’re the development team, and your job isn’t to drain the swamp. And fixing management is way outside your scope.
But do they really need those estimates that they “require”?
Does Management Need Estimates … or Only Answers?
What is management going to do with the estimates provided? There are plenty of possibilities, some pretty sensible and some less so. I’ll start with one that is less so.
Recently a colleague reported that a “Sprint Commander” in their organization had asked a team to schedule 20 story points per Sprint rather than 16, so that they’d be like all the other teams, who were scheduling 20.
First of all, I don’t recall “Sprint Commander” being a role in Scrum. In fact, I’m pretty sure there are no “commanders” anywhere in Agile, which is rather about self-organization.
But what does Sprint Commander really want? Are they making a bar graph showing plans broken out by team, and they all go up to twenty except this one team, and he wants the graph to look good to avoid questions from management, “What’s wrong with Team C there, why are they slacking?”
This could be. But I’d also bet the Sprint Commander is showing actuals as well as estimates on that graph, and that, my friends, is pretty much guaranteed to be bad.
Why, you ask? First of all, the work of every team is different. The best way to do the work, and the best way to divide it up into slices, is different. One team may do best with ten chunks, another with twenty, and another with thirty. Forcing the work into the wrong-size boxes will mess up the work.
Second, there’s little doubt that management will view that graph and use it to compare teams, and those comparisons usually go negative: “Team B has much much more variation in actuals than Teams A and C. Variation is bad. Get those people to buckle down”.
And in fact, the pro-estimates folks would be very likely to reach first for the argument that Team B does need to learn to estimate better. But, again, the work of every team is different. There may be a lot more discovery to be done in Team B’s area, while A and C are pretty cut and dried. So A and C have done this all before, and B is breaking new ground.
Notice the slippery slope here. Management wants a graph. Sprint Commander makes it pretty, management uses it to micromanage without really understanding what’s going on.
This doesn’t always happen. It happens often enough to make some of us want to approach these estimates a bit differently.
Management Needs Certain Answers
In the Sprint Commander story, it seems to me that management has asked the wrong question, or perhaps Sprint Commander has answered the right question poorly. What might the right question be? We’d have to have a conversation with Sprint Commander and with management to be sure, but …
My guess is that they want to know things like this:
- How well are things going?
- Are we on track to meet my OKRs?
- Will we avoid contract penalties?
- Will the customer get what we promised?
- Is it time to whip the ponies harder?
Maybe they even want to know “How can we help?” Let’s think positively and assume that they, and everyone, are always trying to help.
It should be clear that while estimates and actuals might be part of answering those questions, they do not serve as the answers to any of them.
Sprint Commander might do better to show a chart of progress toward the various staged goals of the effort, maybe with lines indicating good, OK, and trouble. Possibly, though we should be careful with this, he should show the individual teams separately. The positive side of that would be that a team that needs help might get it. The negative side is that those ponies might just get whipped harder.
The trick, and it’s not always easy, is to give management information that enables them to do their outward- and upward-facing job, and to do their job of better enabling the work under them.
I believe, and the noE proponents believe, that estimates, even really good ones, are not always the right information. I’d argue that they are rarely the right information.
Just Count Small Stories
We need to touch on a couple of ideas here, just to kind of keep our minds open about providing management information other than what they originally ask for. One of these is “Just Count Small Stories”.
By the nature of Agile methods, work is divided into backlog items, often called “stories”, one or more of which is pulled by the team, to be completed in a fixed time interval called a Sprint or Iteration. (An option exists where stories are just pulled one after the other, with no particular “must be done by Friday” flavor. This is arguably better, but today the bulk of teams are doing the Sprint-based approach, which is good enough.)
The point is that the time box forces the work to be sliced into smaller pieces. And size does matter: smaller is better.
If you pull just one big story, your Sprint will end with a score of 100 percent or zero. Zero is very bad.
If you were to pull ten stories, odds are good that you’ll finish with at least 80 or 90 percent, and that’s better than zero. If you let yourselves pull extra work if you have time, you might even exceed 100! Furthermore, splitting stories to smaller sizes produces better understanding, as you have to drive out a lot of the hand-waving vagueness, making things more specific.
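The arithmetic behind the one-big-story versus ten-small-stories point is simple enough to write down. The 90% figure here is my own made-up assumption, purely for illustration: suppose any given chunk of work, big or small, has a 90% chance of finishing within the Sprint, independently.

```python
# Made-up assumption: each chunk of work finishes on time with probability 0.9.
p = 0.9

# One big story: the Sprint scores 100 or zero, nothing in between.
chance_of_zero_one_story = 1 - p            # 0.1: one Sprint in ten scores zero

# Ten small stories: expected completion is still 90% of the work,
# but scoring zero requires missing all ten at once.
expected_fraction = p                        # 0.9
chance_of_zero_ten_stories = (1 - p) ** 10   # about 1e-10: essentially never

print(chance_of_zero_one_story, expected_fraction, chance_of_zero_ten_stories)
```

Real stories aren’t independent, of course, so treat this as a sketch of the shape of the argument: splitting doesn’t change the expected amount done, but it nearly eliminates the catastrophic all-or-nothing zero.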
Even better, quite often when you split a big story, you find some parts of it that you do need right away, but other parts that you can defer until later, perhaps even as an after-release enhancement.
So small stories are good. But there’s another surprising benefit.
As story size gets smaller, the size becomes more consistent. If you never take on anything that seems longer than a day’s work, you’ll soon hit a pretty decent average of 3/4 of a day per story or something like that.
At that point, and people have tried this, if you graph the story points, and the count of stories completed, the graphs look the same.
To produce your progress graph, you can estimate and collect actuals and graph that, or stop estimating and just count the stories. Depending how long you’re spending estimating, you might even get more time for development.
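The “graphs look the same” claim can be illustrated with a toy simulation. The story sizes here are invented for the illustration: once everything has been split to “a day or less”, sizes cluster tightly, say 1, 2, or 3 points, and then cumulative story count and cumulative points trace nearly the same progress line.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Invented data: 100 small stories whose sizes cluster tightly at 1-3 points.
sizes = [random.choice([1, 2, 3]) for _ in range(100)]
total = sum(sizes)

# At each checkpoint of ten finished stories, compare two progress measures:
# fraction of stories done vs. fraction of points done.
diffs = []
for n in range(10, 101, 10):
    by_count = n / 100
    by_points = sum(sizes[:n]) / total
    diffs.append(abs(by_count - by_points))

print(round(max(diffs), 3))  # the two measures never drift far apart
```

With wildly varying sizes the two curves would diverge; it’s the act of splitting everything small and similar that makes the cheap count a stand-in for the estimated points.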
Steer the Effort
Suppose we find that counting small stories is part of giving management the information they need. Remember that as we split stories smaller, we often come up with slices that can be deferred.
This is the sweet spot of Agile Software Development!
It seems that there’s always too much to do, and too little time to do it.
Thinking my darker thoughts, I tend to expect the kinds of harmful “negotiation” and pressure discussed above, which will always compress the time until it’s at or below the minimum to do the job.
Thinking more positively, I’m also certain that even with the best understanding at the beginning of the effort, we’re going to learn a lot about what we need. We’ll learn things about how to build what we need. We’ll discover new requirements, and so on.
Perversely, it seems that we usually discover that more work is needed, and rarely discover that we need less than we anticipated. I have no explanation for that but I know which way I’d bet.
Every Feature Can Go Big or Go Small
Here we are, faced with more work than we can do. It doesn’t really matter whether it’s newly discovered needs, or poor estimates, or excess pressure, or loss of key personnel. Whatever. We have too much to do.
If we are to make the date – and remember, heads will roll if we don’t – we have only one viable option: drop some of the planned work.
Adding staff to a late project makes it later. Forcing overtime, or other forms of whipping the ponies harder, just produces more friction. You might get away with an all-hands weekend, though I doubt it. You will not prosper from a required 60-hour, six- or seven-day work week.
But you’re doing small stories. And every time you split stories, there’s a good chance you’ll split off a few that can be deferred. Deferred stories mean your delivery date improves.
When Agile teams, with their Product Owners, get in the habit of splitting stories and deferring the more frilly bits, they begin to get creative. Often some big story appears, and after discussion, the team realizes that this seemingly big deal can be resolved with something very simple. Sometimes weeks or months can be saved by learning how to “go small” on features until all the product’s functionality is basically covered, and then go back and “go big” on areas where they’ll most help with user or customer satisfaction.
And this really is the sweet spot of Agile.
What does this have to do with “no estimates”?
Presumably reasonably enlightened management would ask the right questions, and reasonably enlightened Sprint Commanders (what is that, anyway???) would answer them with the right information, and reasonably enlightened teams would be working with their Product Owners to consider the real capabilities needed and to slice them down to the necessary minimum, then broaden them in priority order, to be ready to ship the best possible product by the deadline.
Presumably. And reasonably enlightened. Big notions.
Used with a modicum of skill, in some team-level situations, it makes a kind of sense to think of how to replace estimates with other practices and other information. Changing the information flow of the organization changes how it works. It can move the local area, this team and its partner teams and its local managers, in a better direction. As those managers and teams move to a better space, they can influence those around them, and better information moves outward and upward. And since the product is being better steered, we get better on time performance and deliver a better product.
Could “no estimates” have a better name? Maybe, maybe not. But whatever the name is, a blind acceptance of the status quo is always problematical, and in the area of estimates at or near the team level, it’s always worth taking a look.
Do these ideas scale? Do they apply at the corporate level, the state level, the national level? I’m not here to say.
I am here to say that one might do well not to get hung up on the name, instead looking at whatever estimation one does (or recommends) and consider whether there are better forms of information to ask for, and better ways of providing it.