Some people think that there is a necessary trade-off between internal code quality and getting things done rapidly. I believe they are quite mistaken.
On the agile-testing list, Steven Gordon wrote:
The reality is that meeting the actual needs of the market beats quality (which is why we observe so many low quality systems surviving in the wild). Get over it. Just focus on how to attain the most quality we can while still delivering fast enough to discover, evolve and implement the right requirements faster than our competitors.
First of all, whenever anyone says anything starting with “The reality is,” the one thing you know for sure is that they are not describing Reality.
“I reject your reality and substitute my own.” -- Adam Savage
I reject Steven’s reality and substitute my own, and it goes like this.
While I would agree with John Ruskin: “There is no product that some man can not make a little worse and sell a little cheaper; those who buy on price alone are this man’s lawful prey,” I believe that the EITHER/OR thinking above is troublesome in three ways.
First, while I am no perfectionist, and certainly not anywhere near perfect, I do hold quality in high regard. I think that is the natural state for many of us, the view of what we may call the “craftsman”. I think it’s a valid view and one that leads to joy and beauty.
Second, low quality is a dangerous business strategy for a company, and a dangerous business strategy for a software developer. Product buyers, and employers, want the best they can get for their money, not the worst.
Third, and a far more practical concern, the EITHER/OR position taken above assumes – incorrectly in my opinion – that there is a tradeoff between quality and time to delivery. We see rather clearly in manufacturing that high precision reduces costs while getting things done faster and better. It may well be the same in software: high precision in building software gets it done faster as well. Here’s why:
The fantasy of cutting quality to deliver faster lives on in part because we do not measure that long period at the end where we are taking out the critical bugs that we put in. We assume, wrongly, that that period is necessary, part of the universe. The fact is that if we hadn’t put the critical bugs in, or had discovered them immediately upon doing so and fixed them, that “stabilization” period would be truncated or eliminated.
Low quality has an immediate and visible negative impact on delivering “fast enough”. That negative impact is the stabilization period. That is lost time due directly to low quality. It also has immediate effects that are harder to quantify:
- Working at a low quality level injects more defects. Some of these are so egregious that they have to be fixed immediately. This slows down progress as it is always easier to prevent a defect than to find it and fix it.
- Working in defect-ridden code makes building new features take much longer. We use some existing code that is thought to work, and when our new feature does not work, we assume that there is a defect in our code. Too often, the defect is elsewhere. This slows down progress, often substantially.
- Working in low-quality code makes building features take much longer. It is harder to find the things we need to use or change, and they are harder to separate out for use and modification than they would be in clean code.
Low quality therefore slows down progress directly: through increased fixing, through delays in making things work, through delays in figuring out how to make things work, and finally through the long delay of the so-called stabilization period.
As far as I can tell, the sole impact of reducing quality is to make things take longer. I can identify no aspect of low quality that does not have an associated side effect of increased defect insertion and increased confusion. Yes, you can argue that if you don’t add too many defects and if you don’t make the code too grubby, you might come out ahead. But that is my point: higher quality makes you go faster; lower quality makes you go slower.
Now what of high quality? Is it possible to spend all our time polishing something and never release it? Certainly it is. Most of us have done it at some point in our lives. This possibility, however, is not a justification for lowering quality, especially not in most of the projects we actually encounter. If your project has a bug list with an upward slope, if you are planning a “stabilization period” that really means you’ll mostly be fixing bugs, then your quality is not too high and you will not go faster by lowering it. You will go slower.
To a first approximation, then, looking at a random project, if we want to speed up, we must increase quality.
Not Everyone Agrees (too bad for them)
I guess we can assume that Steven doesn’t agree with what I’m saying here. Neither does Joe Rainsberger in his posting on The Quality / Speed Barrier. If I understand him, Joe is saying that if your code sucks sufficiently, trying to make it better will slow you down, and that the only way to maintain or increase speed is to suck more. It seems clear to me that on the face of it, this plan is not sustainable, since sufficiently bad code, which is not at all hard to attain, will drag you to a halt.
Now there is of course an element of truth to what Joe is saying and I imagine that Steven would agree as well. Suppose you are in the middle of putting a feature into some code and the code needs improvement. Suppose you have noticed a variable name “joe” that should really be given a meaningful name like “personBeingHired”. It will clearly take more than zero time to do that renaming. Therefore, trying to improve the code will slow you down.
Well, maybe. But if you’ll walk just a little faster to the coffee room next time, you can make that time up and it’ll do wonders for your heart as well. The slowdown from simple improvements can be quite trivial, quite capable of being absorbed in the natural flow of things, without any real impact on delivery at all. And that’s this time.
Next time you pass through this code, all that thinking you had to do to figure out what the hell “joe” meant can be saved. “personBeingHired” makes sense. Your understanding will come faster, and the feature will go in sooner.
This effect might even hit you on the first time through. Is the code a bit intricate? Then improving the variable names can help you understand it this time, and help you avoid errors as you make your changes this time.
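To make the rename concrete, here is a minimal sketch in Python. The hiring-check function and the `Candidate` class are my invention for illustration; the original post only names the variables “joe” and “personBeingHired”:

```python
# A hypothetical hiring check, shown before and after Rename Variable.

class Candidate:
    def __init__(self, qualifications):
        self.qualifications = qualifications

# Before the rename: every reader has to reconstruct what "joe" means.
def ready_to_hire_before(joe, required_level):
    return joe.qualifications >= required_level

# After the rename: the name itself carries the meaning,
# so the next reader (or you, next week) does no detective work.
def ready_to_hire_after(person_being_hired, required_level):
    return person_being_hired.qualifications >= required_level
```

The behavior is identical; only the reading cost changes, which is the whole point of the argument above.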
This Does Require Skill and Judgment
You do have to know how to come up with a decent variable name. You do need to know how to change the variable name throughout all the code that is impacted. You do have to know how to perform a few more simple refactorings, such as Extract Method. Most of all, you have to know to do a little bit, and not to do too much.
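Extract Method is that same kind of small, judgment-guided step. A hedged sketch, again in Python, with an invoice example I made up for the purpose:

```python
# Hypothetical invoice code, before and after Extract Method.
# items is a list of (name, price) tuples.

def print_invoice_before(items):
    # Before: the total calculation is tangled into the printing.
    total = 0
    for name, price in items:
        total += price
    return f"Invoice total: {total}"

# After Extract Method: the calculation gets a name of its own,
# so it can be read, tested, and reused on its own.
def invoice_total(items):
    return sum(price for _, price in items)

def print_invoice_after(items):
    return f"Invoice total: {invoice_total(items)}"
```

Nothing about the output changes; the extraction just leaves the code a little cleaner than you found it, which is the towel-in-the-restroom move described below.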
Have you ever been in a company rest room and noticed water on the sink or a couple of towels on the floor? Did you find it just a little off-putting? Well, so do other people who enter that room, all day long. At your high pay rate, it would not make sense to pick the lock on the supplies door and scrub down the room. It might not make sense to wait until you are in a rage and then go hunt down a janitor or the head of HR.
But it wouldn’t hurt you to pick up a fallen towel with yours and put it into the waste can, and it wouldn’t hurt you to use your towel to wipe up some of the splashes. It’ll make you feel better, and it’ll make everyone who comes through the room later feel better.
Do that with your code. It makes things go more smoothly and it makes things go faster. And that, my friends, is reality.