Yesterday, I tweeted this thread:
We might like to imagine that there’s some perfect mix of design quality, code quality, and test quality, to build a perfect product. But that’s not the case. There’s room for all those elements to mush around:
We can even compensate for imperfect design with additional coding, and for imperfect design and code with testing. (Which is itself imperfect.)
The “technical debt” conundrum is whether we can judiciously use this flexibility to deliver more, sooner.
Now the picture above already begins to show the problem. If we do less work on quality, the features won’t be quite so perfect, because design, code, and test quality are part of perfecting a feature. But can’t we make it up later?
Well, first of all, later’s never coming. Whoever is pushing us for more now isn’t going to turn all fairy godmother tomorrow and slack off. We likely won’t get the chance to clean it up.
Furthermore, it doesn’t work that way anyway. We can’t just go back and jam more design, code, and test quality into that old code, even if we had time to do it. It goes like this: not a neat insertion, but a hard, messy job.
So, our dreams of judiciously scaling back design, code, and test quality to go fast now, and making it up later … well, dreams.
Yes, we can go faster now, for a while, by skimping. But we’ll probably never make it up. Our product and progress will suffer forever.
Is it possible to skimp now on design, code, and test quality, and go faster for a bit? Maybe. Will we ever pay off the debt? My money and experience say that’s a bad bet.
Now go do as you see fit.
Further comments …
I was present at a time when Ward Cunningham described his notion of “technical debt”. He was, if I understood him, describing the common case that as we build a system, we learn, and sometimes our learning doesn’t go into the code. Sometimes it simply cannot go in, yet, because we don’t have the learning. Then, later, we get an insight, and the code improves as the new insight gets put in. Ward describes this with a nice gesture of the code coming together. Here’s a nice video of Ward talking about this and other metaphors.
Like so many terms, the term “technical debt” has been warped and modified as time has gone on. It is often used to describe an idea something like this:
Time to market with sufficient features is important to us. We can get more features done, well enough, if we defer some of the necessary design and architecture until later. Then, after we’ve met this market demand, we can fill in that work when the pressure to deliver is less.
My view is that this notion is more likely to be wrong than right. I believe that there is little room for deferring necessary code, design, and test quality, and that most individuals and teams do not have, and cannot have, enough information to know whether they will go faster or not. In fact, I believe that they will usually not go faster.
Be aware that there is a kind of survival bias that can come into play here. Most of us have taken a short-cut at one time or another, and gotten away with it. In the “technical debt” realm, this translates into some grizzled veteran describing the time when, in order to get the thing out, they consciously and with due consideration, deferred some element of quality and got the product out and sure enough it was successful, and therefore, at least sometimes, “technical debt” is OK, at least if you are sufficiently deliberate and grizzled.
Yeah … no. What was just described was not an experiment showing that you can successfully defer design, code, or test quality for a while and catch up later. What was described was “we did this thing and got away with it”. There’s no way to know whether things would have been worse had we not taken the short-cut. No way to know whether we were faster or slower. We had someone hold our beer, and this time we didn’t blow our fingers off.
I’m referring here to three components of software development, calling them design, code, and testing, and how they work together in building a system. Yes, there are more components, we could elaborate, you’d like to make this critical point about testing or architecture or your favorite interjection, but for now, we’ll just divide everything into the three: design, code, test.
Each system, or system component, or feature, needs some amount of each of the three to be acceptable. And yes, the amounts are fuzzy.
- With a bit better design, the code might be smaller;
- If the design is a bit skimpy, we can compensate by fiddling with the code;
- If we dial back on what’s “acceptable”, we can do a bit less testing;
- If the code is kind of crummy, we can test a bit harder and remove enough defects;
- … and so on.
There are tradeoffs. No doubt about it. Unfortunately, most of their costs show up only after the fact if we proceed in the usual “waterfall” style of design, then code, then test.
If the design is a bit rushed, we’ll run into problems in the code, and those problems will take up time and space. If we rush the code, we’ll inject more defects, and changes down the line will be more difficult. If we rush the testing, as we often do, there will be more defects and unhappy users.
And there is that tempting little bit in the preceding paragraph: “changes down the line will be more difficult”. The “technical debt” bet is that we can skimp a bit on the design and the code’s tidiness, and it will let us get more done now, at the cost of a slowdown that hits us only after the important deadline. Then we’ll calm down, fix it up, and everything will be copacetic.
Besides, no work of man or woman is perfect, so no program or feature ever gets just the right amount of design and code and test, and we usually get away with it because we’re pretty good at what we do, pretty good at compensating when things go wrong, and pretty quick to fix the bugs. So why not use the fact that we’re pretty good, and dial back the design-code quality focus when we’re in a hurry, and dial it back up when things cool down?
Let’s pretend that whoever asked “why not” was really asking. Here are some reasons why not:
- Going faster creates more defects, and defects don’t go where we want. Some of them go into bad places;
- Skimping on design slows us down, not just after the deadline, but tomorrow;
- Rushing to market almost certainly means limiting testing, not just code quality. That leads to even more released defects;
- Improving system quality gets harder the more the poor quality design and code are used.
You can extend this list of reasons to be more than a little cautious about skimping on design, code, and test quality. I know you can. You know you can, as well.
The question remains: can we ever, with sufficient grizzled deliberate careful judgment, go faster by usefully skimping on design and code quality?
I’m sure the short answer is “yes, we might sometimes go faster by skimping”.
I’m also sure that the longer more accurate answer is “yes, we might sometimes go faster by skimping, but there isn’t as much room as we think, and we aren’t as smart as we think we are”.