After yesterday’s article about “technical debt” and skimping, I received some interesting feedback and found some connections worth exploring. Let’s think a bit more. Here’s a list of terms and how I’ll be using them today:
- Technical Debt - Ward Cunningham’s original notion of the natural evolution of understanding, and cyclically putting understanding into the code;
- “Technical Debt” (in quotes) - The popular notion of somehow making well-judged decisions about shortcuts to save time, with the idea of catching up later; a fancy term for a generally poor idea;
- Skimping - The common practice of not putting enough time, effort, and thought into one or more aspects of quality, specifically design, code, or testing; a direct word for a bad idea;
- YAGNI - The XP notion, “You’re Not Gonna Need It”, that encourages us to solve today’s problem today, not tomorrow’s, avoiding over-design and over-building;
- Iterative and Incremental Development - The fundamental Agile development approach of building a product over time, keeping it always in condition to be used if someone wanted to use it.
Let’s say a few words about these:
Martin Fowler pointed to this thread from Graham Lea on Twitter, which is a good example of YAGNI, though it doesn’t use the term.
The term itself was created on the first Extreme Programming project, based on something Kent Beck used to say when someone would start proposing some elaborate solution to today’s simple problem, supporting their idea because “We’re gonna need it some day”. Kent would say “You’re not gonna need it”, not really meaning that we wouldn’t need it, but just to draw our attention away from the future and back to today. The notion was named YAGNI for short.
YAGNI advises us always to build the simplest solution to today’s problems – of course, the simplest one that will actually work. Graham Lea’s example is a perfect one. When the list sizes are small, the nested loop makes sense, and YAGNI would say to use it. In Graham’s case, it seems they even thought about the possibility of larger lists and decided that a more robust algorithm wasn’t needed.
Personally, I like to push YAGNI to the limit, because I like to explore what happens when my early designs get in trouble: I can almost always refactor readily when it’s needed, I can almost always plug a better algorithm in without disruption, and I’ll learn something when those two things don’t happen. Of course my software is all low risk, so you might wish to be a little more conservative. Be that as it may, the YAGNI notion is to build for today, not to draw in worries about tomorrow.
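Graham’s actual code isn’t quoted here, but the shape of the trade-off might look something like this sketch (names and data are hypothetical, not from his thread):

```python
# Hypothetical sketch of the YAGNI trade-off: finding the items two lists share.
# While both lists are small, the simple nested loop is perfectly adequate.
def common_items_simple(xs, ys):
    # O(n * m): a nested loop in effect, since `x in ys` scans the list.
    return [x for x in xs if x in ys]

# The "more robust" version we might need someday, but don't need now.
def common_items_robust(xs, ys):
    # O(n + m): builds a set for constant-time membership checks.
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]
```

Both return the same answer; if the loop lives behind one function name, swapping the simple body for the robust one later is a change in exactly one place.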
But that can’t work???
There’s an odd contradiction here. People argue for “Technical Debt” and often the same people argue against YAGNI. Let’s look at YAGNI first, and explore why it’s a good principle.
The fear that people have about applying YAGNI is that if they follow its advice, they’ll put in something simple, then tomorrow’s problems will come along, and the simple thing will need to be replaced … and it will be difficult. Often they say things like “We’ll have to change all the places where we use it”.
If you’re following the XP style of development, there’s not going to be much of a problem. Why not? Simple design:
Kent Beck’s rules of simple design are that the design is “simple” when the code:
- Runs all the tests;
- Communicates all our design ideas;
- Contains no duplication; and
- Minimizes size.
We’ve explored those rules elsewhere. For now, note that if we apply YAGNI in the presence of these rules, there won’t be “all the places” to change. There will be one. Why? Whatever this YAGNIfied idea is, it can only be implemented in one place, because if it were in more places, the duplication rule would make us remove all but one of them. So our double-loop implementation occurs once. We improve it once, if it needs it. No big deal at all.
Iterative, Incremental Development
Agile software development, in any style, recognizes and takes advantage of the fact that software doesn’t come into being on the last day of the project; it is built up bit by bit over the entire course of the effort. An Agile product, done right, is built up feature by feature, behavior by behavior, and is always kept in a ready-to-ship condition.
If what we’ve done so far would be useful to someone,
let’s ship it and let them use it.
This approach, unfortunately, seems almost to be the exception in so-called “Agile” efforts today. Unfortunate because when developers have a product visible and ready to go, it changes the whole conversation:
- Instead of measuring “velocity”, everyone can see the actual product;
- Instead of pushing for more, more, more, people see that with just these three more things, some people could start getting benefit;
- Instead of fantasies about what we should do, we get more concrete ideas;
- Instead of applying pressure out of fear, stakeholders gain confidence that the team is doing the right thing and responding to their input.
It’s not trivial:
There are some things that we absolutely must do to make iterative, incremental development work:
- Instead of doing the bulk of the system design at the beginning, we must do it as we build the product;
- Instead of putting in the whole design at the beginning, since we don’t even have it, we must learn to evolve the design;
- To evolve the design, we need refactoring, the process of improving the design of existing code;
- Since we are evolving the design as we go, we need comprehensive and rapid tests at our disposal, because otherwise we’ll slow down or start shipping broken code.
The XP practices of Programmer and Customer Tests, Test-Driven Development, and Refactoring are today’s best-known approaches to building a product’s design, code, and tests iteratively and incrementally.1
YAGNI is safe, if …
YAGNI, on its own, might not be safe. We might in fact put some crappy early algorithm into the system and then copy-paste it all over, so that when it goes bad on us, we have to do a massive effort to fix it. If that’s how we’re going to code, YAGNI is more risky and might not be such a good idea.
But when we are keeping the design quality, code quality, and test quality high, YAGNI is quite safe, because each future risk is isolated by the rules of simple design, and it will be easy to find and fix if the time comes.
Is “Technical Debt” == YAGNI?
The term Technical Debt, in the Ward Cunningham sense, is almost never seen in the wild. The term I put in quotes here, “Technical Debt”, has come to mean somehow judiciously deciding to short-cut some aspect of “goodness”, building features somehow faster, with the intention to make up whatever’s lacking later.
It’s possible that proponents of that kind of “Technical Debt” are referring to the kind of thing that Graham Lea’s tweets refer to: choosing a simple but correct algorithm over a more complex one that might work better at some future time, where the simple one is good enough for now.
Well, that may be a good idea, but it’s not Technical Debt. It’s YAGNI. And if these people were propounding and supporting YAGNI, I could maybe get behind it. I’d still have some concerns.
Unfortunately, based on the code I’ve seen from so-called “Agile” projects and teams, legitimate application of YAGNI is not the only thing they mean by “Technical Debt”.
Suppose you’re the kind of programmer who likes to program even fairly complex algorithms “in line”. You see that you have to loop over all these things, then there are three cases so you write some ifs or switches, then inside the brackets for each case, you code the inner loop needed, and inside that you make the database calls you need, and your framework handles the waiting, so right there you then parse the values that come back from the database, with a few cases, so that you embed the necessary if statements, and you just keep going until you’re done.
Now as it happens, I’ve been programming for over half a century, and so I know how to do that in-line, write-it-out kind of coding, and I’m pretty good at it. Sometimes the algorithm seems clear to me and I just write it down like that. That can be OK for getting something done.2
As soon as that loop-nested if-ridden case-bearing database-reading multi-page batch of code works, we could in principle ship it and move on to the next ticket.
Now, if we had more time, we would of course refactor that code, breaking out the individual sections into functions or methods, improving the names, removing the duplication (you did notice, didn’t you, that several bits inside the if statements were very similar), making the code more readable. We all agree that in the future, we’ll wish it was readable.
But right now, it works, we can ship it, and clean it up later. We can’t call it Skimping or YAGNI, that doesn’t sound professional. Let’s call it “Technical Debt”, call it a rational decision, and move on to the next ticket.
Technical Debt, as Ward defined it, refers to a cycle where our understanding grows so that one day in the future we see a better way and put it in. The above isn’t that: we already have the understanding, and we didn’t put it in.
YAGNI refers to using a simpler but correct solution that may not bear weight at some future time, and YAGNI requires that we install that solution with the same professionalism we will later use for the more robust one, if and when we need it.
Sometimes, we want to call writing spaghetti code “Technical Debt”, and we want to believe that we’re making a good decision to get more done right now, and we want to believe that we’ll clean it up later and everything will be just fine. That puts a nicer face on the facts than they deserve. Dirty facts deserve dirty names.
What we did was Skimping. We didn’t express our design ideas in the code. Our code wasn’t clean, it wasn’t without duplication. And, odds are, it wasn’t all that well tested, either.
Could that still ever be a good idea?
Can Skimping ever work?
Suppose we have a product with a number of not very integrated individual parts, and there’s a trade show coming, and we think our product will be better accepted if it has eight features rather than just five. We see by our progress that if we keep the design quality, code quality, and test quality high, we’ll get five done.
So, since we code in that linear fashion described above, we decide that when a feature seems to be working, we’ll commit it and move on to the next. By so doing, we’ll get more features done, we’ll get more success at the trade show, and then we’ll have time to clean it up later.
We make a “rational decision” to Skimp. And, because that sounds kind of bad, we say “We made a rational decision to take on some ‘Technical Debt’”.
But could this work? Yes, it absolutely could. What are the odds?
If the features really are independent, then whatever damage we do to one by not cleaning it up will be isolated to that one. To the extent that the features aren’t independent, we’re going to have trouble.
Turns out a few of those “independent” features use the same database table. They all need similar parsing of requests and of the returned data. They all need similar error handling. Well, that’s no problem, we put all that stuff in the first feature, we’ll just … hmm … we’ll just copy and paste it judiciously into our new features.
That means that when that code needs improvement, as it inevitably will, we’ll have to do it two, three, five, twenty times. Remember that fear about YAGNI that we’ll have to make changes all over? We just did it to ourselves. Had we factored out those capabilities, we could have referenced them, used them, but kept all the logic (and all the defects) in one place.3
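Here’s a hypothetical sketch of what “factored out those capabilities” could look like: the shared parsing lives in one helper, and each feature references it instead of carrying a pasted copy.

```python
# Hypothetical shared capability, factored out once: parsing a "name = value"
# response line. A defect fixed here is fixed for every feature at once.
def parse_response(raw):
    try:
        name, value = raw.split("=", 1)
        return {name.strip(): value.strip()}
    except ValueError:
        # Malformed input: no "=" present.
        return {}

def feature_one(raw_rows):
    # References the shared helper rather than a diverging pasted copy.
    return [parse_response(r) for r in raw_rows]

def feature_two(raw_rows):
    # A second "independent" feature with similar needs, using the same helper.
    return {k: v for r in raw_rows for k, v in parse_response(r).items()}
```

With twenty features built this way, improving the parsing is still one change; with twenty pasted copies, it’s twenty changes and a snipe hunt to find them all.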
But the copy/paste duplication isn’t even the whole problem. When you’re working on the next feature that has similar needs to your first one, you have to go on a snipe hunt every time to find how the first one does that bit. That’s time-consuming, and the less well-factored the code is, the worse it gets. The third feature, you have to look at the first two to try to find what you want. The fourth and fifth? Forget it.
Or the search becomes too burdensome so rather than cut and paste, you code that capability over again … or another member of the team does. Now you’ve got the same capability coded in different ways, by different people, in different parts of the system.
And you think you’re going to come back and clean that up? Can I get some money down on the Don’t line? Just like in craps, it’s a pretty good bet.
YAGNI + clean code? Yes. Skimping? No.
In the presence of solid XP-style practice, or solid “craftsman” practice if you swing that way, building the simplest solution that can work can be a good bet. It will pay off if you never have to improve the code, and if you do have to change it, the changes are easy and you’re smarter because time has passed.
But Skimping on the design quality in the code, the code’s quality itself, and the quality of your tests? In my view that almost never pays off.
My advice is “Don’t Skimp”. If you do, keep honest track of how it works out. Maybe even skimp on some features and not on others, to give yourselves a comparison. See where the defects show up, where the changes get harder to make, where new programmers fear to tread.
Chet and I can tell, literally tomorrow, when yesterday’s code wasn’t good enough. And we’re not magical4, we are just paying attention. If you pay attention, I think you’ll likely discover that what I’m saying makes sense.
You might discover otherwise. You might discover that untidy code really can pay off. That would be interesting, and I’d like to come to understand how you decided when, and when not, to Skimp on quality.
Or, of course, you can proceed however you like. It’s your team, your code, your happiness. Either way, I wish you good fortune.
These are not static practices from the end of the last century: they have evolved as we learned ways to do them better and built tools to help us do them better. But the headings, so far, have remained the same. ↩
I have found, through careful observation of myself, that I do even better with TDD, programming by intention, and the more modern XP/Agile style of development. But I can do OK the old way. ↩
Chet told me today of a project he knows about where they had a stored procedure that did some complex thing to the database. Over 2000 lines, I believe he said. Turns out they needed that same complex capability in code, not just in a stored procedure. So they cut and pasted the stored procedure, converted it chunk by chunk to the programming language, and made it work. Of course changes came along, and the language code diverged from the stored procedure. Now the two things are different, and no one knows which one is right. Couldn’t happen? Ha. As soon as you copy/paste a few lines of code, it’s happening to you. ↩
Well, we are actually magical, but fact is, in this case, it’s just that the magic words “clean code”, “TDD”, and “refactoring”, actually work for everyone. They’ll work for you, too. ↩