Some final thoughts on the Gilded Rose exercise. Well, final for now …
This was, after all, an exercise. As such, it makes sense to look back and see what I’ve learned, discovered, come to believe, suspected, stumbled over, or generally picked up along the path.
Breaking out the wrapper lets us add products safely
At the top of my list is this: once I had implemented the new feature, Conjured items, by building a new wrapper class and using it, it would almost certainly have been time to stop if this were a real situation. Further refactoring offers at most speculative payoff, except for learning. In a real product situation, it probably serves us better to refactor “just enough” rather than to undertake something larger. I’ve discussed this, with some charming graphics, here: Refactoring - Not on the backlog
So first: this was a learning exercise and in actual gameplay I’d advise stopping as soon as we had a clean version that used a separate wrapper class for “conjured” vs all others. From that position we could add new shop items readily and we could update older ones if requested, by creating new wrappers. That should hold us for most purposes.
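The wrapper idea can be sketched roughly like this; the class and method names here are my illustration, not necessarily the ones in the exercise code:

```ruby
# Hypothetical sketch of the wrapper idea: a small class per product type
# that knows how to update its own item. Names are illustrative.
Item = Struct.new(:name, :sell_in, :quality)

class Conjured
  def initialize(item)
    @item = item
  end

  # Conjured items degrade in quality twice as fast as normal ones.
  def update
    @item.sell_in -= 1
    drop = @item.sell_in < 0 ? 4 : 2
    @item.quality = [@item.quality - drop, 0].max
  end
end

item = Item.new("Conjured Mana Cake", 3, 6)
Conjured.new(item).update
# item.quality is now 4, item.sell_in is 2
```

Adding another shop item means adding another little class like this one, without touching the old code at all.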
Some changes would make us look inside
When Allison discovers that all the items are being checked for with overly specific names, we’ll be faced with editing inside the Hideous Mess. Even so, the changes might be easy enough to make, by changing the string equality compares to calls to our own match function.
You may wonder why I didn’t use Ruby’s regex. Two reasons. First, they quickly get weird and I was on a drive to make things less weird not more. Second, once I have the string matching inside my own method, I can extend that method as needed.
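What I mean by a match function is something like this sketch; the name `matches?` is my invention:

```ruby
# Sketch: one place that decides whether an item name matches a product
# word, instead of exact string equality scattered around the code.
def matches?(name, word)
  name.downcase.include?(word.downcase)
end

matches?("Backstage passes to a TAFKAL80ETC concert", "Backstage")  # true
matches?("Aged Brie", "brie")                                       # true
matches?("Conjured Mana Cake", "brie")                              # false
```

If matching ever needs to get smarter, there is exactly one method to change.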
Read by Refactoring surprised me
My next “big learning” was the experience inspired by hearing about Arlo Belshee’s “Read by Refactoring” idea. I’ve not had a chance to read or listen to Arlo on the subject, but the phrase was enough to inspire me to scan the code, identify ideas as I went, and refactor to make them more explicit. In the case of Gilded Rose, that took the form of a few extractions of elementary ideas, and a number of reversals of the sense of if statements, which seemed to do a lot of “not-equaling”.
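Reversing the sense of a condition looks like this in miniature; this is an invented example, not the actual Gilded Rose code:

```ruby
# The negated comparison makes the reader negate in their head; flipping
# it puts the positive, special case first. Behavior is identical.
Item = Struct.new(:name, :quality)

def update_negated(item)          # before: "not-equaling"
  if item.name != "Aged Brie"
    item.quality -= 1
  else
    item.quality += 1
  end
end

def update_reversed(item)         # after: sense reversed, same behavior
  if item.name == "Aged Brie"
    item.quality += 1
  else
    item.quality -= 1
  end
end
```

A tiny change, but a dozen of these in a row make a surprising difference in how the code reads.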
Those tiny improvements ultimately added up to a very clean procedural implementation of the program, which surprised me. I felt sure that either a series of wrapper objects embodying the strategy for each update type would be ideal, or perhaps some more clever pluggable behavior implementation using lambdas or the like.
That turned out not to be the case. At the scale of Gilded Rose, the case statement really works out nicely:
```ruby
def update
  return if sulfuras?
  decrement_sell_in
  case
  when brie? then update_brie
  when ticket? then update_ticket
  when conjured? then update_conjured
  else update_normal
  end
end
```
In a real, larger application, I think this procedural approach would likely break down. I suspect that a series of wrappers, not aimed at specific products but at product types, would wind up pleasing me more. YMMV. I’m very inclined toward lots of little objects and methods. Some individuals and teams are not so inclined. While my design sense leans strongly toward having many tiny things, I think the team gets to choose its own coding standard. Just one, though.
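A sketch of what I mean by wrappers aimed at product types rather than specific products; every name here is illustrative, not from the exercise:

```ruby
# A lookup from a name predicate to a wrapper class, so adding a product
# type means adding one entry rather than editing a case statement.
Item = Struct.new(:name, :quality)

class Normal
  def initialize(item)
    @item = item
  end

  def update
    @item.quality = [@item.quality - 1, 0].max
  end
end

class Appreciating
  def initialize(item)
    @item = item
  end

  def update
    @item.quality = [@item.quality + 1, 50].min
  end
end

WRAPPERS = [
  [->(name) { name.include?("Brie") }, Appreciating],
  # a new product type is one more line here
]

def wrapper_for(item)
  _, klass = WRAPPERS.find { |pred, _| pred.call(item.name) }
  (klass || Normal).new(item)
end
```

Whether this beats the case statement depends on scale; at Gilded Rose size, it’s arguably more machinery than the problem deserves.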
All that aside, the Read by Refactoring exercise was quite gratifying. Every little step made things a bit better, and every little step seemed to make the next step more obvious and easier still.
I’ve always said that there are no big refactorings, just lots of little bitty refactorings. This exercise was a perfect example of that for me.
I needed more tests
I hate writing tests for existing code, and I know I’m not the only one. And there were times in this exercise where I was really uncertain whether I had broken the old code or not. More tests would have helped me. A time or two, I wrote them. More often, I did not.
I’m reminded of Norm Kerth’s “Prime Directive” of retrospectives:
Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.
In the past, I believed that was just a posture that we took on, the better to focus on systemic changes, not blaming. But some years ago something that Diana Larsen said hit me: “I’ve never seen a person be the root cause of a problem.” I thought about that for a long time and it finally sank in. It’s not just a mind-twist to look beyond blame: it’s really the case that the “person who made the mistake” isn’t the issue. It’s the whole system in which they live.
This view has become pretty foundational for me. I really believe that people are doing the best they can. That’s not to say that we can’t do better in the future: we certainly can. But in the moment, when we skip a test, or eat all the Ruffles (sorry, dear), or whatever egregiously bad or stupid thing we do … at the time, it was the best we could manage.
So I’m forgiving myself for not writing more tests, while noticing quite clearly that more tests would have given me more confidence in my changes, and would quite likely have saved me time reviewing code and thinking hard instead of relying on the certainty that the tests were comprehensive and things were almost certainly OK.
But you see … I was hot on the trail of building the new feature, or improving the code, or whatever was on the top of my mind … and in those moments, writing tests felt like a diversion, that it would slow me down, even though in fact it would have helped me go faster and with more confidence.
It’s a puzzlement. How can I change myself, and my system of work, to remember to write more and better tests?
Here’s one idea. Once I get my first micro test hookup running, I’m pretty good at building new code in the TDD fashion: just enough to make the current test run. And when refactoring on green, if I’ve built the code in TDD fashion, my tests are generally solid enough to keep me confident that I’ve not broken anything.
Maybe I could take on a new mindset during the larger-scale refactoring I did here. Maybe I could train myself to say something like “OK, I’m about to extract this method this way. If I do, what aspect of the system should not change? Let’s test that.”
In that way, I wouldn’t be testing old existing code, I’d be testing my new code, which was created by refactoring. That might feel enough like my existing test-first habits to encourage writing tests that focus on what I’m doing right now, rather than oh god I have to test all this horrible code. Next time I do one of these things, I’ll try that focus.
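The shape of that “what should not change?” test, as a plain-Ruby sketch; the method names and behavior here are my illustration:

```ruby
# A pinning test written just before extracting update_brie from the
# big update method: record the behavior that must survive the refactor.
Item = Struct.new(:name, :sell_in, :quality)

def update(item)                   # the method about to be refactored
  item.sell_in -= 1
  item.quality += 1 if item.name == "Aged Brie"
end

# Pin the behavior first, then refactor with the net in place.
item = Item.new("Aged Brie", 5, 10)
update(item)
raise "brie behavior changed!" unless item.quality == 11 && item.sell_in == 4
```

The test is about the one behavior I’m about to touch, which is a much smaller ask than testing the whole horrible file.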
As for the Ruffles, the only way is not to have them. If there is junk food in the house, I’ll eat it. I have to change the system: it’s the only way.
Commit often. Revert sometimes
My habits cause me to commit at larger intervals than I could. It would be harmless to commit every time a change goes green, but as I’m set up just now, it would distract me, because I’d have to go into the terminal: `git add -A`, `git commit`, maybe `git push`. It’s too easy not to.
Kent Beck has rigged up a scheme that he calls “test && commit || revert”, or T&C|R for short, where on every test run, if the tests run green the code is committed, and if not, it’s reverted. That sounds pretty scary, since of course when something doesn’t work we just “quickly” look at the code and fix it. I’ve not tried it, but I’m sure it would reinforce small changes repeated often, which would be a good thing.
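The decision at the heart of that loop is tiny. Here’s a Ruby sketch of it; I haven’t built this, and the commit and revert actions are injected as lambdas so the core logic is visible without touching git:

```ruby
# Sketch of the test && commit || revert decision. The actions are
# passed in, so this shows the shape of the loop, not a real tool.
def tcr(tests_pass, commit:, revert:)
  if tests_pass
    commit.call
    :committed
  else
    revert.call
    :reverted
  end
end

# In real use the lambdas would shell out, something like:
#   commit: -> { system("git add -A && git commit -m tcr") },
#   revert: -> { system("git checkout -- .") }
```

The scary part isn’t the code, of course; it’s losing your “almost working” change the moment a test goes red.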
I don’t think I’ll try to rig up T&C|R but I will try to improve my frequency of commits. I’ll also be a bit more ready to revert: there were at least a couple of times when I went into a debugging frame of mind, and reverting and doing it over almost always works better for me.
Summing up the summing up
So those are the discoveries from this exercise. To list them:
- This was a lot of fun. I didn’t mention it above, since it’s not exactly a deep learning. Or is it?
- Isolating new features helps
- Building the strategy wrapper object for conjured items opened the door to a continuous flow of new capabilities, and even updates to old ones. It was quite simple to do, and well worth doing. Stopping there would probably be wise.
- Isolating the new might not suffice
- Some features or bug fixes might have forced us to look into the old code. Some changes might be easy. Some would probably be difficult, making us want to refactor the old stuff.
- Read by Refactoring helped
- It was amazingly easy to improve the code little by little, just by reading it, recognizing what some little patch did, and then refactoring to make that more clear. Extracting methods, and simplifying code added up to a really nice outcome.
- More tests
- I absolutely do better when I have a solid net of micro tests. And I almost always have fewer than would be ideal. I’ll try to work on new habits, and I’ll at least consider whether some simple practice change would help with that.
- More commits
- I don’t think I suffered from doing fewer commits than I could have, but it does mean that each commit has several different changes in it. Doing them one at a time would provide a cleaner code base, and possibly serve to reset the mind between Change N and Change N+1.
- More reverts
- I only did one revert, when I wanted to do an additional test against an older version of the code, to be sure I hadn’t broken anything. (Did I mention “more tests”?) Once I relearned how to do that, I realized I could probably benefit from being more ready to back out and start over.
All in all this has been an enjoyable and educational little example. If you have the time, consider trying something similar.
Thanks for reading … both of you!