On Agile: Why You Won’t Fix It Later

Today’s blog post is part of our guest blogger series, and is written by Ryan Cooper.

We’ve all been there. The deadline is looming, everything is behind schedule, and you’re in a rush to finish the FooBar module. You’re puzzling over one last glitch. You know how to fix it, but the fix looks like it will take a minor redesign of the module… probably 4-5 hours of work. You just don’t have that kind of time.

Suddenly a clever idea strikes you. Hmmm… it just might work. Deep down, though, you know it’s not the right way to do it:

  • Maybe it means adding some temporal/implicit dependencies, as in the sketch just after this list. (“As long as no one starts calling foo() before initBar(), everything should keep working.”)
  • Maybe it means throwing in a magic string that will only work until January 3 next year. (“No problem, I’ll just come back to this code after the deadline. We shouldn’t be too busy then.”)
  • Maybe it means breaking the design and making the code untestable. (“Well, it would be nice to have automated tests around this, but it seems to be working. Hopefully no one makes any changes to this code before the deadline.”)
  • Maybe it means living with intermittent bugs. (“Hmmm. The system only times out 8% of the time. We need to figure out why before we go into production, but that should be good enough for testing.”)
  • Maybe it means removing one bug and introducing another one. (“Well, at least we can submit the page now. Hopefully none of the users double-clicks the submit button until I’ve had a chance to revisit the code after the deadline. I’ll fix it later.”)
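
The first of those deserves a closer look, because it’s the easiest to rationalize. Here’s a minimal sketch of what such an implicit dependency might look like; the FooBar class and the initBar()/foo() methods are hypothetical stand-ins built from the examples above, not code from a real project.

    import java.util.HashMap;
    import java.util.Map;

    public class FooBar {
        // Stays null until initBar() runs; nothing in the code enforces the ordering.
        private static Map<String, Integer> settings;

        // Must be called before foo(), but the compiler can't check that.
        public static void initBar() {
            settings = new HashMap<>();
            settings.put("timeout", 30);
        }

        public static int foo() {
            // Throws a NullPointerException for any caller who doesn't know
            // the unwritten rule; the dependency lives only in the original
            // author's head (or a comment, if the team is lucky).
            return settings.get("timeout") * 2;
        }
    }

It compiles, it works on every path you happen to exercise before the deadline, and it quietly waits for the first caller who doesn’t know the rule. No problem, you tell yourself: you’ll fix it later.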

That’s the magic word. Later. It makes a great warning signal that you may be heading down a dangerous path. When you catch yourself thinking “I’ll fix it later,” stop for a minute. You’re feeling that little twinge of guilt for a reason (even if it’s masked by the little ego boost you get from coming up with such a clever workaround). Think about the real consequences of this decision. Will you really get back to it later? What will happen if you don’t? What risks are you introducing? Ask another developer for an opinion. Ask the customer for an opinion (if you can phrase it in customer language). Think a little longer about other solutions.

There are several popular variants of “I’ll fix it later”:

– I’ll fix that bug later.
– I’ll verify with the customer that I’ve built what they actually need later.
– I’ll write unit tests later.
– I’ll remove the fragility from the unit tests later.
– I’ll make the unit tests readable later.
– I’ll make the unit tests fast later.
– I’ll integration test later.
– I’ll usability test later.
– I’ll remove that copy/paste duplication later.
– I’ll bounce my idea/design/code off another developer later.
– I’ll remove that workaround/hot fix/complete hack later.
– I’ll make the code readable/maintainable later.
– I’ll worry about performance/reliability later.

The problem is, we usually don’t get around to doing any of those things we plan to do “later.” After dealing with the consequences of “I’ll fix it later” a few too many times, my friend Dave LeBlanc coined LeBlanc’s Law:

“Later equals Never.”

Why is this? There are a few reasons that I’ve noticed:

1) When you cut corners to deliver on time, you’re giving management and your customer a false sense of how fast you can reliably deliver.

Agile teams use the term ‘velocity’ to describe the estimated amount of customer value they can deliver per iteration. If a feature you call done still has work left in it, you are effectively lying to your customer about how fast you can deliver value. Since your customer thinks you can deliver more than you really can, you will be overloaded with work again next time. You will start accumulating technical debt. There is no easy cure for technical debt (the most common cure being a complete rewrite), so prevention is the best medicine. The best way to prevent technical debt from accumulating is to establish realistic expectations about how fast you can effectively work.
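
To put rough numbers on it (these are invented purely for illustration): suppose your team reports a velocity of 30 story points per iteration, but each iteration quietly leaves behind about 5 points’ worth of deferred fixes and cleanup. Your sustainable velocity is really closer to 25, your customer keeps planning around 30, and that 5-point gap compounds every iteration as the leftover work piles up.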

2) When you skimp on automated tests, or write tests without ensuring they are readable, atomic, and easily maintained, you limit your ability to refactor effectively.

When you can’t easily refactor, it gets harder to write readable, atomic, easily-maintained unit tests. Worse, because it’s harder to evolve your design, you will face a stronger temptation to fix bugs with workarounds and hacks that will come back to bite you later. You will spend more time debugging and bug fixing, leaving less time to write tests and refactor. It’s a downward spiral that ends in reduced velocity.
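
To make “readable, atomic, and easily-maintained” a little more concrete, here’s a small JUnit 5 sketch. The ReceiptTest class and the helpers inside it are invented for illustration, not taken from a real suite.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.Locale;
    import org.junit.jupiter.api.Test;

    class ReceiptTest {
        // Hypothetical code under test, inlined so the sketch is self-contained.
        static int totalCents(int unitCents, int qty, int taxPercent) {
            return unitCents * qty * (100 + taxPercent) / 100;
        }

        static String describe(int unitCents, int qty, int taxPercent) {
            return String.format(Locale.US, "Total: $%.2f (incl. %d%% tax)",
                    totalCents(unitCents, qty, taxPercent) / 100.0, taxPercent);
        }

        // Fragile: pins the entire display string, so a harmless wording
        // tweak breaks it even though the arithmetic is still correct.
        @Test
        void pinsEntireString() {
            assertEquals("Total: $22.00 (incl. 10% tax)", describe(1000, 2, 10));
        }

        // Atomic and readable: one rule per test, so a failure points
        // straight at the behavior that actually regressed.
        @Test
        void multipliesByQuantity() {
            assertEquals(2000, totalCents(1000, 2, 0));
        }

        @Test
        void appliesTax() {
            assertEquals(2200, totalCents(1000, 2, 10));
        }
    }

When most of a suite looks like that first test, every refactoring breaks a pile of assertions that have nothing to do with the change, and deleting the tests starts to look cheaper than maintaining them. That’s the spiral.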

Agile developers often work with what they call a “definition of done.” You are not finished with a feature until it meets the definition of done. It acts as a checklist or set of heuristics that help you realize (and admit) when you have more work to do. A definition of done might include things like these:

– unit tested
– verified by customer & customer tests
– usability tested
– integrated
– integration tested
– documented
– performance tested
– peer reviewed (via pair programming or some other mechanism)
– refactored, readable, duplication-free
– bug-free

Of course, when you first introduce this idea, your definition won’t be this comprehensive. Start small (coded, unit tested, peer reviewed, and refactored makes a good start). Every few iterations, if you are successfully meeting your current definition, add something to it. Eventually you will have a pretty comprehensive definition of done, and each time you finish a feature, you’ll have a lot less stuff left over to finish “later”.

Do you have any other “I’ll fix it later” variants to add to this list? Stories about how planning on fixing something later came back to haunt you, or how adhering to a definition of done saved a lot of potential pain? When is it ok to “fix it later”? Where’s the fine line between LeBlanc’s Law and YAGNI? Please share your thoughts in the comments section!

Ryan Cooper: Ryan is the founder of Empirica Software and has worked on and off for Base36 for the past five years. Ryan advocates using agile, feedback-driven software development techniques in order to deliver the simplest solutions and to meet the business needs of clients as quickly as possible. He believes that focusing on delivering running, tested features early and often, fostering a culture of trust and collaboration, and working closely with domain experts allows small teams to achieve phenomenal results.

Thanks to IvanWalsh.com for the use of their photograph.