Why do we test our code?
The question itself seems sacrilegious in today’s developer culture. Admitting that you aren’t building a comprehensive suite of unit and integration tests, let alone not testing first, amounts to hanging a giant “newb” sign around your neck. But sacred cows in any culture can lead to stagnation and a lack of innovation; if you can’t provide good cause for a practice that’s demanded of you, why bother doing it?
I’ve worked on a number of different projects, spanning maturity levels, technologies, and team sizes. While most have employed some degree of testing, I can’t think of a single one that employed what would be considered a “robust test suite”. Certainly, open source libraries to which I’ve contributed (and the ones I’ve created) have almost all included solid unit test coverage. But there’s a gap between what many product teams and consulting agencies preach (“We test all our code all the time, and only want developers who believe in testing!”) and what they practice (“We’ve got to get this feature out the door, forget the tests for now.”).
What makes the testing question more complicated is that you can never fully automate your way to safety. No matter how comprehensive your tests, you’re going to need a QA team. The most glaring reason is that you can’t write an automated test for an app’s appearance — and with the rise of the responsive web, testing appearance (and by extension, usability) becomes even more important. If you need to spend resources on QA-ing an app anyway, why should your development team spend extra time writing tests (or testing first) when they could instead be building features or fixing bugs?
The most compelling answer I’ve found to this question comes from Sandi Metz in her book Practical Object-Oriented Design in Ruby. It’s probably not surprising that a strong case for testing comes from a Rubyist; the Ruby community has been loudly preaching test-first Agile methodologies for years. But Metz articulates the ideas with unusual clarity and force. “Tests give you confidence to refactor constantly,” she writes. This idea takes the common catchphrase of testing — “red, green, refactor” — and turns it on its head. Instead of refactoring being a part of testing, refactoring becomes the point of testing. “The true purpose of testing,” Metz argues, “is to reduce costs.”
As a Node developer, the idea that testing gives you the freedom to refactor is entirely compelling. As I wrote in my article on using private GitHub repositories for web application submodules, Node developers are constantly on the prowl to refactor common bits of logic into their own encapsulated components. By test-driving feature development, you make it significantly easier to break out common modules:
1. Test from the outside in: integration tests first, then unit tests.
2. Implement the feature (make the tests pass).
3. Isolate code that’s useful elsewhere, break it off into its own module, and confirm the tests still pass.
In Node-land, “red-green-refactor” becomes “red-green-refactor-publish [module]”.
Further, building proper test coverage may require some additional time from your development team, but it should also decrease the pressure on your QA team — and help everyone involved by reducing coordination costs. Consider a scenario in which one developer bumps an application dependency to a newer version in order to implement a new feature (or fix a bug). The developer sanity-checks the application to see whether the new version causes any problems elsewhere in the app; finding none, he or she adds the updated dependency to the app’s manifest and deploys. A week later, a bug report comes in; it’s vetted and validated by a member of the QA team, who then needs to coordinate with multiple developers to identify the cause of the bug (a regression from the bumped dependency), fix it, and confirm the fix. The cost of coordinating a fix for this regression clearly outweighs the cost of writing tests in the first place, and this is exactly the sort of scenario automated testing can address. In the Node ecosystem, where we frequently rely on many small third-party modules, it’s especially important to automatically verify that our integration with those modules keeps working.
But testing can’t reduce costs if we aren’t efficient testers. And being an efficient tester is only partially dependent on your knowledge of good OO design; it is equally important to have a full and proper understanding of the tools with which you need to test.
Which brings me to the point of this post. The Node community has a very effective toolbox for testing first, testing well, and testing comprehensively. But despite the capabilities of these tools, there’s very little discussion on the web of how to use them. An introduction to Mocha or Jasmine doesn’t really explain how to use them to TDD a Node app, and it certainly doesn’t explain how to integrate these libraries into a continuous-deployment setup with Grunt.
In the next few posts, I’ll be stepping through my own “best practices” for testing a Node app. I’ll be the first to admit that there are probably better approaches to testing, and I’ll be quite happy to receive feedback, but hopefully this series can serve as a guide to get the Node testing discussion started. If there’s anything you’re particularly curious about, please let me know and I’ll try to address it.