To verify that something is working correctly, you must first verify that it was not working.
The above philosophy is a version of the old proverb “If it ain’t broke, don’t fix it.” Note the implied act of testing that it is broken. In the various software projects that I’ve worked on over the past six years, I have worked to include and maintain automated test suites. Each application’s test suite includes different kinds of tests, as well as tests that were created for a variety of reasons.
The Kinds of Tests
- Unit tests – testing a small chunk of code (e.g., determining a Page’s URL upon creation)
- Functional tests – testing a single action (e.g., creating a Page)
- Integration tests – testing the interaction of multiple actions (e.g., Login → Create a Page → View the new Page → Logout)
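As a sketch of the first kind, a unit test for deriving a Page’s URL might look like the following. The `slugify` helper and its behavior are hypothetical, stand-ins for whatever small chunk of code is under test:

```python
import unittest


def slugify(title):
    """Hypothetical helper: derive a Page's URL slug from its title."""
    return title.strip().lower().replace(" ", "-")


class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("My New Page"), "my-new-page")

    def test_surrounding_whitespace_is_stripped(self):
        self.assertEqual(slugify("  Hello  "), "hello")


if __name__ == "__main__":
    unittest.main()
```

Each test exercises one narrow expectation, which is what keeps unit tests fast and their failures easy to diagnose.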
The Reasons for a Given Test
- Adding a new feature
- Modifying an existing feature
- Squashing a bug
- Refactoring the code base
- Verifying an interaction
In each of these cases, the code is changing. By taking the time to write a handful of tests, I can better understand the problem as well as expose any other underlying issues.
Just this past week, by working on one test, I discovered the solution to a problem that I hadn’t been able to solve.
Why Automated Tests
The big payoff is that if I have a robust test suite, I can run it at any time, over and over, and verify that all the tests pass, which in turn raises my confidence that the tested system is working properly. It does not, however, guarantee that the system is working, only that what I’m testing is working.
As an added perk, the tests I write convey what I expect the system to do, which means that taking time to understand the tests may help me understand the nuances and interactions of a more complicated software system. The tests also help my fellow programmers understand what is going on.
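A descriptively named test can read almost like a specification of the system’s behavior. As a sketch, with a hypothetical `FakeSite` standing in for the real application:

```python
import unittest


class FakeSite:
    """Hypothetical stand-in for the application under test."""

    def __init__(self):
        self.pages = []

    def create_page(self, title):
        # A page without a title is rejected outright.
        if not title:
            raise ValueError("a page needs a title")
        self.pages.append(title)


class TestCreatePage(unittest.TestCase):
    def test_creating_a_page_with_a_title_adds_it_to_the_site(self):
        site = FakeSite()
        site.create_page("About Us")
        self.assertIn("About Us", site.pages)

    def test_creating_a_page_without_a_title_is_rejected(self):
        site = FakeSite()
        with self.assertRaises(ValueError):
            site.create_page("")


if __name__ == "__main__":
    unittest.main()
```

A teammate skimming the test names alone learns two rules of the system without reading the implementation.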
However, a test suite that runs successfully does not necessarily indicate that the system works; it only verifies that the tests pass. Which leads to…
Problems with Automated Tests
- Did I think of all of the possible scenarios?
- Did I properly configure my test environment?
- Did I account for differences between my test environment and the production environment?
- Can I make the test environment as close to the production environment as possible?
And the real big kicker…
Do I Have the Support to Write a Test Suite?
It takes time to write tests, and in some cases people may balk at taking that time. But how are you going to test “the whole system” after you’ve made a “small change”? And another… and another… And how about that first change by the new developer, you know, the one who came in after your entire programming team got hit by the proverbial bus?
It Doesn’t Work if it isn’t Tested
I was going to test my software anyway, so why not take the time to have the machine do what it’s best at doing: repetitive tasks. I took time to learn how to write tests and then to write explicit instructions for my computer to do the things that I should be doing with each code update.
Is it foolproof? No, but so long as I’m learning and “taking notes”, my tests are learning as well. Ultimately these tests reflect my understanding of the software application.
Example of Fixing a Bug with Testing
1. A software bug is reported.
2. I run the test suite and verify all the tests pass.
3. I write a test to duplicate the failure. This test must initially fail; after all, a failed test verifies that something is broken.
4. I update the code until the test passes; now I have verified that it is working. Then I run the full test suite and verify all the tests pass.
Now, imagine if your initial test suite had zero tests, and in fixing the bug, you created one test. Run that test after each update to make sure you don’t regress.
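The steps above can be sketched as a single test-first bug fix. The `page_url` function and the reported bug (consecutive spaces producing URLs like `my--page`) are hypothetical illustrations:

```python
import unittest


def page_url(title):
    """Hypothetical function under test. The fixed version: split()
    collapses runs of whitespace, so "My  Page" no longer yields
    the buggy "my--page"."""
    return "-".join(title.lower().split())


class TestPageUrlBugFix(unittest.TestCase):
    def test_consecutive_spaces_collapse_to_one_hyphen(self):
        # Step 3: this test failed against the buggy implementation,
        # verifying the bug. Step 4: it passes after the fix above.
        self.assertEqual(page_url("My  Page"), "my-page")


if __name__ == "__main__":
    unittest.main()
```

That one test now guards against this regression on every future run of the suite.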