Test automation is a hot topic right now. Managers think they've found a silver bullet for solving quality control issues, test automation tool vendors and consultants think they've found a steady income stream, and developers are split between thinking automated testing is an essential tool and a waste of time. As in the case of most automated developments, it's complicated, and while it can be supremely useful, it's not foolproof.
I don't envy the managers and decision makers who have to make informed choices while hearing conflicting messages from messengers with their own hidden agendas. That's why I'd like to walk you through four things every decision maker needs to know about automated testing in order to define their own test automation strategy.
A common misconception, courtesy of test automation tool vendors, is that automated testing will save money because you won't need as many testers. However, test automation and human testing don't serve the same purposes.
Automated testing shines when you have to check that an individual operation returns a certain value when it is invoked with a specified input and other predefined conditions are met. Human testers aren't as good at this type of testing because, well, it's so boring and repetitive. People get bored, they get lazy, and they make mistakes. Computers suffer from no such malady.
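This kind of check is exactly what a unit test captures: a specified input, predefined conditions, and one expected value. A minimal sketch in Python (the `calculate_discount` function and its pricing rule are hypothetical, purely for illustration):

```python
import unittest


def calculate_discount(order_total):
    """Hypothetical pricing rule: 10% off orders of 100 or more."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0


class CalculateDiscountTest(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        # Specified input, predefined condition (total >= 100),
        # and a single expected value: perfect territory for a machine.
        self.assertEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        # A human would find re-checking this boundary tedious;
        # the computer never gets bored of it.
        self.assertEqual(calculate_discount(99.99), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Run thousands of times, a check like this costs nothing and never gets sloppy, which is precisely the point of the paragraph above.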
Human testers, on the other hand, are best at verifying that complex operations with multiple steps are working as expected. Building automated tests for these complex operations takes so long, and the resulting tests fail so randomly, that the time spent developing them might as well be spent doing the testing by hand. Furthermore, only human testers can really tell you if your software is functioning the way a user will need it to.
Software developers love best practices because they don't have to justify their decisions to anyone as long as they follow them. For other reasons too, but mostly that one. That's why it will come as no surprise that there are a lot of best practices when it comes to writing automated tests.
The problem with following best practices is that a best practice is only a viable approach when its preconditions are met. This simple reality makes best practices a lot less convenient, so it's a reality that's often ignored. This is a problem because automated testing (and testing in general) essentially helps you acquire information from the tested software. In other words, it helps you know whether the software is working as expected. The million-dollar question is: how confident do you have to be before you can release it?
Well, it depends. If you're writing software for a spacecraft, an ER monitor, or a banking system, you have to be very confident. However, if you're writing the next Facebook, you don't have to be as confident because bugs do not generally cause serious consequences, unless the next Facebook deals with private information, of course.
This is why best practices are basically worthless until we adapt them to fulfill our information-gathering needs.
Last year I asked my readers to identify the reasons they don't write automated tests, and collected the most popular answers.
To make matters more interesting, most people who answered this survey had 5+ years of experience in professional software development, meaning that even experienced developers run into problems that prevent them from writing automated tests. In other words, writing useful automated tests is harder than writing "normal" code.
I think one reason for this is that test code is seen as less important than production code. This theory is supported by the fact that developers tend to write fewer automated tests if their managers are pressuring them to finish more features.
Test code is still code, though, and developers get better at coding by writing code. However, since test code is different from production code, writing production code doesn't help developers write better test code, and vice versa.
If you want to reap the benefits of automated testing, you must treat test code the same as production code. This means developers must have enough time to write test code, and they must be allowed to rewrite existing tests if they can make them better. Also, if you review production code, you should review test code as well.
Code coverage is a good tool when it's used in the right way. It's a great way to identify the crucial parts of your system that aren't tested properly by automated tests. Strangely, however, many people seem to believe that 100% code coverage is an absolute requirement.
There are two problems with this:
Enforcing code quality requirements like 100% code coverage often just leads to subpar testing.
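A short illustration of why chasing a coverage number alone produces subpar testing: the first test below executes every line of the function, so a coverage tool reports 100%, yet it asserts nothing and would pass even if the logic were completely wrong. (The `apply_vat` helper is hypothetical, invented just for this sketch.)

```python
def apply_vat(net_price, rate=0.2):
    """Hypothetical helper: add value-added tax to a net price."""
    gross = net_price * (1 + rate)
    return round(gross, 2)


def test_apply_vat_covered_but_worthless():
    # Executes every line of apply_vat, so line coverage reads 100%...
    apply_vat(100)
    # ...but there is no assertion: a wrong result still "passes".


def test_apply_vat_actually_useful():
    # A real check: pins the expected value for a known input,
    # so a broken calculation actually fails the build.
    assert apply_vat(100) == 120.0
```

The coverage report cannot distinguish these two tests; only a reviewer reading the test code can. That is the gap a raw coverage target leaves open.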
Automated testing isn't perfect. Neither is human testing. But once in a while when you combine a couple of imperfect things, you can come up with an end result that's pretty great.
Here are the 5 steps you need to follow to create a test automation strategy.
1. Figure out where you are right now.
If you want to make changes to your existing processes, it is essential that you know where you are right now. Ask yourself the following questions: do we write automated tests? If so, how many of them do we have? Does our current test suite have any problems? If so, what problems does it have? Answer these questions honestly, and record your answers so you can track your progress.
2. Decide how confident you have to be before you can release a new version.
If you want to create a test suite that is optimal for your purposes, you need to decide on the level of confidence you will require before releasing a new version of anything. Whether you're updating software for NASA or creating an app that will remind husbands to floss, the answer to this question will decide the level of required automated testing.
3. Decide what kind of information you need before you can release a new version.
The goal of this phase is to identify the questions that must be answered before you can release a new version. For example, if you need to know that each feature is working exactly as specified, then this is the information you've decided you need. This along with your level of required confidence specifies the requirements of your automated test suite.
4. Give those requirements to your developers and testers, and let them do their job.
The goal of a tool vendor or a consultant is to sell their tools or services, so it's wise to ignore them at first. Instead, give your requirements to your developers and testers, and ask how those requirements can be fulfilled. If your developers and testers suggest that you should get a testing tool or hire an external consultant to help them, do that. Otherwise, keep in mind that if you introduce a new tool or hire an external consultant anyway, you might create unnecessary friction with your employees, because they may feel they aren't trusted.
5. Give your developers and testers enough time to fulfill your goals.
If you want your automated tests to be as good as they need to be, you can't pressure your developers and testers about the schedule. One thing that will help you stay on the right path is remembering why you started writing automated tests in the first place. If you had a good reason to do so, and I'm going to assume that you did, it makes no sense to abandon it just because, say, one customer would like to speed things up. Think long-term. Think of the greater good.
About the author:
Petri Kainulainen is passionate about software development and continuous improvement. He specializes in software development with the Spring Framework and is the author of the book Spring Data. He also shares his thoughts regularly on his blog.