QA tips and tricks: Why a clear and robust test plan is essential

Written by: Kirk Chambers on December 4, 2015

As a QA engineer, I hear a lot about new tools for testing software. This is especially true in the world of mobile, which doesn’t seem to have the same industry inertia as other software platforms. Many of the tools I hear about are exciting new ways to automate testing, such as Android’s UIAutomator, Apple’s XCTest and Appium. Strangely, I almost never hear about testing using an old-school, human-driven process. There’s a lot of resistance because it is difficult to “revolutionize” testing methodologies. Nevertheless, we put a lot of emphasis on innovative manual testing at POSSIBLE Mobile. The following outlines my experience and addresses why having a clear and robust test plan is essential for QA.

Back in high school, I had an algebra teacher who drew a sharp distinction between concepts and execution. He referred to execution as “turning the crank.” Given that chiding nickname, it should be no surprise that his educational focus was on understanding the concepts. Anyone can follow a solution manual and turn the crank to solve a problem, but only someone who understands the concepts can really pass the final.

When I hear about these new testing tools, I generally view them as new ways to turn the crank in testing. Anything can execute a test plan; after all, there’s no fundamental reason why a human needs to run through a test plan rather than a machine. Both are capable of turning the crank. There’s no fundamental reason why a human needs to write the test plan either. Well, other than that machine learning hasn’t gotten that good yet.

The part many of these tools leave out is how the tests get written. The methodologies and concepts need to be grasped before you start turning the crank. Extracting anything useful from automated testing requires a robust set of test cases. You need clear goals: launch the app, poke this set of buttons, get this result. This is true regardless of which method is used.
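Those clear goals can be written down as explicit data, so a human tester and a script runner both turn the crank the same way. Here is a minimal sketch; the `TestCase` class and the login case are hypothetical names invented for illustration, not part of any of the tools mentioned above.

```python
# A test case as explicit data: a name, ordered steps, and the one
# observable result that defines a pass. (All names are hypothetical.)
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    steps: list      # ordered actions, e.g. "Tap the Login button"
    expected: str    # the result that makes the case pass or fail

    def describe(self) -> str:
        """Render the case so any runner, human or machine, executes it identically."""
        lines = [f"Test: {self.name}"]
        lines += [f"  {i}. {step}" for i, step in enumerate(self.steps, 1)]
        lines.append(f"  Expect: {self.expected}")
        return "\n".join(lines)


login_case = TestCase(
    name="Login with valid credentials",
    steps=[
        "Launch the app",
        "Enter a valid username and password",
        "Tap the Login button",
    ],
    expected="Home screen is shown with the user's name",
)
print(login_case.describe())
```

The point is not the class itself but the discipline: every case states up front what to do and what counts as success, before anyone or anything starts cranking.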

Without a clear test plan, automated or manual, you can spend hours cranking through test suites and still fail to find problems. Additionally, if a developer gets shoehorned into writing test cases for an automated system, there is a real possibility the developer will take the “happy path” and only write test cases that succeed. Of course, I’m not implying this happens out of spite or apathy; rather, developers are generally trained to write software, not break it.
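The difference is easy to see in miniature. The sketch below uses a stand-in `validate_login` function (invented for illustration): the happy-path suite exercises only the success case, while a QA-minded plan also asks what happens with bad and missing input.

```python
# Hypothetical function under test: a stand-in login validator.
def validate_login(username: str, password: str) -> str:
    if not username or not password:
        raise ValueError("missing credentials")
    if password == "correct-horse":
        return "home_screen"
    return "error_banner"


# Happy path only: everything succeeds, and nothing new is learned.
assert validate_login("kirk", "correct-horse") == "home_screen"

# A robust plan also writes the cases that are supposed to fail:
assert validate_login("kirk", "wrong-password") == "error_banner"
try:
    validate_login("", "")
except ValueError:
    pass  # empty input is rejected cleanly instead of crashing deeper
```

The happy-path assertion alone would pass even if the error cases crashed; only the failure-mode cases would catch that.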

Fuzz testing tools, such as Android’s UI/Application Exerciser Monkey, are another approach people explore. These tools seem like a nice solution: poke somewhere random in the app. Did it crash? Great! Bug it! Poke somewhere else, and repeat until it crashes or you’re satisfied. Are these useful tools? Quite probably. Will they find random instabilities in your app? Definitely, because that is almost by definition what they do. But are they a great choice for verifying functionality? It’s been said that enough monkeys with enough typewriters will eventually write the works of Shakespeare, but the adage says nothing about how long that might take. Imagine the daunting task of having a random process log in to a Facebook account.
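You can make the monkeys-and-typewriters point concrete with a toy simulation. This sketch (entirely hypothetical, seeded for repeatability) models a three-step login flow where only one of five buttons advances the flow and any wrong tap backs out to the start, then counts how many random taps a “monkey” needs to get through it.

```python
# Toy model: random taps against a 3-step flow with 5 buttons per
# screen. Only the one "right" button advances; anything else resets.
import random


def taps_until_login(seed: int, buttons: int = 5, steps: int = 3) -> int:
    """Count random taps until the monkey completes every step in order."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    progress, taps = 0, 0
    while progress < steps:
        taps += 1
        if rng.randrange(buttons) == progress:
            progress += 1   # happened to hit the right button
        else:
            progress = 0    # a stray tap abandons the flow
    return taps


print(taps_until_login(seed=4))  # taps needed for one lucky "login"
```

A directed test case finishes the same flow in exactly three taps; the random process needs three correct taps in a row out of five choices each, so it typically burns far more. Random poking is great at surfacing crashes, and terrible at verifying a specific feature.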

At the end of the day though, what does it matter? Should someone invest the time to write test cases? Should a company invest the money into a QA department or an automated system?

Your code will be tested. The choice you get to make is by whom and by which method. Choose wisely, because it could be any of the following:

  • An official QA team verifying functionality
  • A set of automated scripts someone put together
  • The client during UAT (aside: if you wait until UAT to start testing, there is not a dev team on the planet that can fix three months’ worth of dev bugs in a reasonable timeframe)
  • A third party developer who sends you angry emails because their services just went down when you moved too fast and broke one too many things
  • An end user leaving 1-star reviews because your apps don’t deliver

Ultimately, you want to be sure your method is robust, efficient and effective. If you enjoyed this article, be sure to check out our article about how to use assertions during development.

Kirk Chambers
Kirk Chambers is a QA Engineer at POSSIBLE Mobile. Formerly a server-side programmer, he decided he could do more damage breaking things instead of fixing them. In his free time, Kirk plays bass in local bands. He is also a Mario Kart enthusiast who enjoys conquering his competitors.