Testing, depending on who you ask, can be a lot of things: from mindlessly following detailed instructions written by someone else (perhaps years ago, for a totally different version of the product), to the challenging, inspiring task of using your intellect to gather information through exploration of a product or project, and all sorts of shades in between. Is any one approach more correct or valid than the others?
The answer, I think, is “It depends”.
It depends on what you (or whoever pays your salary, if you are the person doing the testing) want to get from the testing:
- To be able to present the appropriate collection of artifacts (test specs, reports, estimates, whatever) as demanded by The Process?
- To relieve programmers of the plight of doing even basic testing on their own creations, by throwing the software over the wall to those lowly monkey testers (programmers, after all, are so much more valuable, and their time should be spent on important stuff)?
- To place a check mark next to every requirement so we can show the customer that we have done proper testing?
- To gather as much information about the product/project as reasonably possible within the given constraints (available time and resources) to help the people in charge make the best-informed decisions about how to go on with the project (Are we ready to ship? Do we need more people? Do we need fewer people? Do we need to change priorities? Do we need a different skill set? Does the product work? Does it actually solve the problem? etc.)?
There are of course many more shades here as well, but I'll keep the list short for now. The funny thing is: when I look back on what I have seen, heard, and read about testing, it seems that in many cases people live under the illusion that they are aiming for point 4 (at least that's what they would tell you) while their work is quite clearly driven by one or more of the first three points.
In some of the scenarios listed you probably would want a “tester” with mean word processing skills; one who can spend a few weeks, or even months, rewriting the information already present in the requirements, or other documents, into detailed, step-by-step instructions. This way big, comprehensive documents can be produced, document checklists can be completed, milestones can be reached, people can take pride in “good” work, and great happiness is ensured. Not much new information is gathered, but maybe that wasn’t the real goal anyway?
Sure, most of the time there is a need for some documentation. However, that doesn’t mean that maximising the amount of documentation is better. If your goal is somewhat consistent with point 4 above, then doing testing and relaying the information is what creates value. Let the testers provide the value they are best suited for, and if you find somebody willing to pay for huge amounts of documents nobody is going to read, well, get some Certified Word Processing Engineer on the case. That is probably more efficient for all parties.
If you want to get information you need to spend time gathering it, or doing things that enable you to do so. Any time spent on other activities diminishes your RO(T)I (Return On (Testing) Investment). This might of course not be a big issue if you don’t invest in your testers, but if that is the case, why even pretend you are doing testing? If your testers are not trained or educated to do more than happy-path testing by following a prepared script, and you are not interested in doing something about it, then your quality problems have nothing to do with your testers.