What software testing is not and what it cannot achieve
This is the second part of a two-part blog post on what software testing is and is not, and what it can and cannot achieve. If you have not done so already, please start by reading the first part before this one – the introduction contains some information that is also relevant to this post. This second part is meant to help correct some common misunderstandings about what software testing is capable of.
As expected, this second part stirred up a bit of resistance among VALA sales people (sorry about that), so I wish to emphasize again that everything here is based on my personal views and might not be fully in line with VALA’s views. But hey, how great is it that I am still allowed to write and publish stuff like this on VALA’s blog! 😉
What testing is not and what it cannot achieve
I am confident you have heard the term “quality assurance”, or QA for short, every now and then. Oftentimes, the term is used interchangeably with “testing”, e.g. in job titles: a tester is often called a “QA Engineer” or something similar. In my view, however, that classification is bad and misleading because of the mental images its use can easily create.
Please, allow me to elaborate:
In a typical scenario, a tester is one of many people working in a team that is designing and building a specific software solution. However, the role of a tester is usually quite limited, in terms of authority. Here are some of the things that a tester will typically not be able (or is not allowed) to control, have access to, or might not possess the necessary skills to fix when discovering an issue:
- Infrastructure (servers, databases, routers and general network connectivity etc.)
- The product’s source code (and, with it, the ability to fix the bugs they find)
- The project schedule
- The size and composition of the team
So how could a tester assure quality if they cannot fix a bug they just discovered? How could they assure quality if it became apparent that testing would need more time but they can neither alter the schedule nor add more people to the team? The simple answer is: they cannot, because testing is not a quality assurance activity – it is a quality awareness activity!
Quality assurance is a group effort that will only result in a high-quality product through the combined efforts of the entire team – management included. Each piece plays a role in achieving the desired end result but none of them is likely to achieve it alone. Just like you cannot build a jigsaw puzzle with just one piece – you need all of them. Preferably in the right places, too.
As a side note, I feel compelled to say (trust me, this was not requested by VALA – I am writing this of my own accord) that VALA does use the term better, and more responsibly, than many other operators in the field because, at VALA, the term QA specifically refers to the group effort I mentioned, even if testers may have “QA” in their job title.
In classic werewolf movies and fairytales, the hero always saves the day by killing the monster with a silver bullet to the heart, and it works every single time. Software testing, however, is no fairytale, and there are no silver bullets. There are no one-size-fits-all solutions because every application is unique, so you cannot just apply a static set of testing techniques and approaches and expect to get a high-quality product as the end result every time. Testing must necessarily (and this one thing I dare claim as a stone-cold fact) adjust to the context-specific needs and requirements of the unique product being developed in order to be effective and valuable in the context at hand.
For example, the requirements for testing a car’s Engine Control Unit are likely completely different from those of testing the control logic of a skyscraper elevator, or a web shop selling tea online, or an airport surveillance radar.
Because of this, the term “best practice” in software testing is a meaningless, hollow term that usually gets thrown around by people hoping to benefit financially from the mental image it can create in other people. If the needs and requirements of different kinds of products are completely different from one another, then you simply cannot use the same tools and approaches on all of them. You cannot fix a broken vase with a hammer even if the hammer is ideal for driving nails into a wall.
To further elaborate on this, a practice can only be “best” if it is the ideal approach in the unique context at hand at the time (because it will likely no longer be the ideal approach later on). However, since no one outside of a software development project can really know what that unique context is, there is no way an outsider could tell you what the “best practices” are in your case.
If you follow a more or less static standard (I am looking at you, ISO 29119) that tries to cover every contingency, then it necessarily loses relevance in any specific context. And if a practice is not fully relevant in your specific context, then how could it possibly be a “best practice” for you? The simple answer is: it cannot. The best you can hope for is guaranteed mediocrity. Except there are no guarantees.
Find your own best solutions, and ignore anyone trying to sell you snake oil by painting pretty pictures of perfection without even knowing what it is you are actually doing.
Test automation is a misleading term
Everyone talks about test automation, and this may be a bit of a Don Quixote situation here but, to me, the term “test automation” is a bit of a red flag. Yes, I have used it in the past myself, but I have since learned a thing or two about testing and would like to think that nowadays I know better. The problem I have with the term is that it is misleading or, rather, that it is often used in a misleading way.
Allow me to elaborate:
(Software) testing is a deliberate activity that requires conscious effort. A tester needs to be able to identify potential problems in a wide range of possibilities that are in constant flux, across many different domains. A tester needs to be able to acknowledge and accept (and possibly sometimes also to reject or ignore) many different kinds of values (moral, ethical, financial and otherwise) that may change over time. A tester will also possess knowledge and understanding that is far deeper and wider than what any machine is currently capable of possessing. Because of these (and other) factors, only a human is capable of testing. Computers, including the current AIs, mostly just run through algorithmic checklists based on logic programmed in by a human.
Or, in other words: a computer can check whether or not certain conditions are met, when programmed to do so, but only a human can make the judgment call on what checks are necessary, and when. Also, only a human can interpret the (meaning and importance of the) results, or the lack thereof, and decide whether or not we have a problem. Sometimes, a failing check, or a missing result, is the desired outcome even if, at some other point in time, that exact same outcome would be a problem. A computer does not understand the distinction.
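To make the distinction concrete, here is a minimal, hypothetical sketch (the function name and the 200 ms threshold are invented for illustration): the machine can evaluate exactly the condition it was programmed with, but deciding whether a failing check actually matters remains a human judgment.

```python
# A machine-runnable "check": it evaluates only the condition it was
# programmed with -- nothing more.
def response_time_check(measured_ms: float, limit_ms: float = 200.0) -> bool:
    """Return True if the measured response time is within the limit."""
    return measured_ms <= limit_ms

# The computer reports a plain pass/fail for each measurement...
results = {ms: response_time_check(ms) for ms in (120.0, 250.0)}

# ...but it cannot say whether a FAIL is actually a problem. During a
# deliberate stress test, 250 ms might be expected and fine; on a checkout
# page at launch, it might be a showstopper. That interpretation is the
# tester's call, not the tool's.
for measured, passed in results.items():
    print(f"{measured} ms -> {'PASS' if passed else 'FAIL'}")
```

The code never decides anything beyond the programmed threshold; every question of "is this outcome acceptable here and now?" sits outside it.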
As an easy-to-understand analogy: a hammer cannot know where or when to drive nails, or how many, or what kinds of nails are needed. You would likely not want to try to drive those nails into a wall without the hammer, though.
To be clear, I am absolutely not saying people should not use tools, such as a computer, while testing; quite the contrary. A computer can be a very valuable and, in some cases, an absolutely necessary tool. Performance testing, in all its forms, is a good example. For a software solution that is not yet available to end users, emulating (to a degree) the interactions of human users and the resource load caused by, e.g., hundreds of simultaneous threads hitting a service would typically be very difficult, if not impossible, without performance testing tools and scripts.
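As a rough sketch of what such tooling does under the hood, the snippet below fires many overlapping calls through a thread pool. Note that `call_service` is a stand-in invented for illustration; a real load script would issue actual requests (HTTP calls, database queries, etc.) against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(request_id: int) -> int:
    """Stand-in for a real request to the system under test."""
    time.sleep(0.01)  # simulate service latency
    return request_id

def run_load(total_requests: int = 100, concurrency: int = 20) -> list:
    """Fire many overlapping calls, the way many simultaneous users would."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # map() preserves request order while the calls overlap in time
        return list(pool.map(call_service, range(total_requests)))

responses = run_load()
print(f"{len(responses)} requests completed")
```

The tool generates the load; a human still has to decide what load profile is realistic for the context and what the resulting response times actually mean.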
The point I am trying to make here is more of a psychological one because the words we use can have a huge impact on how we view the world, what we think about it, and what we come to expect.
Because of that, it would likely be better to replace the term “test automation” with one that is more in line with reality, such as “tool-assisted testing”, a term coined by the testing gurus James Bach and Michael Bolton (see the links to their blogs at the end of this post). In my view, this term is far better because it paints a more truthful image of reality: a human does the testing, using tools along the way to make it easier and more effective. The tool does not, and cannot, test.
However, as I mentioned at the beginning of this section, the term “test automation” is so deeply ingrained in the field of IT that it will not go away for a long time – if ever – and that is OK: the most important thing is to understand what is actually meant when the term is used. This alone would likely be enough to erase a number of (quality) problems from the field of software testing.
I know some of the points above can be a little controversial. This is partially because of the differing views of the different schools of software testing that I mentioned in the introduction of part one but the issue can also be compounded by e.g. the complexity involved. Software testing is, generally, a relatively poorly understood field of expertise and there are many voices, commercial and non-commercial alike, contradicting each other. Some of those voices make wild claims that are either untrue, inaccurate, or try to trivialize the entire craft. So here are some claims of my own, reiterating the points above, that try to do the exact opposite:
- Software testing is not just about finding bugs
- It is about exploration, critical thinking, learning, and sharing valuable information with people who matter. Finding bugs is only a small, albeit important, part of that.
- You don’t need the actual software in order to be able to test it (to a degree)
- Software testing is by no means limited to just exercising the product code – many intangible things relevant to the context, such as ideas, designs, and expectations, can also be tested.
- Software testing is not quality assurance
- This is because a tester will typically not be able to, or allowed to, make all the decisions and changes that would be required in order to assure quality.
- Quality assurance requires the combined efforts of the whole team.
- There are no “best practices” in software testing
- All software products are unique so the approaches, tools, procedures, methodologies etc. need to be chosen accordingly, on a case by case basis, depending on the context.
- As a result of this, there are no one-size-fits-all solutions.
- There is no test automation
- A computer cannot test – that is the human’s territory. A more accurate term, such as “tool-assisted testing”, would likely be better because it is more in line with reality: the human is in control, using various tools to make the testing easier or more effective.
Links to well-known context-driven testers’ blogs
- Michael Bolton: https://www.developsense.com/blog/
- James Bach: https://www.satisfice.com/blog/
  - Specifically: “Context-Driven Methodology” https://www.satisfice.com/blog/archives/74
- Cem Kaner: https://kaner.com/
About the writer
Petteri is a software testing enthusiast who has been testing professionally since 2004. He is an active member of the global testing community and has authored various publications, ranging from blog posts to a joint book project with other testing professionals from around the world. Petteri was a “Finnish Tester of the Year” nominee in 2011. He currently lives in Estonia but often works in the Helsinki metropolitan area.