tl;dr Testability measures the ability to test. When it’s easy, you get deeper and faster info about the product.
Testability measures our human ability to test: how skilled we are, how easy it is to test, and how deep we can go. One of a tester’s responsibilities is to advocate for testability within the team, highlighting what makes testing harder or slower.
If testing is questioning a product in order to evaluate it, then testability is anything that makes it easier to question or evaluate that product.
When testing is hard or slow, bugs have more time and opportunity to stay hidden. Those bugs — deeper, less obvious, more intermittent — may be far worse than any bugs discovered so far.
Bret Pettichord defines testability as visibility and control. Visibility is our ability to observe the states, outputs and other side effects of the system under test. Control is our ability to give inputs to the system under test or set it in specific states.
Do not confuse testability with automatability, which measures how easy it is to automate the interaction with and control of our system. For instance, logging is a feature that improves testability because it helps humans inspect how the system works; browser cookies enhance automatability because they allow automation to control a user session.
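As a sketch of visibility and control (with hypothetical names, not from the original), consider a toy checkout service: an injectable clock gives control over state, while logging and an observable event record give visibility into side effects.

```python
import logging

logger = logging.getLogger("checkout")

class Checkout:
    """Toy system under test (hypothetical example)."""

    def __init__(self, clock):
        # Control: the clock is injected, so a test can set any "time".
        self.clock = clock
        self.events = []  # Visibility: observable record of state changes

    def pay(self, amount):
        event = {"amount": amount, "at": self.clock()}
        self.events.append(event)
        # Visibility: logging lets a human inspect what happened and when.
        logger.info("payment of %s at %s", amount, event["at"])
        return event

# A test can control the inputs and observe the outputs and side effects:
checkout = Checkout(clock=lambda: "2024-01-01T00:00:00Z")
event = checkout.pay(42)
assert event["at"] == "2024-01-01T00:00:00Z"  # control
assert checkout.events == [event]             # visibility
```

Without the injected clock the test would depend on real time and become intermittent; without the event record or log, a failure would be harder to diagnose.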
There are four main groups of variables that influence testability: value-related, intrinsic, project-related and subjective. Below are some heuristics adapted from James Bach. To discover even more dimensions that influence testability, refer to Maria Kedemo’s dimensions of testability.
- Value: changing the quality standard or our knowledge of it.
  - Oracles. We need ways to detect each kind of problem that is worth looking for.
  - Users. The more we can talk to and observe users, the easier it is to test for them.
  - Environment. Testing is more accurate when performed in the users’ environment (or similar).
- Intrinsic (a.k.a. product): changing the system itself.
  - Reliability. Issues slow down testing since we must stop to report them or work around them.
  - Tolerance. The less quality required or the more risk that can be taken, the less testing is needed.
  - Controllability. Ideally we can provide any possible input and invoke any possible state or combination of states easily and on demand.
- Project: changing the conditions under which we test.
  - Information. We get all information we want or need to test well.
  - Sandboxing. We are free to do any testing without fear of disrupting users or teams.
  - Time. We need time to think, prepare and deal with surprises.
- Subjective (a.k.a. tester): changing the tester or the test process.
  - Test strategy. A strategy will reduce waste by focusing the testing efforts on what matters.
  - Context knowledge. The more we know about the users and the system, the better we can test.
  - Technical knowledge. Our knowledge of technology and tools makes testing easier for us.
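One concrete way to gain the controllability described above (a sketch with hypothetical names) is to stand in for an external dependency with a test double, so a test can invoke a failure state on demand instead of waiting for a real outage:

```python
class PaymentGateway:
    """Stand-in for a third-party dependency (hypothetical)."""

    def charge(self, amount):
        raise NotImplementedError("real network call in production")

class FailingGateway(PaymentGateway):
    """Test double that simulates a dependency outage."""

    def charge(self, amount):
        raise ConnectionError("gateway unavailable")

def place_order(gateway, amount):
    # The behaviour under test: degrade gracefully when the dependency fails.
    try:
        gateway.charge(amount)
        return "confirmed"
    except ConnectionError:
        return "queued-for-retry"

# We invoke the failure state easily and on demand, without touching
# the real service or disrupting anyone (sandboxing).
assert place_order(FailingGateway(), 10) == "queued-for-retry"
```

The same seam that makes this controllable by a human tester also makes it automatable: any test double implementing `charge` can be injected.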
Here’s a mnemonic to remember these dimensions: usability, security and other -ilities are equally important; testability is a VIP as well; thus the testability dimensions are VIPS (value, intrinsic, project, subjective).
This checklist, adapted from Ash Winter, can be used for a quick health check of your testability. For each question, answer Yes (+1) or No (0). If your final score is below 8, you are working under unnecessary risk.
- Do developers react positively when a bug is reported?
- Can anyone access a prioritised list of the open bugs?
- Does your team measure critical metrics about the system?
- Is it possible to simulate a failure of a dependency (e.g. a third party)?
- Is it possible to test a specific system behaviour in sufficient isolation?
- Can any team member test an unfinished feature from their own machine?
- Can you set your system into a given state to repeat a test?
- Can any team member create a test environment?
- Can you test in production (e.g. with feature flags)?
- Is it possible to see and query logs from production?
- Does your team have regular contact with the users of the system?
- Does your team maintain a knowledge base on how their system is built and tested?
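The scoring rule above is simple enough to sketch in a few lines. The answers below are made-up placeholders; substitute your own team’s Yes/No responses.

```python
# Hypothetical health check: twelve yes/no questions, +1 per "yes";
# a total below 8 signals unnecessary risk.
answers = {
    "Developers react positively to bug reports": True,
    "Anyone can access a prioritised list of open bugs": True,
    "The team measures critical system metrics": False,
    "Dependency failures can be simulated": True,
    "Behaviours can be tested in isolation": True,
    "Unfinished features are testable from any machine": False,
    "The system can be set into a given state to repeat a test": True,
    "Any team member can create a test environment": False,
    "Testing in production is possible": True,
    "Production logs can be seen and queried": True,
    "The team has regular contact with users": False,
    "A knowledge base on how the system is built and tested exists": True,
}

score = sum(answers.values())  # True counts as 1, False as 0
print(f"Testability health check: {score}/{len(answers)}")
if score < 8:
    print("Warning: you are working under unnecessary risk.")
```

With these placeholder answers the score is exactly 8, so the team sits right at the threshold.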
If you were unsatisfied with the score you got, there are ways to improve it. There are, of course, boring and expensive maturity models on the market to formally evaluate your testability. However, if you prefer something simple and tailored to your team, you can use the Test Improvement Assessment: your team selects which testability criteria are relevant for its context, scores them, and finally agrees on how to improve. If you need hints on practices that can improve your system’s testability, Michael Bolton has a few.