Heuristics (Toolbox #2)

Photo by Patrick Tomasso on Unsplash

tl;dr Heuristics are shortcuts to make decisions or pick what to test next.

πŸ† This post was featured in Software Testing Weekly #100

This is part of my free testing course, focused on teaching you the fundamentals of testing 😉


If mnemonics act as “memory shortcuts”, then heuristics are “decision shortcuts”. These mechanisms allow people to function without spending too much time thinking about their next action.

We use heuristics under conditions of uncertainty (…) to rapidly solve problems or make decisions. When you consider the number of decisions people make every day, it makes sense for our brains to use shortcuts to help us quickly assess the different options and decide.

— Richard Bradshaw and Sarah Deery

Expressions like “rule of thumb”, “educated guess”, or “intuition” are all examples of humans using heuristics. So consider this rule of thumb, one that you might recall from your student years: “I don’t know the contents of the next exam, but the teacher already mentioned this specific subject three times, so it must be important.”

This example is useful to demonstrate two key limitations of heuristics:

  • All heuristics are fallible. They simplify our context by assuming what is uncertain and ignoring what is contradictory or irrelevant. Given this incomplete context, our decisions will be fallible, but there are situations where acting is more important than precision — and that’s when heuristics are useful.
  • All heuristics can turn into biases. Prolonged usage of the same heuristics has a negative impact on you and your testing. Without awareness of bias, you will eventually miss or misinterpret information, which creates gaps in your testing.

Despite their fallible nature and the potential biases they cause, heuristics are very useful (…) to find solutions that are “good enough” (…) in scenarios where it’s impractical to find the optimal solution to a problem.

— Richard Bradshaw and Sarah Deery

Heuristics provide patterns that can be useful in some situations, some times. (…) It’s useful to treat heuristics with a certain amount of distrust.

— Anne-Marie Charrett

As with any other tool, it’s important that you understand the advantages and limitations of heuristics, so that you can wisely choose when and which heuristics to apply in your context.

Reliance on an oracle can lead you to the wrong conclusion. A decision rule that is useful but not always correct is called a heuristic.

— Cem Kaner

Oracles are considered heuristics; however, not all heuristics are oracles. The FEW HICCUPPS heuristic is an oracle because it tells you how to decide whether something is right or wrong; the Goldilocks heuristic is not, because it only gives you hints about what to test.

When I test a software application there are a number of things that I know are worth trying. These are my test heuristics. Heuristics are simply experience-based techniques for problem solving and discovery.

— Katrina Clokie

When you have doubts about what to test next, there are a number of heuristics you can use to generate new test ideas. With time and experience you will develop your own set of test heuristics.


You will frequently come across heuristics in the form of checklists, cheat sheets, mnemonics, oracles or models. If they serve as cognitive shortcuts to solve problems or make decisions, they’re heuristics.

Once you learn about heuristics, it’s time to practice them in different contexts. (…) When using heuristics you should reflect on what worked, what didn’t and why. If a heuristic is not working for you, try another, modify it or make your own.

— Richard Bradshaw and Sarah Deery

💡 Test ideas

There are many heuristics you can use to generate test ideas. Elisabeth Hendrickson compiled a cheat sheet with the most common ones. One of the simplest is the Goldilocks heuristic (named after the bedtime story), which focuses on the concepts of “too big”, “too small”, and “just right”. For more testing opportunities, check this list from Erik Brickarp.

Let’s say you want to test a new field that collects the age of a user. Inspired by the Goldilocks heuristic, you can observe the behaviour of that field when you type a value that is too big (999), too small (-1), and just right (30).
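As a minimal sketch, those three Goldilocks checks could be captured in an automated test. Here `validate_age` is a hypothetical stand-in for your application’s real validation logic, and the accepted range of 0–130 is an assumption for illustration:

```python
def validate_age(value: int) -> bool:
    # Hypothetical validator standing in for the real age-field logic;
    # the range 0-130 is an assumed plausible human age range.
    return 0 <= value <= 130

# Goldilocks-inspired cases: too big, too small, just right.
cases = [(999, False), (-1, False), (30, True)]
for age, expected in cases:
    assert validate_age(age) == expected, f"unexpected result for age={age}"
```

In practice you would run the same three cases against the real field or API, and extend them with the exact boundary values your requirements define.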

James Bach’s Heuristic Test Strategy Model (HTSM) contains more tips on how to explore your product (SFDIPOT) and its non-functional properties (CRUCSPIC STMP).

This presentation from Karen Johnson demonstrates how you can use heuristics like RCRCRC (ideas about what to check on regression testing) or FEW HICCUPPS (oracles focused on consistency) in practice. To discover other mnemonics, check this cheat sheet.

Given that the time you have to test is limited, you might want to prioritise your testing by “finding important problems first” and “maximising diversity”. These and other heuristics help you focus on using different techniques to reveal different types of critical problems.

If you like learning while having fun, Lena Pejgan created a card game called “Would Heu-risk it?”. It contains a total of 30 cards spread across tools (things testers use to increase the value of their testing), traps (common mistakes and anti-patterns) and weapons (pieces of wisdom gained from experience).

🕶 Biases

Bias is an irrational judgement or subconscious inference made from (historical) data available to us.

In testing, biases cause you to miss or focus too much on a specific behaviour or data.

— 99 second intro to biases in testing

For example, when you miss something because you are too focused on another thing, that’s a form of bias called “inattentional blindness”. To see this in practice, put yourself to the test with “The Monkey Business Illusion”.

There are many more biases that limit or weaken your testing. When you are conscious of these biases, you can minimise their negative impact on your testing. Otherwise, biases create gaps in your testing that give bugs an opportunity to go unnoticed until it’s too late.

In order to counter the bias effect of heuristics, Anne-Marie Charrett recommends that you:

  • Diversify your actions — e.g. try a smaller resolution, use keyboard shortcuts
  • Diversify your test data — e.g. pick a different user, generate random data
  • Diversify your oracles — e.g. show what you found to a different stakeholder
  • Diversify who is doing the testing — e.g. rotate perspectives and expectations
  • Diversify your test environment — e.g. use a different machine or OS, test in production

Katrina Clokie has a few additional suggestions:

  • Change the order of your test approach to break a routine
  • Seek test ideas from non-testers outside your agile team (e.g. UX, Ops)
  • Pair with a tester in another team to see a different test approach first-hand
  • Experiment with a tool that you haven’t tried before
  • Ask for constructive feedback about your testing

Alan Richardson challenges how much testing you have done with the following three questions. He also suggests a few words to fill in the blanks: Questioning, Usage, Analysis, Exploration, Reasoning, Experimentation (QUAERE).

  • “Have I performed enough ____?”
  • “Has my ____ been good enough?”
  • “Did my ____ cover everything it could?”

Tom Bartel describes a bias he calls the “curse of knowledge”:

The knowledge that you have gathered becomes natural to you. You become “unconsciously competent”, so you have a harder time explaining it to somebody else. A warning sign is when you start your sentences with “As we all know, …” or “I probably don’t need to explain that…”

This is how you create unsafe environments.

Buster Benson did an amazing job at collecting, explaining and summarising the most common biases that affect us. Here’s a brief (or visual) summary of the four groups of biases.

  • Too much information — Our brain uses a few tricks to pick out the bits of info that are most likely going to be useful in some way.
    • We are drawn to details that confirm our own existing beliefs.
    • We notice flaws in others more easily than flaws in ourselves.
    • Repetition, changes in patterns, funny, or bizarre things grab our attention.
  • Not enough meaning — The world is very confusing. We connect the dots, fill in the gaps with stuff we already think we know.
    • We find stories and patterns even in sparse data.
    • We imagine things and people we’re familiar with as better.
    • We think we know what others are thinking.
  • Need to act fast — We’re constrained by time and information, yet we can’t let that paralyse us.
    • We favour the immediate, relatable thing in front of us over the delayed and distant.
    • We’re motivated to complete things that we’ve already invested time and energy in.
    • We prefer to preserve our status in a group, and to avoid irreversible decisions.
  • Not enough memory — We keep what is most likely to prove useful in the future.
    • We discard specifics to form generalities.
    • We store memories differently based on how they were experienced.