Heuristic evaluation: how it works and where to start

By Brett Lawrence

Heuristic evaluation is a fast and effective way to uncover usability challenges and opportunities ahead of digital investment. Our business consultancy director Brett Lawrence explains how it works and why you should adopt this usability technique today.

What is heuristic evaluation?

Heuristic evaluation (also known as heuristic testing) is a process that aims to find usability problems by examining a product and judging its performance against recognised usability principles. It establishes a backlog of improvements which can be tackled as part of an iterative design process.

Using heuristic evaluation as part of your usability testing can help you:

  1. Resolve issues that cause friction for users.
  2. Establish new ideas to enhance your product or service, and, in so doing, differentiate it from your competitors.
  3. Take a value-driven approach to developing your digital product or service (by ensuring features or changes in your backlog will realise value when you deliver them).

Put simply, it’s a method that involves a small group of internal and/or external evaluators examining your digital product or service, assessing its usability against a set of recognised usability principles (the ‘heuristics’).

Heuristic testing plays a key role in testing assumptions and validating ideas as part of a process of continuous improvement.

Combined with workshops to identify your key goals, measures, and assumptions, it allows you to come up with a list of ideas and experiments that you can then test and validate using A/B and multivariate tests.

It’s a tried-and-tested method that’s helping online retailers such as Astrid & Miyu to boost revenue and ensure return on digital investments.

What are the benefits of heuristic evaluation?

It’s challenging to know how and where to invest budget when you’re developing and evolving a digital product, such as an online shop, especially during times of rapid market disruption and technological change.

It’s easy for digital leaders to fall into common pitfalls, spending time on vanity features (products and functionality that no one wants), so digital investment needs to take return on investment into account.

This is the beauty of heuristic evaluation, a cost-effective way to identify the most compelling opportunities for enhancing the customer experience by testing assumptions, validating ideas, and finding small initiatives that can deliver great value.

Heuristic evaluation and experimentation give you a clear picture of your business goals and a measurement strategy to help determine your direction. They give you fresh ideas and small changes for your backlog that are aligned with those goals, and they help you identify key assumptions to test, with recommendations for small changes based on expert insight and real data.

It’s an approach that jewellery retailer Astrid & Miyu used to maximise wins ahead of Black Friday, helping secure a revenue rise of 144% compared with the previous year, and a 133% growth in transactions.

How does heuristics testing work?

Heuristic evaluation dates back to the early 1990s and was pioneered by usability expert Jakob Nielsen.

He was interested in seeing how you could identify usability challenges and opportunities by having a small group of evaluators examine a digital product and judge its performance against a defined set of usability principles, including things like:

  • Systems should speak the user’s language
  • The user shouldn’t have to remember information across journeys
  • Errors should be prevented rather than just captured

The idea is that, by using a small group, it’s possible to find different problems (from different people with different experiences). Naturally there will be some overlap in what you uncover, but having different evaluators means that you will uncover a significant number of unique problems.
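
To make that concrete, here’s a minimal sketch in Python (the evaluators, heuristics, and issues are invented purely for illustration) showing how findings from several evaluators might be pooled to see which problems overlap and which are unique to one person:

    from collections import Counter

    # Hypothetical findings logged by each evaluator as (heuristic, issue) pairs.
    findings_by_evaluator = {
        "evaluator_1": {("speak the user's language", "jargon on the checkout page"),
                        ("prevent errors", "no confirmation before emptying the basket")},
        "evaluator_2": {("don't rely on recall", "voucher code must be retyped at payment"),
                        ("speak the user's language", "jargon on the checkout page")},
        "evaluator_3": {("prevent errors", "no confirmation before emptying the basket"),
                        ("don't rely on recall", "saved addresses not offered at checkout")},
    }

    # Pool every finding and count how many evaluators reported it.
    all_findings = Counter(
        finding for findings in findings_by_evaluator.values() for finding in findings
    )

    overlapping = [f for f, count in all_findings.items() if count > 1]
    unique = [f for f, count in all_findings.items() if count == 1]

    print(f"{len(all_findings)} distinct problems found in total")
    print(f"{len(overlapping)} reported by more than one evaluator")
    print(f"{len(unique)} surfaced by only a single evaluator")

Even in this toy example, pooling the three sets surfaces problems that no single evaluator found on their own.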

In my own experience, as an Inviqa consultant, I find that smaller groups of four or five evaluators tend to produce the most valuable results, without too much repetition in the findings. My role becomes that of an observer, on hand to guide the process and assist the evaluators where required.

What’s key is to ensure your focus group is made up of people with a diverse range of experiences. If your testing group has been pulled from within your own organisation, for example, you’ll soon realise that the types of challenges identified by an engineer, a marcomms professional, and a designer can differ significantly.

That’s why varying the types of people you use for testing is really valuable. In an ideal situation, your test group would all have domain knowledge to give the testing process grounding in best practice and industry standards, but this won’t always be possible.

In these cases, it’s key to ensure that the observer of the testing process does have this knowledge and can provide answers to questions about the domain from any of the evaluators. Likewise, should evaluators get stuck during their testing, the observer should be on hand to provide guidance in order to maximise the value gained from the session. 

So how does it work? You’ll want to focus the group on assessing a particular site goal or key user journey, for example: ‘Using any of the options open to you, how easily can you locate product X?’

Having this focused remit ensures your group will surface problems or opportunities that all relate to the same journey or feature of your digital product.

How should heuristic testers log their suggestions?

Each member of your group should log the challenges or improvement opportunities they’ve identified using this hypothesis format:

‘If we [proposed change goes here]’...

  • ‘It will result in [assumed outcome goes here]’; and
  • ‘We’ll know we’ve succeeded when [metric changes]’

Here’s an example:

‘If we display more detailed product information on the product page’...

  • ‘It will result in customers making a more informed purchase decision’; and
  • ‘We’ll know we’ve succeeded when the number of unwanted products returned reduces’

Using the hypothesis format encourages you to define the expected value that a proposed change will create. In this way, your digital teams are able to prioritise the improvement opportunities into a backlog of small experiments aimed at driving value back to your customers and to the business.
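
If it helps your team to capture these hypotheses consistently, a lightweight structured record along the lines of this Python sketch can work (the class and field names are my own illustration, not a prescribed format):

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """One improvement idea in the 'if we... / it will... / we'll know when...' format."""
        proposed_change: str   # 'If we ...'
        assumed_outcome: str   # 'It will result in ...'
        success_metric: str    # 'We'll know we've succeeded when ...'

        def as_statement(self) -> str:
            return (
                f"If we {self.proposed_change}, it will result in {self.assumed_outcome}; "
                f"we'll know we've succeeded when {self.success_metric}."
            )

    # The product-page example from above, expressed as a structured backlog entry.
    example = Hypothesis(
        proposed_change="display more detailed product information on the product page",
        assumed_outcome="customers making a more informed purchase decision",
        success_metric="the number of unwanted products returned reduces",
    )

    print(example.as_statement())

Keeping the three parts as separate fields makes it easy to sort the backlog by success metric later, or to export the entries into whichever experimentation tool you use.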

How does heuristic evaluation support experimentation and testing?

Heuristic testing is a crucial part of experimentation and continuous improvement, i.e. the process of making sure your digital product or service continues to support the needs of your customers and the business.

It’s a valuable part of the continuous process of optimising your online shop or other digital product.

Heuristic evaluation sits within the ‘idea generation’ phase of that continuous improvement cycle, which includes activities such as (but not limited to):

  • Heuristic testing with focus groups to discover usability problems and improvements to your site.
  • Competitor and industry analysis to explore how your competitors can inspire you to do better.
  • Hypothesis definition to capture ideas in a simple hypothesis format that creates measurable experiments.
  • Assumption mapping to help prioritise ideas and surface your assumptions about what does and doesn’t work.
  • Ideas prioritisation using benchmark and target metrics data, weighing the certainty that an idea will work against its urgency (see the sketch after this list).
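
As a rough illustration of that last activity, here’s a hedged Python sketch of ranking ideas by weighing the certainty that each will work against its urgency (the ideas, scores, and weighting are invented assumptions, not benchmark data):

    # Hypothetical backlog of ideas, each scored 1-5 by the team for how certain
    # we are that it will move the target metric, and for how urgent it feels.
    ideas = [
        {"idea": "clarify delivery costs on the product page", "certainty": 4, "urgency": 5},
        {"idea": "add a size guide to the checkout",           "certainty": 3, "urgency": 2},
        {"idea": "redesign the navigation menu",               "certainty": 2, "urgency": 4},
    ]

    def priority(idea, certainty_weight=0.6, urgency_weight=0.4):
        # The weighting is an assumption for illustration; agree it with your team.
        return certainty_weight * idea["certainty"] + urgency_weight * idea["urgency"]

    for idea in sorted(ideas, key=priority, reverse=True):
        print(f"{priority(idea):.1f}  {idea['idea']}")

In practice you’d replace the invented scores with your own benchmark and target metrics data, and agree the weighting with your team.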

What’s key to remember is that the findings of any usability testing should not be viewed in isolation, and that a blended usability testing approach always generates deeper, more validated insights.

Let your data speak – reviewing your analytics will help uncover challenges and opportunities in the customer journey, and bring additional, actionable insight to your heuristic evaluation. But speaking with your customers, both at the research and testing phases, will uncover user experience improvements that analytics alone cannot.

Heuristic evaluation is a key part of experimentation and ongoing product enhancement, but remember that it must go hand-in-hand with a continuous process of gaining customer insight, reevaluating your digital roadmap, and revisiting your strategic direction.  

Why you should start using this method

Heuristic evaluation is a quick, easy, and cost-effective way to identify usability challenges and improvement opportunities without needing sophisticated testing systems. 

It examines a user interface, judging its compliance with best practice usability principles in a way that identifies quick wins, is easy to validate, and can be done by pretty much anyone (guided by an experienced facilitator).

For these reasons I’d urge you to start using this approach today as part of your continuous improvement initiatives. And if you need support with any area of your product development, from research and design, to testing and prototyping, do speak with the team here at Inviqa.