Usability testing: Creating usable, functional and enjoyable user experiences

Dec 21, 2022|By atomix
Read time: 8 mins

Why is our website underperforming? How do we stand out from the competition? How can we improve user experience and satisfaction for those using our products and services?

These are familiar problems we hear from our clients - all of which can be addressed with usability testing. But what's it all about?

There is a wide range of user research methods for exploring topics and evaluating designs throughout the cyclical design process. In this article, we'll take a deep dive into usability testing and how it can help you not only create great user experiences for your people but also deliver on your business and digital strategy goals.

What is usability testing?

Usability testing refers to evaluating how usable a product or service is, by testing it with representative users. Breaking that down a bit further:

  • Usability refers to how easily and intuitively people can use your product or service.

  • Testing implies that some form of evaluation is taking place. To do this effectively, testing and reporting on usability requires an objective framework to measure against and draw insights from, one supported by metrics, principles and heuristics. More on this later.

  • Representative users are the people who use your product (or service). They can be internal (e.g. employees or partners) or external (e.g. customers and beneficiaries).

Usability testing also seeks to understand the human behaviour underpinning use of the design - what motivates, drives and affects it. It is primarily qualitative in nature: sessions are carried out one-to-one between a researcher and a participant to highlight problems and opportunities in a design and to draw out deeper insights into users' behaviours, expectations, frustrations and goals.

Stakeholder watching a usability testing session from the observation area

What is the importance of usability testing?

Poor usability of a product or service means a poor user experience, and most likely results in reduced engagement or, worse still, abandonment of use altogether.

In digital products this becomes very visible, as negative user experiences translate into reduced conversion and performance metrics: lead enquiries, bookings, purchases and content engagement go down, while bounce rates, cost-per-conversion, abandoned shopping carts and time on task go up.

While it’s challenging to define the exact return on investment from usability testing, there are plenty of published statistics supporting the need to test for usability and to invest in UX design to fix usability problems. In an article on UX statistics from the Baymard Institute, some demonstrated costs of poor usability and user frustration are:

  • 90% of users have stopped using an app due to poor performance.

  • 88% of online consumers report that they are less likely to return to a site after a bad experience.

  • Mobile users are five times more likely to abandon a task if the website isn’t optimised for mobile.

Not only does poor usability hurt revenue, customer trust and loyalty, it also increases development and maintenance costs. Catching usability issues earlier in the design and delivery process means less developer time and money spent fixing production-level code. It also enables early pivoting on, or culling of, less valuable features and content, ensuring more relevant and meaningful experiences are delivered to customers.

A structured reporting model for effective decision making

Why does usability testing require an objective framework to measure against and draw insights from? Creating strong insights based on industry best practice (and avoiding fluffy, opinion-based ones) is key to building trust within the team, gaining stakeholder buy-in and advocacy, and designing experiences that work.

What does such a framework look like? Firstly, you can use usability metrics such as:

  • Task success/task success rate

  • Time to task completion

  • Task effort

  • Error occurrence/rate

  • Conversions

  • Completion rate

  • Task satisfaction and confidence levels

  • System Usability Scale (SUS)
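
Of the metrics above, the System Usability Scale has a fixed scoring formula: ten statements answered on a 1-5 agreement scale, where odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the total is multiplied by 2.5 to give a score out of 100. A minimal Python sketch of that calculation:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    responses on a 1 (strongly disagree) to 5 (strongly agree) scale."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd items are positively worded (score - 1);
        # even items are negatively worded (5 - score).
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# A participant who fully agrees with every positive statement and
# fully disagrees with every negative one scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Averaging `sus_score` across participants gives a single benchmark figure you can track between rounds of testing.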

Secondly, use heuristics (essentially rules, principles or best practice guidelines) to score a participant's performance against.

There are many sets of heuristics you can use for your framework; Jakob Nielsen's ten usability heuristics are a widely used set that we've drawn from and adapted for our UX projects.

For example, you might define the heuristic ‘Consistency and standards’ as: “A user should not be confused (or left with questions) because words, interactions and actions are used inconsistently across the [website]. Does the experience follow clear industry and platform conventions?”

In your usability test, if you observe your participant struggling to find the login button or search function on the website, there’s a good chance the design is breaking this heuristic, as these functions are almost always found in the primary navigation bar at the top of the page.

This is closely related to the principle ‘Don’t make the user think’, in that any time we distract the user on their journey, we introduce the possibility of losing their train of thought and consequently, losing the conversion.

Gain actionable usability insights.

If you’d like any advice on running usability testing we're here to help.

Types of usability testing

How you go about running your usability testing is highly dependent on your research goals, participant availability and location, time and budget, what assets you have to test on and what resources you have available.

Below are the different types of usability testing and some key considerations for each:

Qualitative vs quantitative usability testing

Is your test performed with a few participants to gain a deeper understanding of the ‘why’? That's qualitative. When larger numbers of participants are used to achieve statistical significance and confidence, the test is quantitative. It's one of the first questions we ask when scoping research projects.

We have lots of discussions about sample sizes internally and with our clients, as it remains one of those largely misunderstood (and tricky) ‘it depends’ scenarios.

The simple rule of thumb we go by is that testing with as few as five users can unearth the majority of usability problems. We then apply a second rule: if multiple distinct personas are present, we increase the sample size to include 3-4 participants per persona, allowing trends in the data to emerge.
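
The five-user rule of thumb traces back to Nielsen and Landauer's model, in which the share of usability problems found with n users is 1 - (1 - p)^n, where p is the average probability (around 0.31 in their studies) that a single user encounters any given problem. A quick sketch of the curve, assuming that average value of p:

```python
def problems_found(n_users, p=0.31):
    """Expected share of usability problems uncovered by n_users under
    the Nielsen/Landauer model; p is the average probability that one
    user hits any given problem (~0.31 in their published data)."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    # With 5 users the model predicts roughly 84% of problems found,
    # which is where the "five users" rule of thumb comes from.
    print(n, round(problems_found(n), 2))
```

The curve flattens quickly, which is why running several small rounds of testing (fix, then retest) tends to beat one large round.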

On the other hand, if your team requires statistical significance and confidence that your findings are representative of a population, you can use a sample size calculator to determine the number of participants you need in your study.
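
Most sample size calculators for this purpose implement the standard formula for estimating a proportion, n = z² · p(1 - p) / e². A minimal sketch of what such a calculator computes (the parameter names here are our own):

```python
import math

def sample_size(z=1.96, margin_of_error=0.05, proportion=0.5):
    """Minimum sample size for estimating a proportion:
    n = z^2 * p * (1 - p) / e^2, rounded up.
    Defaults: 95% confidence (z = 1.96), +/-5% margin of error,
    and the worst-case assumption p = 0.5."""
    n = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())  # 385 participants for 95% confidence, +/-5% margin
```

Note how quickly the required n grows as the margin of error tightens; this is why quantitative usability studies cost so much more to recruit for than qualitative ones.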

Unmoderated vs moderated usability testing

Is the person doing the test on their own (unmoderated) or accompanied by a moderator/ facilitator (moderated)?

Moderated testing has the benefit that the facilitator can query participants and gain detailed responses, ensuring the test results are high quality and the data is valid. However, it is time-intensive for the facilitator (and usually a second person taking detailed notes of the session). It also requires participants and moderators to align schedules, which can be challenging for some audiences (e.g. shift workers and professionals).

One thing we love about moderated testing is that it allows more flexibility in the test assets used. For example, if we don't have a working clickable prototype accessible via a web or testing platform link, we can create a static paper prototype with high-fidelity content and visuals to use instead. This gives us rapid insights and the ability to iterate in the right direction earlier in the design process, with little outlay on expensive prototype (or, worse still, production code) development costs.

Usability testing with a paper prototype

Unmoderated testing gives participants more scheduling flexibility and offers greater reach, both geographically and for larger sample sizes. However, you're often left with the frustration of poor participant engagement and no opportunity to probe further into observations and comments made during the session.

Remote vs in-lab usability testing

You can bring participants into the business and conduct sessions in a ‘lab’ setting: a quiet room where the moderator and participant carry out the session, plus an observation area where stakeholders can watch, either through one-way glass or via a screen.

This type of testing can save time for the researchers (no travel, no negotiating unknown testing environments) and reduce technical and connectivity issues. It also adds control and rigour to the testing environment, allowing streamlined management of observing stakeholders and post-test synthesis. The downside is that it can introduce unwanted biases such as the Hawthorne effect, social desirability and other cognitive biases.

Remote testing involves the participant carrying out the session in an environment contextual to their typical use (e.g. in a supermarket, where they’ll be using a budgeting app). It provides better insight into actual behaviour, pain points and goals surrounding the product (and/or service). It is often easier to schedule because the session can be organised in or close to the participant's home, in a familiar place; this saves participants time (and can reduce the incentives required) but can add time and cost for the researchers.

In-lab is typically run moderated whereas remote can be moderated or unmoderated.

When to do usability testing

You can apply usability testing at any time throughout the discovery and design process. Ideally it happens at every stage, but if budget is limited and you have to choose, invest in the design and testing phase: conduct usability tests on clickable prototypes with enough detail (content, information, interaction and visual design) for participants to interact with them naturally while you observe their behaviour.

Here’s where you can typically find the various types of usability research:

Discover & define

Run a benchmark usability test on the current product (e.g. website), competitors or best-in-class examples.

Design & test

Paper based or clickable prototype testing of designs (content, visual, information & interaction design).

Build & test

Functionally test the production build, ensuring that usability is incorporated into user acceptance testing.

Launch & beyond

Ongoing usability testing of the live product in market to assist in business and digital growth strategy.

Proof that real insights and data drive business success

At Atomix, we use real data and insights from the people who interact with your brand, products and services to drive your digital and customer experience strategy.

Take a look at some of our case studies, where usability testing has deeply informed strategy for our clients.

If you’d like any advice on how you can get started with usability testing or build on your existing practices, we’re here to help.

Email us at hello@atomix.com.au for a free consult with our UX design team.

Implementing practical and effective usability testing is our speciality.
