How to run a usability test: 7 proven tips for effective user testing

Feb 24, 2023 | By atomix
Read time: 5 mins

Poor usability of a product or service means a poor user experience, and most likely results in reduced engagement and, worse still, abandonment altogether.

Usability testing provides an objective framework for measuring usability and for generating the insights and recommendations that improve your products and services, building stakeholder advocacy and helping you design experiences that work.

Getting usability testing into your design and delivery workflow (and keeping it there) can feel overwhelming - but it shouldn't be. Here are some tips on how to do great usability testing and build awesome experiences, no matter what stage you're at in your product or service development process.

1. Scope out the requirements for the research

Start with a research brief. This doesn't need to be an epic process, but it's necessary to gain alignment and agreement with your stakeholders and team on the important questions:

  • What are the research objectives?
    This is by far the single most important part to agree on up front. Firstly, are you looking for deep qualitative insights to inform the designs, or for a high level of statistical significance and confidence in the usability of the design? Secondly, what specific parts of the design are you focussing on? For example, creating an account (or logging in), purchasing a product or service, submitting a form, finding (navigating) and responding to a specific piece of content, or any other specific actions within your product or service.

A clear understanding of what the study is trying to uncover will determine the research method, participant engagement, timing and budget.

  • Who are you testing with?
    Make sure you engage people who use your product or service - or, if you haven't launched yet, people who are representative of those likely to use it. If not, the insights you gain may send you off on a wild (and expensive) goose chase. More on this later.

  • What assets can you use for the testing?
    What will you be able to put in front of people to observe and gain feedback on? Is it an existing live product or service in market? A prototype (paper-based or a clickable/interactive version)? The production code in development? The further along the design and delivery lifecycle you are, the more detail (content, information, interaction and visual design) you'll have, letting you observe behaviour as participants interact with the asset naturally.

  • What is the deadline & budget?
    At the end of the day, we'd all like to run a regular program of research to back up our design decision-making, but it's often not feasible. Time and budget limitations shouldn't be a barrier; they just mean slicing up the methodology more cleverly (without compromising rigour) to run effective sessions within the budget and timeframe you have.
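If it helps to keep the brief lightweight and repeatable, here's a minimal sketch of how you might capture those four questions as a structured record. It's purely illustrative - the fields and example values are our assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class ResearchBrief:
    """A lightweight usability research brief - fields are illustrative."""
    objectives: list[str]   # what the study is trying to uncover
    participants: str       # who you're testing with, and why them
    test_assets: str        # live product, prototype, or dev build
    deadline: str           # when findings are needed
    budget_aud: int         # what you have to spend

# Example brief for a hypothetical checkout study
brief = ResearchBrief(
    objectives=["Can users complete a purchase without assistance?"],
    participants="5 existing customers recruited from our mailing list",
    test_assets="Clickable prototype of the new checkout flow",
    deadline="End of sprint 14",
    budget_aud=3000,
)
print(brief.objectives[0])
```

Whether you keep this in a doc, a ticket or anywhere else matters far less than agreeing on the answers before testing begins.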

A participant and facilitator testing a mobile application design in a usability testing session

2. Create and communicate the test plan

Don't get bogged down in lengthy, formal test scripts. Instead, create a flexible guide that covers the important stuff you want to communicate to the team:

  • Research goals (what is/ isn’t being investigated).

  • Method overview.  

  • Participant overview. 

  • Tasks and scenarios (including a checklist of observations for each task that you don't want to miss as this aids research rigour).

  • Metrics being collected and/or principles you're going to use during analysis and reporting (see the sketch after this list).
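As a rough illustration of what "metrics being collected" can look like in practice, here's a minimal sketch - the task and session data are entirely made up - that computes two common usability measures, task completion rate and median time on task:

```python
from statistics import median

# Hypothetical observations from five sessions for one task:
# (completed_task, seconds_taken)
sessions = [
    (True, 94),
    (True, 121),
    (False, 240),  # participant abandoned the task
    (True, 87),
    (True, 150),
]

completion_rate = sum(done for done, _ in sessions) / len(sessions)
median_time = median(secs for _, secs in sessions)

print(f"Task completion rate: {completion_rate:.0%}")  # 80%
print(f"Median time on task: {median_time} s")         # 121 s
```

Simple descriptive measures like these are usually enough at typical usability-test sample sizes; save the inferential statistics for larger, quantitative studies.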

3. Determine the task scenarios to explore

A task scenario is the action that you ask the participant to carry out within the session on the test asset(s). Scenarios need to be realistic, encourage the participant to take an action, and avoid giving too many hints on how the product or service should be used. For example, "You want to send flowers to a friend - arrange a delivery to their address" works better than "Use the search bar to find the rose bouquet and click Add to cart".

Try not to cover off too much in one session - maybe 2-3 tasks and observations. We've found after many years of running usability tests that shorter sessions make for better attendance, collaboration and advocacy from key team members (devs, content creators and product managers alike) as well as wider stakeholders. Not to mention reduced participant and researcher fatigue.

You'll want to include the tasks in your test plan, so that everyone is clear on how you are framing the activities and what is and isn't being observed. You can read more into the detail around writing task scenarios here.

4. Frequency over quantity

Remembering that you only need to test with around 5 users to start gaining sound feedback on a design, try to schedule a regular cadence of investigation that fits in with your delivery cycles. For example, if you're in an Agile team, schedule usability testing in the latter half of each sprint. If it's a waterfall delivery, test towards the end of each design milestone, e.g. wireframes, user interface design and interaction design at varying levels of fidelity.
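That 5-user guideline traces back to the well-known Nielsen/Landauer problem-discovery model, in which the share of usability problems found by n participants is 1 - (1 - L)^n, where L is the proportion of problems a single participant uncovers (commonly estimated at around 31%). A quick sketch shows why returns diminish past five:

```python
# Nielsen/Landauer problem-discovery model: share of usability
# problems found by n participants, assuming each participant
# uncovers L of them (commonly estimated at ~31%).
L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n} users: {found:.0%} of problems found")

# 5 users already surface ~84% of problems; doubling to 10
# only lifts that to ~98% - which is why frequent small rounds
# beat one big study.
```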

Frequent insights mean less rework over time and a more usable product or service in the end.

5. Talk to the right people

Ensure your participants are representative of your target audience: firstly, in that they meet the use case of your product or service (i.e. they have the intent and/or desire to use it), and secondly, in that they fit the right demographic and psychographic criteria.

Participant location & availability - where and how you can access them - will determine how you run your study. For example, your product may be sold nationally, so testing only with participants who can come into your office may compromise important geographical insights. In this case, run moderated remote sessions to engage with them.

Ensure your research practice is inclusive, gaining wider perspectives from people of diverse backgrounds with distinct experiences, needs and histories of identity. For example, include participants of varying ability status, age, economic class, gender identity and sexual orientation, race or ethnicity, religion, and national origin or citizenship status.

6. Always dry run

No matter your level of expertise, we always suggest running through your task scenarios with a colleague ahead of the first session. This will help you refine your line of questioning and the assets you're going to test with, and sense-check whether your metrics are feasible. Include some role play of challenging participants - for example, folks who are negative or opinionated, who don't talk, or who over-talk - to flex your listening and response skills.

7. Sense check your research validity, reliability & objectivity

This means ensuring that all aspects of running the research include practices that lead to quality, trustworthy findings. Seek to recognise your biases and how to avoid them during research, and build research rigour into your process.

Part of this process entails advocating for and facilitating more inclusive research through sound principles and practice. A great article from the research platform Dscout provides a collection of best practices for inclusive design.


Our approach at Atomix is to leverage genuine data and insights from individuals who engage with your brand, products and services to drive your digital and customer experience strategy.

If you want to know more about the nuances of implementing practical usability testing - the how, why and key considerations for methods - take a look at our full article on Usability testing: Creating usable, functional and enjoyable user experiences.

If you’d like any advice on how you can get started with usability testing or build on your existing practices, we’re here to help.

Email us at hello@atomix.com.au for a free consult with our UX design team.

Implementing practical and effective usability testing is our speciality.
