Lo-fi usability testing – Part 3: Ten top tips
This content was originally published more than ten years ago and is archived here for preservation.
More up-to-date content is available on this blog.
We’ve already covered in the previous articles what usability is, why you need to test it, and how to prepare for your usability tests. In the thrilling* conclusion to this lo-fi usability testing trilogy, we get down to the nitty-gritty of how to run the tests and how to interpret and act on the results.
* It all depends on your perspective
Tip #1 – Your test users must be representative of your target personas #
As we’ve already mentioned, you need five representative users. If you have several distinct personas for your product, you should strive to find five users for each target persona. Strive for five – that’s a good way to look at it.
Tip #2 – Get each user to perform the same tasks, ideally in their own environment #
Observing the user in their own environment can often reveal usability problems with your product that you might never have expected.
- They might reach for their smartphone to browse a website by default rather than the laptop.
- They might be attempting to use the product one-handed while on the phone.
- They might accidentally hit ‘back’ in the browser because they’re using a 5-button mouse with extra side buttons.
- They may struggle to click on something using a touchpad on their laptop.
- Their keyboard layout may be set for a different language to the one you were expecting.
It’s not the end of the world if you can’t run the usability test in the user’s own environment, though. To recap from the previous article on preparation, find a quiet room to run the test where you won’t be interrupted. Your objective is to put the user at ease, so if they’re outside their usual environment, reiterate that you’re not testing them – they’re helping you to test the product, and they can’t get the test “wrong”.
Tip #3 – Get the user to commentate on what they’re doing #
Getting the user to commentate on their thoughts, actions and expectations allows you to ‘read’ their mind. However, it is unnatural for the user and a little difficult, if not slightly embarrassing in some cases. Put the user at ease by giving a quick example yourself and explaining why it’s helpful.
It’s also worth noting that users will tend to go quiet as soon as they have to think a little harder about something they’re doing. Gently remind them to keep commentating by asking open questions such as “what are you thinking?” or “what are you looking for?”
Tip #4 – Observe how users interact with the product #
One of the main advantages of carrying out usability tests in person, as opposed to remotely, is that it gives you the fantastic opportunity to read their body language closely. This can often be more revealing than their actual commentary.
There are many other ‘tells’ that may indicate a user is having a problem and that your product’s usability could be improved. A few things in particular to take note of are when they:
- hunt around with the mouse;
- click on things to see what they do, even if they don’t need to;
- write things down;
- jump out of the product to look for help on Google.
Tip #5 – Let users make mistakes – only step in if they’re completely stuck #
This is a tricky balance to strike. On the one hand, allowing a user to get well and truly stuck gives you a chance to test out how well the product guides the user to resolve the problem (if at all). On the other hand, you don’t want to leave the user to struggle for too long otherwise they’ll become frustrated.
If you as the moderator are familiar with the product, you have to suppress the urge to solve the user’s problem for them. This can be frustrating for you! Remember that what is obvious to you as a relatively expert user may not prove to be so for the test user.
If you do step in to help, only get them just back on track then let them continue from there unassisted.
Tip #6 – Take copious notes #
Note down things they did as well as things they missed. Observe body language and how long it takes them to find or do something. Record their misunderstandings, annoyances, and anything unexpected.
You’ll quickly begin to see recurring patterns once you’ve tested a few users. Highlight these for later.
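One lightweight way to highlight those recurring patterns is to tag each observation and tally the tags across sessions. This is just an illustrative sketch – the note tags, the five-session structure and the “two or more users” threshold are assumptions for the example, not part of the method described above:

```python
from collections import Counter

# Hypothetical per-session observation tags (names are illustrative).
session_notes = [
    ["missed save button", "hunted with mouse", "searched Google for help"],
    ["missed save button", "confused by 'mapping' term"],
    ["confused by 'mapping' term", "missed save button"],
    ["wrote down order number", "hunted with mouse"],
    ["missed save button"],
]

# Tally each observation across users to surface recurring patterns.
tally = Counter(note for notes in session_notes for note in notes)

# Highlight anything seen by two or more of the five users.
recurring = [(note, n) for note, n in tally.most_common() if n >= 2]
for note, n in recurring:
    print(f"{n}/5 users: {note}")
```

Anything that floats to the top of a tally like this is a strong candidate for the “recurring problems” list when you come to interpret the results.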
Tip #7 – Ask why users did something in a particular way #
Probe expectations: participants usually have expectations about what will happen before they click on something. Well-timed questions asking them what they expect can reveal a lot about their perception and understanding of the product.
Investigate mistakes: it’s always a good idea to follow up mistakes with a gentle question to understand why, especially when the participant doesn’t realise they’ve made a mistake.
Ask the user open questions (i.e. avoid yes/no questions) and answer their questions with questions – forcing the user to give more feedback:
Test User: “What happens if I click here?”
Observer: “What are you expecting to happen if you click there?”
Also, avoid questions about their likes and preferences. This tends to add bias and turns users into instant design experts. This is not that helpful if you’re primarily interested in how easy the product is to use.
Tip #8 – Understand how users interpret what is presented to them #
Another hindrance to good usability is when users simply fail to understand what is being presented to them, either because it’s visually confusing or because the words or icons used don’t make sense to them. One product I tested talked about mapping field names to columns in a table of data. At least two users were confused by the term “mapping”, expecting some kind of geographical map to be involved.
Test User: “There’s lots on this page.”
Observer: “Which aspects stand out to you?”
Test User: “What does this text mean?”
Observer: “What does it mean to you?”
Tip #9 – Ask the users to rate how easy each task was on a 1-5 scale #
It’s important to get users’ feelings about a task just after they’ve finished experiencing it. This is because a user will tend to take a more moderate view of a past event the longer it has been since it happened. You want to appreciate and record a user’s true frustration or joy at the time.
“On a scale of 1 to 5 where 1 is really difficult and 5 is really easy, how would you rate that last task?”
If the score is low, establish what is most important to fix by asking:
“What would make that a 5?”
Tip #10 – Ask the users to rate the overall experience on a 1-5 scale #
You may get some interesting results as users put the whole experience into context. It is perfectly possible that the user has struggled with a couple of the individual tasks, but rates the overall experience positively.
“You’ve done a few different things, some you found easier than others. On a scale of 1 to 5, where 1 is terrible and 5 is excellent, how would you rate the overall experience?”
If the score is low, ask:
“What would make that a 5?”
Interpreting the results #
- Identify workflows which score badly (3 or less on a scale of 1 to 5)
- Identify recurring problems or user misunderstandings
- Note where the workflow is disrupted, often signified by the user:
- writing down information which is re-entered later on (or copy & paste)
- stopping to read the manual, ask a question, look up on Google
- having to remind themselves where to go next by clicking around
- repeating actions unnecessarily
- Describe unexpected user actions, odd ways of doing things
Assess whether your users’ needs still match up with the problems you believe your product is meant to solve.
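The per-task ratings from Tips #9 and #10 make the first step – spotting badly scoring workflows – easy to mechanise. A minimal sketch, assuming five users and made-up task names and scores (the “3 or less” cut-off is the one used above):

```python
from statistics import mean

# Hypothetical 1-5 ease ratings from five users for each task.
ratings = {
    "sign up":       [4, 5, 4, 5, 4],
    "import data":   [2, 3, 2, 3, 2],
    "map fields":    [1, 2, 3, 2, 2],
    "export report": [5, 4, 5, 4, 5],
}

# Flag workflows scoring badly: an average of 3 or less on the 1-5 scale.
needs_work = {task: mean(scores)
              for task, scores in ratings.items()
              if mean(scores) <= 3}

# Worst-scoring tasks first, as candidates to prioritise for fixing.
for task, avg in sorted(needs_work.items(), key=lambda kv: kv[1]):
    print(f"{task}: average {avg:.1f} - prioritise for fixing")
```

Comparing the same table across rounds of testing also gives you the “compare the same tasks each round” check described below.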
Acting on the results #
- DON’T implement suggestions from users (generally)
- DO be inspired by user suggestions to design a solution to the underlying problem
- ONLY change what doesn’t work, leave the rest alone
- Repeat the testing after each set of changes
- Don’t always use the same testers – fresh eyes are more objective
- Don’t fall into the trap of thinking you know better than the users do
- Be consistent – compare the same tasks each round until the usability improves to an acceptable score
Further reading #
- Jakob Nielsen – Why You Only Need To Test With 5 Users
- Userfocus – Articles & Resources
- Clay Shirky – Meetup’s Dead Simple User Testing
- Kathy Sierra – Featuritis vs. The Happy User Peak
- Scott Sehlhorst – User-Centered Design and Bridging the Canyon of Pain