I’ve recently sat in on my first usability testing sessions. After watching 17 users over 4 days being paid for their opinions, I thought I’d share what happened – plus some tips and lessons to make sure it’s worthwhile.
What do you test?
To figure this out…
- What are the key metrics you currently measure?
- Are they the right ones?
- What are the key journeys that influence that metric?
- What is key to improving your bottom line?
For example, if all you care about is getting customers from the homepage to the product page, and finally to buying a product, then that’s pretty simple. But what if the number of customers returning products is high? What if you have a high drop-out rate on a particular page and you have no idea why? Suddenly things aren’t as simple and you need to delve deeper.
If all we wanted to do was increase the number of customers going from one page to another, we could simply add a CTA that says “FREE MONEY”. Conversions will increase, I promise! 🙂
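To make the “delve deeper” point concrete, here is a minimal sketch of measuring drop-off at each step of a funnel. The page names and visit counts are invented for illustration; real numbers would come from your analytics tool.

```python
# Measure drop-off between consecutive funnel steps.
# (Step names and visit counts are made up for illustration.)

funnel = [
    ("homepage", 10000),
    ("product page", 4000),
    ("checkout", 800),
    ("order placed", 600),
]

for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    drop = 1 - next_visits / visits
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

A table like this quickly shows which transition (here, product page to checkout) deserves a closer look in testing.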
The answers to the questions above will shape what you get the users to test.
How do you test?
Are you able to ask customers to test your real website (preferred), or does it need to be a QA/dev environment? If you use QA, make sure credentials are set up where needed, and test, test, test before you set customers loose on it. I’d recommend having customers test the real website, so they get the most up-to-date, working experience. QA might not be a mirrored version, and asking customers to “log in” to a QA environment might throw them off.
Do you test on mobile or desktop? What do your customers use?
TIP: Use GA to find out. If one device doesn’t account for 90%+ of traffic, have a couple of the testers use the least-used device, so you have insights from more than one device.
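That 90% rule of thumb can be sketched as a small allocation function. The session counts, the 90% threshold, and the “two testers minimum” figure are illustrative assumptions, not a prescription.

```python
# Allocate testers across devices based on session share.
# (Session counts and thresholds are hypothetical.)

def allocate_testers(sessions_by_device: dict, total_testers: int) -> dict:
    """If one device has 90%+ of sessions, give it all the testers;
    otherwise reserve a couple of testers for each minor device."""
    total = sum(sessions_by_device.values())
    dominant = max(sessions_by_device, key=sessions_by_device.get)
    if sessions_by_device[dominant] / total >= 0.9:
        return {dominant: total_testers}
    allocation = {d: 2 for d in sessions_by_device if d != dominant}
    allocation[dominant] = total_testers - sum(allocation.values())
    return allocation

print(allocate_testers({"mobile": 12000, "desktop": 4000}, 17))
```

With a 75/25 mobile/desktop split, 17 testers would come out as 15 on mobile and 2 on desktop.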
Don’t assume the users will be willing to use their own details! During one of the testing sessions I sat through, one user refused to enter any of her details as she had recently been a victim of fraud. She was incredibly wary – rightly so. Unfortunately she was too busy reading the fake details we supplied for the form to give us any useful feedback on how the form worked.
TIP: When you recruit for the testing, make this one of the rules – the tester must be happy to use their own details – but any payment a journey requires should come from you or the agency. It isn’t fair to ask the user to pay. If they are signing up for a free trial, make sure it’s cancelled after the testing.
If you want users to test your ecommerce journey, how will they do this? Should they buy a specific product, or are you happy for them to find a product their own way?
If the user needs to purchase something, how will they pay?
TIP: You could set up a fake product priced at £0.10 and have the user pay with your card. They then get to go through the real experience at no cost to themselves.
Make sure to cancel any fake orders once the testing is complete, and ensure the customer hasn’t been signed up to any newsletters.
Writing a script
A script is what the invigilator (the person who takes the user through the usability testing) uses to lead the conversation; it sets out the goals and journeys you want covered.
A script might start by asking users about themselves, to get them feeling relaxed. Ask them about their experience with XYZ (XYZ = your goals), e.g. do you shop online much? Where do you shop? Which websites do you use often, and why? Which ones do you like, and why?
You could ask them if they’ve heard of your brand.
Ask them to navigate to your website. Ask for first impressions, then lean into the goals/journeys you want them to take, e.g. “How would you go about finding a dress?”. Ask them if they would trust your brand. Does the user click on the social media buttons? One user did during the testing I was part of, and said how important it was to her that a company posted often (not necessarily that it had a big following) – just being active on social media created a sense that the company is still going, and therefore more reputable.
If the site is ecommerce, is the delivery/refund content important to them? Do they think it’s positioned well enough? What are the blockers (friction points) to purchasing something?
Ask them to continue through the checkout and speak their thoughts aloud as they do so. Is it confusing? Are you asking too many questions during the checkout? Do you really need the user’s phone number?
The invigilator will be experienced, so may let the user proceed without much talking. This lets the user focus on the task instead of being distracted by questions. They also need to take a step back and not put answers into users’ mouths: for example, instead of “How would you search for a dress?”, ask “How would you find a dress?” – the former plants the idea of using the search bar (if there is one).
- Set up a WhatsApp group with the invigilator so you can make suggestions throughout the test without actually interrupting it.
- Split the testing into two separate sessions, a few weeks apart. This is less risky than doing it all in one or two days: if something isn’t working, you’ll have time to redo it, and it gives you time to gather feedback from the colleagues who sit in on the testing with you. This was invaluable for us. During the first batch of testing, users found obvious things to fix, so we fixed them before the second batch and didn’t come up against the same issues again. We could also then measure those changes – whether any user found them difficult, or whether they helped the journey (they did in one instance!).
- Have extra tasks available – you’ve paid for an hour of the user’s time, so you might as well fill it as much as you can.
How do you recruit users similar to your real-life customers?
Using GA to look at your demographics only gets you so far. You should know who your customers are, or at least who your preferred target customers are. Write these down as a list, covering things such as:
- proficiency using a computer
- MOST IMPORTANT: they carried out X task in the last year – this should match your task/journey. E.g. if your goal is selling clothes online, make sure your target customers have purchased clothes online in the last 12 months, ideally on your most popular device.
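The screener criteria above amount to a simple filter over your recruitment pool. A rough sketch, with made-up candidate records and field names (your recruitment agency’s data will look different):

```python
# Filter a recruitment pool against the screener criteria above.
# (Candidate records and field names are invented for illustration.)

candidates = [
    {"name": "A", "bought_clothes_online_last_12m": True,  "device": "mobile"},
    {"name": "B", "bought_clothes_online_last_12m": True,  "device": "desktop"},
    {"name": "C", "bought_clothes_online_last_12m": False, "device": "mobile"},
]

def passes_screener(candidate: dict) -> bool:
    # The carried-out-X-task-recently rule is the most important filter,
    # combined here with the most popular device ("mobile" is assumed).
    return candidate["bought_clothes_online_last_12m"] and candidate["device"] == "mobile"

recruits = [c["name"] for c in candidates if passes_screener(c)]
print(recruits)
```

Only candidate A satisfies both criteria here; candidate B could still be useful as a least-used-device tester, per the earlier tip.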
What happens after the testing?
Now the real fun begins!
It depends what you signed up for. We worked with an agency who afterwards put together a deck of priority-assigned feedback, which they presented to me and my colleagues (including product owners, developers and directors). They also included video snippets with each piece of feedback – a handy reminder for those who attended the testing, and important context for those who didn’t.
I had already created a CRO roadmap (including ideas, A/B testing ideas and finished tests), so I added each idea/issue as a row in a Google Sheet. Each idea was prioritised with the business owner using the PIE method: Potential, Importance, Ease.
PIE – Potential, Importance, Ease:
- Potential: How much potential is there for making this change? What’s the traffic like? What’s the potential of improving the conversion rate by X%?
- Importance: How important is it that you make this change – is it currently stopping users from converting?
- Ease: Work with the developers to understand how easy or difficult a change is.
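PIE scoring lends itself to a simple spreadsheet-style calculation: score each idea for Potential, Importance and Ease, then rank by the average. The ideas, the 1–10 scale, and the scores below are all invented for illustration.

```python
# Rank CRO ideas by their average PIE score.
# (Idea names and 1-10 scores are made up for illustration.)

ideas = {
    "Shorten checkout form":  {"potential": 8, "importance": 9, "ease": 6},
    "Move delivery info up":  {"potential": 6, "importance": 5, "ease": 9},
    "Rewrite product titles": {"potential": 4, "importance": 3, "ease": 7},
}

def pie_score(scores: dict) -> float:
    """Mean of the three PIE dimensions."""
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

ranked = sorted(ideas, key=lambda name: pie_score(ideas[name]), reverse=True)
for name in ranked:
    print(f"{name}: {pie_score(ideas[name]):.1f}")
```

A plain average weights the three dimensions equally; you could just as easily weight Importance more heavily if conversion blockers dominate your roadmap.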
I would always recommend you A/B test everything – so you know exactly whether a change makes an impact.
If you aren’t able to do this, make sure you monitor metrics after the change goes live. Depending on traffic, I would check the following day, after a week, after a month, and after two months. Two months of traffic ensures you get at least one business cycle within the data. Make sure you compare everything against the comparable period.
TIP: For example, if you send something live on a Monday and then on Tuesday the conversion rate drops, that might not be from your change – it could be a regular Tuesday dip.
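That tip can be sketched as comparing against the same weekday a week earlier, rather than the previous day. The dates and conversion rates below are invented for illustration.

```python
# Compare a post-launch metric against the same weekday one week
# earlier, to avoid mistaking a regular weekday dip for an effect.
# (Dates and daily conversion rates are made up for illustration.)

from datetime import date, timedelta

conversion_rate = {
    date(2023, 5, 1): 2.4,   # Monday: change went live
    date(2023, 5, 2): 2.1,   # Tuesday after launch
    date(2023, 4, 25): 2.0,  # Tuesday the week before
}

def weekday_delta(day: date) -> float:
    """Change in conversion rate vs the same weekday one week earlier."""
    return conversion_rate[day] - conversion_rate[day - timedelta(days=7)]

print(round(weekday_delta(date(2023, 5, 2)), 2))
```

Here Tuesday looks like a drop versus Monday, but against the previous Tuesday the rate is actually slightly up, which is the comparison that matters.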