The value of usability and prototype testing

February 13, 2019   |   UX Design
Nudge Digital

The value of usability and prototype user testing really shouldn't be underestimated: researching and collating feedback are the bedrock of creating a great digital product. Whilst there are reams of valuable approaches to gathering initial research (including focus groups, interviews, strategic analysis, etc.), usability and prototype testing should be conducted once a project is underway to evaluate the approaches adopted so far and to justify the decisions being made.

  • Why? When it comes to validating the product, user testing is crucial. Whether the outcome is good or bad, the findings have value. If you find out your site isn't currently working, you now know you need to change it. If you find that users love it, brilliant! You've now got proof. At the end of the day, it's users that decide; pleasing them is the ultimate test.
  • Who? When selecting participants for your user testing, consider the range of your target audience and what you deem will make your site a success. Make sure your KPIs can be achieved through the audience you're targeting in the first instance, and then recruit participants around this.
  • Where? It's a great idea to consider the testing location in relation to your users' context; think about how they would use the product day-to-day and try to reflect this in your study. If your users are likely to favour desktop over any other device, focus your testing on desktop.
  • When? Is there a best time to test? Throughout the project, or once it's finished? The answer is both, but in slightly different ways. During the project you'll be testing to evaluate your site, with a view to justifying your decisions and the resulting conclusions; after the site is live, you'll be looking to measure the success of what you've created and to continually improve your proposition. It's a great idea to test your initial wireframes before implementing designs, so that your findings don't have large implications for the development work and you avoid having to do the same work twice!

Qualitative vs quantitative

Setting particular tasks or objectives for users to complete, and assessing across the board who passed, struggled with or failed those tasks, is a great way to gather quantitative data. Qualitative feedback is a great way to round off a testing session: ask individuals to comment on what they feel works well and what doesn't, and to give feedback on their experience as a whole. Incorporating both approaches will give your findings more structure and make it easier to take action points forward, as in the sketch below.
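
If it helps to picture the quantitative side, here's a minimal sketch (in Python, with made-up task names and results purely for illustration, not real study data) of how you might tally pass/struggle/fail outcomes across participants:

```python
from collections import Counter

# Each entry records one participant's outcome for one task:
# "pass", "struggle" or "fail". Task names and results are hypothetical.
results = {
    "Find the contact page": ["pass", "pass", "struggle", "pass", "fail"],
    "Complete the checkout": ["pass", "struggle", "struggle", "fail", "fail"],
}

for task, outcomes in results.items():
    counts = Counter(outcomes)
    total = len(outcomes)
    print(f"{task}: {counts['pass']}/{total} passed, "
          f"{counts['struggle']}/{total} struggled, "
          f"{counts['fail']}/{total} failed")
```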

Attitudes vs behaviours

It's important to consider the difference between what users say they like or dislike and what they actually do. If a user says something isn't clear but still navigates to it easily, their comments might not be the most valuable insight into the site's weaknesses.

Results

Unsure how to interpret feedback, and what to do next? Adopting a Must/Should/Could/Won't (MoSCoW) approach is a great way to turn findings into actionable points you can take forward. If every user identified the same issue, re-thinking the current approach is most certainly a 'must'.
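
As a rough illustration only, here's one way you might sort findings into those buckets based on how many participants hit each issue; the thresholds and example findings below are hypothetical assumptions, not a prescription:

```python
# Sort usability findings into Must/Should/Could/Won't buckets based on how
# many participants hit each issue. The thresholds are illustrative only.
def prioritise(issue_counts, total_participants):
    buckets = {"Must": [], "Should": [], "Could": [], "Won't": []}
    for issue, count in issue_counts.items():
        share = count / total_participants
        if share == 1.0:        # every participant hit it
            buckets["Must"].append(issue)
        elif share >= 0.5:      # at least half of participants
            buckets["Should"].append(issue)
        elif share > 0:
            buckets["Could"].append(issue)
        else:
            buckets["Won't"].append(issue)
    return buckets

# Hypothetical findings from a five-person study.
findings = {
    "Checkout button hard to find": 5,
    "Filter labels unclear": 3,
    "Footer links too small": 1,
}
print(prioritise(findings, total_participants=5))
```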

Alternative methods

Aside from the prototype and usability testing I've covered in this post, there are reams of alternative methods out there too; there really is so much that can be done to ensure your digital offering is the best it can possibly be. From A/B and multivariate testing, to card sorting, to preference testing, there are plenty of options to cater for a whole wealth of requirements and research objectives. If you're unsure of the best approach for your new build or your existing site, speak to the Nudge team so we can advise on what will return the most value for you and your business.