When there’s more than one viable design option to consider, comparative usability testing can help you evaluate competing alternatives. This technique is especially useful in the early stages of a design project, because it allows you to explore options rather than locking yourself into a single approach prematurely.
Everything is multiplied with comparative usability testing: the effort to design and create prototypes, the time to recruit and schedule participants, the work to facilitate the tests, and the amount of information to interpret and communicate. Some big user experience firms can do comparative usability testing across many screens at once, but most of us don’t have the resources to carry out the process at that level.
Fortunately, comparative usability testing can be effective even when done DIY-style. Applied on an appropriate scale, it can be the perfect technique to help you nail a few key features.
Keep It Simple
When working with limited resources, the key to success is to limit the scope of your design and testing.
- One feature or screen at a time. If this is your first comparative test, consider testing just one feature or screen so you don’t get overwhelmed. After a while, you’ll get a sense of how much your team can handle.
- Something with a big impact. You might have time to do this only once or twice in the early stage of your project, so make it count. It only makes sense to test an important feature that will make a significant difference to your users.
- Don’t muddy the waters. For instance, if you’re testing three shopping cart layouts, keep the product details the same so you can focus on what you’re testing: the layout.
- Just a few variations. Again, to keep things simple, you’ll probably want to test just a few design variations. I’d suggest three at most for practical DIY-style comparative usability testing.
You’ll want to design scenarios and tasks as you normally would in a usability test, with this important guideline in mind:
- Use the same tasks for every design variation. The tasks for each design option should be identical or as close as possible. If “enter your birthday” causes problems in design A, you will need to know if it’s also problematic in design B.
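If it helps to keep your sessions organized, the shared-task rule can be made concrete in a simple session plan. This is just a minimal sketch; the design and task names are hypothetical stand-ins drawn from the shopping cart example above:

```python
# One task list, reused verbatim for every design variation, so any
# differences in results come from the designs, not from the tasks.
tasks = [
    "Add a product to your cart",
    "Change the quantity to 2",
    "Enter your birthday",  # if this trips people up in A, check it in B too
]

designs = ["Layout A", "Layout B", "Layout C"]

# Build the session plan: every design variation gets identical tasks.
test_plan = {design: list(tasks) for design in designs}

# Sanity check: no variation's task list has drifted from the master list.
assert all(plan == tasks for plan in test_plan.values())
```

Writing the tasks once and copying them programmatically (rather than by hand) is a cheap way to guarantee they stay identical across variations.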
Interpretation is a bit more complicated with comparative usability testing, and it becomes that much more critical to keep your test results pure, so to speak. To this end:
- Mix the order of the design variations. If design A is always first, couldn’t that affect how people tend to react to design B? Switch the order to level the playing field.
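One simple way to mix the order is to rotate the design sequence for each participant, so every design appears first equally often. Here’s a minimal sketch (function and design names are my own, for illustration):

```python
def counterbalanced_orders(designs, num_participants):
    """Rotate the design list one step per participant (a simple
    Latin-square-style rotation), so each design takes each position
    equally often across the session schedule."""
    n = len(designs)
    orders = []
    for p in range(num_participants):
        shift = p % n
        orders.append(designs[shift:] + designs[:shift])
    return orders

# Example: 3 design variations, 6 participants.
for i, order in enumerate(counterbalanced_orders(["A", "B", "C"], 6), 1):
    print(f"Participant {i}: {' -> '.join(order)}")
```

With six participants and three designs, each design is seen first by exactly two people, which levels the playing field without any manual bookkeeping.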
And the following best practices deserve special emphasis:
- Be objective, and appear that way. Do you have a horse in this race? If so, avoid comments or non-verbal cues that could influence the participant. Better yet, recuse yourself and have somebody else facilitate the tests.
- Have another person observe the tests. This is a best practice for all usability testing but especially when you are comparing multiple alternatives. Those competing options may have some egos or politics around them, so it’s important to build objectivity into your process by having at least one other person observe the test.
Have Realistic Expectations
After a few tests, one of two things can happen:
- A clear winner emerges. This is less likely the more complicated the features and the testing. It’s not a realistic goal for most tests.
- There’s no clear winner. This is the most likely outcome. You may see strengths and weaknesses across multiple design variations. Treat this as an opportunity to think critically about what happened and why, and to decide on next steps.
Remember: It’s Worth the Effort
Comparative usability testing can help you discover the advantages of multiple design variations. It can be invaluable at the earlier stages of your project, a way to ensure that you are pursuing the strongest design possible, rather than wasting time “perfecting” something that will never be as good as a superior alternative. Even on a shoestring budget, comparing multiple options up front can help you perfect the features that can make or break your product.
As of now, there isn’t much written about DIY comparative usability testing (also called “competitive” usability testing). However, a few more formal research studies and other resources are relevant and may interest you.
- Parallel & Iterative Design + Competitive Testing = High Usability (Jakob Nielsen)
- A Comparative Usability Evaluation of User Interfaces for Online Product Catalogs (Ewa Callahan and Jürgen Koenemann)
- Competitive Usability Testing for Better Design and Faster Buy-in (Erin Mayhood & Joseph Gilbert)
What kinds of DIY methods have you used or discovered when working on a limited design budget?