Recently at nForm, we’ve been considering adding unmoderated usability tests to our list of research techniques. While we frequently do both in-person and remote facilitated tests, there are some benefits to unmoderated sessions. This article explores the pros and cons of unmoderated testing and takes a look at some tools that can help.
Benefits of unmoderated testing
- Easier for interested users to participate: The asynchronous nature makes it possible for somebody to participate at any time, including in the evenings when our facilitators are home. We often have participants cancel or reschedule at the last minute; with unmoderated testing, they can just fit it into their day whenever they have time.
- Cheaper to get a large number of participants: While the cost of testing increases linearly with each participant in any test that requires a facilitator to be present, unmoderated testing does not get more expensive with each participant. Analysis takes about the same amount of time whether there are 10 responses to each question or 100.
- Saves facilitator time: Instead of observing each session, facilitators can use that time to analyze results. It also avoids the lost time that occurs while waiting for no-shows.
- Faster to finish studies: With unmoderated testing, it is possible to hit the goal number of participants within a few days. With moderated testing, this would be impossible due to the back-and-forth of scheduling and the fact that facilitators can only work so many hours in a day.
- Automated performance stats (depending on tool): While stats such as success/fail/abandon rates, average time, and average clicks per task can be calculated manually by the facilitator either during or after the test, having them automated by the system is faster and more accurate.
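To make those stats concrete, here's a minimal sketch in Python of how a tool might compute them from raw session records. The record fields and task names are invented for illustration, not taken from any particular tool's export format.

```python
from statistics import mean

# Hypothetical session records: (task_id, outcome, seconds_on_task, clicks)
sessions = [
    ("find-pricing", "success", 42, 5),
    ("find-pricing", "fail", 95, 12),
    ("find-pricing", "abandon", 130, 9),
    ("contact-form", "success", 30, 3),
    ("contact-form", "success", 55, 6),
]

def task_stats(records, task_id):
    """Aggregate outcome rates, average time, and average clicks for one task."""
    rows = [r for r in records if r[0] == task_id]
    n = len(rows)
    outcomes = [r[1] for r in rows]
    return {
        "success_rate": outcomes.count("success") / n,
        "fail_rate": outcomes.count("fail") / n,
        "abandon_rate": outcomes.count("abandon") / n,
        "avg_seconds": mean(r[2] for r in rows),
        "avg_clicks": mean(r[3] for r in rows),
    }

print(task_stats(sessions, "find-pricing"))
```

Doing this by hand for every task is exactly the kind of tedious, error-prone bookkeeping the tools automate.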
Disadvantages of unmoderated testing
- Adds guesswork to answering “why?”: The beauty of facilitating a usability test is in the facilitator’s ability to ask mid-task or follow-up questions to clarify a user’s needs, confusion, or interpretation.
- All “wrong” answers appear equal: Being able to observe a user as she completes a task allows a facilitator to see how the user felt as she selected her final answer. A user who confidently selects an incorrect area of the site is much different from one who makes a selection because he is frustrated and tired of looking, or one who is confused about the task itself or about how to interact with the unmoderated tool. Similarly, some users are marked successful only because they made a lucky guess.
- Cannot gather raw user quotes: I’ve often found that direct user quotes can be more convincing and memorable than a pass/fail pie chart. Most tools provide video and audio recordings of the sessions, but I doubt that people are very chatty when there is nobody listening or responding. Even when sessions are facilitated, it takes skill to get people to say their thoughts out loud. I’d like to be wrong about this, so I’m willing to experiment with audio recording, and I’d welcome anybody’s experience of audio recordings being useful. Some tools also allow users to submit text in open-ended questions at the end of the test or after every task, but these quotes wouldn’t carry the same weight as a candid remark made while frustrated mid-task.
When unmoderated testing would be most useful
- When curious about multi-click pathways (instead of trying to push impression testing, a method very useful for capturing first-clicks, beyond its abilities)
- When on a tight timeline
- When it is important to include many stakeholders, beyond what time could allow with moderated tests
- When you can get enough participants to A/B test
When not to use unmoderated testing
- When tasks will vary based on the user (e.g., sometimes we start sessions with an “interview” to get a sense of their needs and past experience, then select from our tasks or even create tasks on the spot based on their answers)
- When it will only be possible to recruit a handful of users (for facilitated testing, 5-10 people per user group will usually do; for unmoderated, probably 20 people would be a minimum, so that the trends can add up to hint at answers to “why” questions)
- When you want to test content interpretation as well (unless the tool also allows you to ask text-box questions to quiz comprehension after a content-related task)
Having not yet tried unmoderated usability tests, I’m still of the opinion that a live facilitated test (whether remote or in-person) provides more of a “why” than could be expressed through the stats + recordings combo of unmoderated tests. However, I think the benefits of unmoderated tests are fairly compelling for certain projects. I’m also curious about what a combination could look like: run unmoderated tests with the majority of participants, then run two or three users through the tasks that produced confusing results to uncover that “why?”
I have researched some tools and have summarized the most popular options below. If anybody has experience with these or other tools, I’d love to hear it!
Cost: Unmoderated testing requires an enterprise account. I was quoted at $5400 for 6 months or $8400 for 12 months.
- The UX Suite includes card sorting, impression testing, usability testing, and a survey. Most impressively, it allows you to combine multiple evaluation methods into one test. This sounds quite useful because we’ve been experimenting with more mixed-method testing lately. We occasionally run into the limits of running only one type of test to evaluate too many types of things, but running two separate tests tends to overload participants. (NB: It’s unclear whether they offer only closed card sorts.)
- It lets you ask a question after each task. The question that appears in their sample is, “How difficult was that task?” I’m not sure I see much value in asking that question, but it’s unclear whether you’re allowed to write your own question.
- It gives you code for a pop-up participation invitation on the site you’re testing.
- The analysis tools include click-stream analysis, though it is unclear whether you can look at each user’s path separately.
- While Usability Tools offers session recording for regular visits to the site as part of a different suite of tools, it doesn’t seem to offer usability test recording. I could be missing something, since audio/video recording seems to be a fairly standard feature on other sites. I don’t value audio recording too much, but I do think video recording might be nice to review, especially for tasks on which participants do particularly poorly.
- In addition to the stats that other sites give, Usability Tools provides maximums and minimums for time-on-task and pageviews, along with the most common fail page, abandon page, and first click per task. This is convenient, though these figures could easily be computed in Excel from the data other sites export.
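For example, here's a rough sketch of pulling those same extras out of exported raw data with a few lines of Python instead of Excel. The field names and values are invented; real exports would differ per tool.

```python
from collections import Counter

# Hypothetical exported rows for one task; field names are invented.
export = [
    {"seconds": 42, "outcome": "success", "end_page": "/pricing"},
    {"seconds": 95, "outcome": "fail", "end_page": "/features"},
    {"seconds": 130, "outcome": "fail", "end_page": "/features"},
    {"seconds": 61, "outcome": "abandon", "end_page": "/blog"},
]

times = [row["seconds"] for row in export]
# Tally where failed participants ended up, to find the most common fail page
fail_pages = Counter(row["end_page"] for row in export if row["outcome"] == "fail")

print("max time-on-task:", max(times))   # prints 130
print("min time-on-task:", min(times))   # prints 42
print("top fail page:", fail_pages.most_common(1)[0][0])  # prints /features
```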
Cost: Either $350/project or $410/month ($158/month for smaller companies or non-profits)
- It provides both path analysis (per participant) and a click stream (path aggregate for all users)
- It also lets you ask a question after each task
- In analysis, it allows for easy filtering based on performance (e.g., by time on task or number of clicks), but can’t seem to do any filtering based on questionnaire answers
- Though it has many ways to invite participants (including sending out a link, a pop-up on your website, and recruiting from a pool), it seems you can use only ONE of these invitation methods per test.
- Like Usability Tools, Loop11 provides maximums and minimums for time-on-task and pageviews, along with the most common fail page, abandon page, and first click per task.
Cost: $19k/year for the most basic plan
Other comments: Well, might as well stop there.
Cost: $35/test with bulk pricing available
- It doesn’t seem to provide aggregate stats, which is one of the main points in favour of unmoderated testing.
- It has a session max of 20 minutes.
- You can ask only up to four open-ended questions.
- Unless you buy the enterprise account, it seems that participants come from TryMyUI’s pool, so you couldn’t test with your own users. This would be a major problem for projects about intranets or other secure sites, or for sites that depend on users possessing a certain background or knowledge base.
Cost: $49/test for the first ten tests, $99/test after that
- It has a session max of 15 minutes.
- It also restricts you to their panel of participants unless you upgrade to a Pro account (for which pricing is unavailable).