Reduce Test User No-Shows

You are sitting at your work desk, tense and frustrated. The laptop camera is on. Theoretically, you’re 15 minutes deep in user testing. You’re a moderator, except there is no one to moderate. Staring at your screen, you recall the preceding events in exact order:

  • 8:45 AM: “Let’s get ready for the next user test.”
  • 8:50 AM: “Do I have water and coffee ready? Oh, the toilet…”
  • 8:55 AM: “Let’s get in, the test participant might be early.”
  • 9:00 AM: Twitching your legs, waiting for the participant…
  • 9:02 AM: “Could be a no-show, but probably tech issues.”
  • 9:05 AM: “Should I call? Let’s wait another 5 min.”
  • 9:10 AM: Call the participant; nobody answers.
  • 9:15 AM: “OK. It’s a no-show.”

At this point, the frustration boils over even more. Plus, you have another call in 30 minutes… Hopefully.

Well, conducting usability tests can indeed test your patience. To minimize frustrating occurrences, we bring you six ways to reduce user no-shows. 

Schedule sessions 1–3 days prior

Users tend to forget or underestimate the importance of showing up as time passes.

Changing the way you schedule tests can already increase your show-up rate.

Booking a test session 10 days in advance may feel like the best way to organize your busy schedule. We understand that.

However, the longer the waiting period, the bigger the chance of a no-show. The recency effect suggests that your users will remember their most recently scheduled appointments more clearly.

Therefore, try to schedule the tests 1–3 days in advance.

Still, maybe your tight schedule doesn’t allow such fast-paced, short-notice administration…

If you must schedule 10 days prior, at least send reminders to your testers. Send the place and time immediately after the booking call, a message a week and a day before, and, most importantly, one on the day of testing. That should improve the chances of your users showing up.

P.S. Sending them a personal message might be a better option than a scheduled email.
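If you automate those reminders, the cadence above is easy to compute. A minimal sketch in Python, assuming the week-before / day-before / day-of schedule suggested here (the function name and example date are just for illustration):

```python
from datetime import date, timedelta

def reminder_dates(session_date: date) -> list[date]:
    """Dates to send reminders for a session booked well in advance:
    a week before, a day before, and on the day of testing."""
    offsets = [7, 1, 0]  # days before the session
    return [session_date - timedelta(days=d) for d in offsets]

# Example: a session booked for June 15th
print(reminder_dates(date(2022, 6, 15)))
# [datetime.date(2022, 6, 8), datetime.date(2022, 6, 14), datetime.date(2022, 6, 15)]
```

Feeding these dates into whatever scheduling or email tool you already use is left open on purpose; the cadence, not the delivery channel, is the point.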

Know your audience

Users behave differently across various fields.

However, if you boil down your audience to a single subgroup, you will notice shared behaviors, interests, and repeating patterns. These will indicate how your audience responds to appointed meetings.

The cancel and no-show rate, in general, will be affected by the following factors:

  • Profession: Are they entrepreneurs, chefs, hairdressers, or software developers?
  • Age: Elderly users may have technical issues, younger users tend to forget
  • Income level: Incentives and compensations should vary across income levels
  • Lifestyle: Are they young parents, adventurers, or stay-at-home folks?

For example, you could schedule a test 10 days in advance with a young entrepreneur only to find out they had bigger fish to fry. You could just as well book a call with a digital nomad only to realize they’re offline most of the time.

One more important component to consider is how you find your participants.

Do you recruit friends, colleagues, or online community members? Or do you recruit from user pools on user testing websites?

To fine-tune your appointment strategy, you have to know your audience. It will increase the likelihood of them showing up for tests.

Give proper incentives

Yes, you should spend more resources. But not necessarily on incentives.

As with surveys, people respond to usability test invites for three reasons:

  • Altruism: wanting to help researchers or to support the findings
  • Incentives: monetary compensation, gifts, lottery, etc.
  • Shared interest: in the topic, or a preference for the organization running the research

Paying users more money to increase the participation rate doesn’t always go well. It could bring results, but only if you recruit users who have no other reason to show up for tests.

Because money is not the only motive.

A better approach would be to find participants who are likely to be attracted by all three reasons above.

First, try to recruit participants who show altruistic traits, want to help, or have done voluntary work before. Your friends and colleagues also fit within this box.

Research in social psychology suggests that these people will help you more than once if they see it as a rewarding experience. Honest praise of their contribution, or sharing the final design that came to life thanks to their input, is likely to result in a long-term collaboration.

As for the incentives, no evidence supports the claim that bigger rewards lead to higher participation. Just try to be fair and compensate everyone equally with hourly pay being the norm. Be cautious before rewarding users based on performance as it might diminish the study’s integrity and go against certain ethical guidelines.

If anything, make a note of energetic, enthusiastic participants for the next time you recruit. Remember: you compensate them for participating, not for performing.

If you reward monetarily, you could try paying half of the total amount in advance. While risky and best backed by a contract, this will give your participants a sense of obligation.

Lastly, the shared interest.

To put it simply: if you are a fan of sci-fi, you probably showed up for the new Matrix 4 movie. If you’re not, you turned a blind eye to it.

Well, expect others to behave the same way. Recruit those who share an interest in the topic. They tend to show up.

Before we meet, could you please just…

Ask your participants to sign an NDA before the session. Or better, ask them to fill out a short survey or provide their contact information.

This might come across as counterintuitive. Because, why scare the users away?

Well, the point is to add friction.

If your testers are not willing to sign a paper or answer five questions, they are more likely to never show up. By asking this small favor, you are simply double-checking their willingness to participate. The sunk cost fallacy suggests that your users are more likely to show up if they’ve already invested time in the process.

To keep document signing simple and secure, you can use services provided by DocuSign or AdobeSign.

In case you go for a survey, don’t make it too complicated. Up to five easy-to-answer questions are more than enough.

No-mercy database scrutiny

To run more tests, you need more reliable people. And a database (or a sheet).

The first step is to mark users who didn’t show up for the test session as no-shows. You can also mark those who canceled.

Second, if the same user ghosts you with a no-show 1–2 times, it’s best to permanently remove them from your tables.

Keeping the score will allow you to better optimize your invite process for future tests.
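Keeping that score doesn’t require anything fancy. A minimal sketch, assuming a simple list of (email, outcome) records and a two-strike limit; the field names, emails, and the limit itself are assumptions you’d adapt to your own sheet or database:

```python
from collections import Counter

NO_SHOW_LIMIT = 2  # strikes before removal (assumption, tune to taste)

def prune_participants(records):
    """records: list of (email, outcome) pairs, where outcome is
    'showed', 'canceled', or 'no-show'.
    Returns the set of emails still safe to invite."""
    no_shows = Counter(email for email, outcome in records
                       if outcome == "no-show")
    everyone = {email for email, _ in records}
    return {e for e in everyone if no_shows[e] < NO_SHOW_LIMIT}

records = [
    ("ana@example.com", "showed"),
    ("bo@example.com", "no-show"),
    ("bo@example.com", "no-show"),
    ("cy@example.com", "canceled"),
]
print(prune_participants(records))  # ana and cy remain; bo is out
```

A spreadsheet with a COUNTIF column over the outcome field achieves the same thing if you’d rather not script it.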

Have users in backup

Regardless of what you do, you can’t avoid no-shows forever.

For each usability test study, you should have backup participants. A good rule of thumb is one backup for every four participants you invite: if your show rate is above 75%, a 4:1 ratio should keep you safe.

Backup participants should be your most reliable assets. So you don’t leave them waiting at their desks, make sure to let them know you’ll need them only in the case of a no-show.
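The ratio above is just expected no-shows rounded up. A quick sketch of that arithmetic (the function name is illustrative, and the show rate is whatever your own score-keeping tells you):

```python
import math

def backups_needed(invited: int, show_rate: float) -> int:
    """Expected number of no-shows to cover, rounded up."""
    return math.ceil(invited * (1 - show_rate))

print(backups_needed(8, 0.75))  # 2
print(backups_needed(4, 0.75))  # 1, i.e. the 4:1 ratio from the text
```

If your historical show rate is worse than 75%, the same formula simply tells you to line up more backups.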

Written by

Luciano Kovačević

Content Writer @ Solid User Tests