The challenges of remote testing
In this exclusive book extract from Remote Research, Nate Bolt and Tony Tulathimutte review the major challenges you'll face when planning, conducting, and presenting a remote research study.
This excerpt is Chapter 10 of Remote Research by Nate Bolt and Tony Tulathimutte. For remote user research tools and detailed articles, also see remoteresear.ch.
You’ve seen how remote research deals with the problems of traditional in-person research (geographical distance, recruiting, task validity, etc.), but it raises plenty of its own problems, too. We’d like to wind down this discussion with a review of the biggest challenges of adopting remote research methods: the doubts, concerns, and pains in the neck that seem to come up in study after study, even for seasoned practitioners.
Legitimacy
Remote research is still in its adolescence, and skeptical prospective clients often ask us, “Who else does remote research? If it’s so great, why haven’t I heard of it?”
As we mentioned at the beginning, lab research has run the show for a long time mostly because that’s the way things have always been done. In spite of this, plenty of big-name corporations have happily taken the plunge with remote research, including (just from personal experience) Sony, Autodesk, Greenpeace, AAA, HP, Genentech, Citibank, Wikipedia, UCSF Medical Center, the Washington Post, Esurance, Princess Cruises, Hallmark, Oracle, Blue Shield of California, Dolby, and the California Health Care Foundation, to name but a scant few. Automated tool sites boast Sony Ericsson, Motorola, YouTube, REI, eBay, Cisco, Bath & Body Works, Orbitz, Hyundai, and Continental Airlines as customers.
If you’re still not sure about it, we recommend looking through an exhaustively documented study (complete with full-session videos and highlight clips) of first-time Wikipedia editors that we conducted for the Wikipedia Usability Initiative project.
It includes both lab and remote sessions with identical goals, so it’s a good comparative case study. If you have any reservations, you should watch the sessions and decide for yourself.
Not seeing the users’ faces
We have always been confident that seeing a user’s face isn’t necessary for gleaning insight from a user study, but clients, stakeholders, and some UX researchers can get very persnickety about this issue. If a person isn’t physically present and being videotaped or sitting behind glass, they wonder, “How can you really research them?” or “How can you develop empathy for someone you can’t see?”
Our firm belief is that onscreen user behavior and think-aloud comments, and not users’ facial expressions, provide the real value of the study because you want to learn what the users do and are able to do on the site, not how they feel about it. Even if we concede that participants’ emotional responses can bring in valuable insights about how they use a site, you’d be surprised at how much feeling can be just as effectively inferred from vocal tone, sighs, pauses, inflections, and interjections (“Ah-ha!” “Oh, man!”), not to mention the content of what they’re saying. Most people are veteran telephone users and have learned by now how to express themselves vocally.
Maybe in a few years, video chat will be commonplace, and not seeing the users’ faces probably won’t even be an issue anymore. For now, however, rest assured that not seeing the user’s face just isn’t that big a deal.
Technology failures
Moderated remote research uses lots of separate technological components, any of which can malfunction for many reasons: a computer with multiple programs running on it, a microphone headset, an Internet connection, Web recruiting tools, third-party screen sharing solutions, recording software, two phone lines, IM clients, and so on. Then there are all the things that can go wrong with the users’ computer and phone setup. Users can be on a wireless connection, an unstable wired connection, or a cell phone; international phone lines can be muddy; their computers might not be able to install or run screen sharing.
One or two things going awry amounts to annoying delays: interruptions to an ongoing study, glitches in the recordings, difficulty hearing users, and so on. At worst, having several or all of these things fail can stop a study cold until the problems are resolved.
UX researchers aren’t necessarily tech experts, so if you want to stave off these problems, the best thing to do is test everything at least a day before the study starts, referring to a checklist. The table below is a starter checklist; modify it to suit the tools you use to conduct your research, and see the sketch after the table for one way to script the routine.
| Problem | Solution |
| --- | --- |
| Screen sharing is interrupted or malfunctions | Check whether your internet connection is stable. Check whether your user's internet connection is stable; if possible, have them switch to a wired connection. If it's still not working, try a different screen sharing tool. |
| Recordings come out corrupted, glitchy, or truncated | Test the recording tool. If test recordings don't work, check the recorder settings to ensure they're set to the correct format and quality. If test recordings work fine, most likely the computer was running too many processes during the recording; close unnecessary programs, and if the problem persists, you may need to upgrade your computer with more RAM. Also check that you have sufficient hard drive space to store the recordings. For corrupted files, use a video editing program or converter to attempt to convert the file to a different format; for certain file formats, there are also utilities that can fix minor problems. |
| Phone connection malfunctions | Check your phone connection. Use an alternate phone line, if one is available. Ask users if they're using a cell phone, and whether there's an alternate line you can call back on, at another time if necessary. |
| Microphone headset/sound input malfunctions | Check whether the headset is muted. Check the mic input volume in the system settings. If you're using a VoIP service like Skype, check the software settings to make sure it isn't muted there. |
| Internet connection seems choppy or breaks | The problem could be either your connection or the user's. If it's yours, postpone the study and switch to any alternate internet connection you may have in your office; as a last resort, call your internet service provider to see whether the service has gone down. If it's the user's connection, ask whether they're on a wireless connection and, if so, whether they can switch to a wired one. If that doesn't work, try to reschedule the session for a time when the user will be at a different computer. |
| User's firewall doesn't permit the screen sharing tool to function | Switch to an alternate, preferably browser-based, screen sharing solution. If none is available, try to reschedule the session for a time when the user will be at a different computer. |
| Recordings have no sound | Check the system sound input volume and settings (make sure it's not muted) and the recording software settings. |
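If you'd rather not run through those checks from memory on a busy morning, the checklist is easy to script. Here's a purely illustrative Node.js/TypeScript sketch that walks a moderator through each item and collects the failures; the entries simply restate the table above, and you'd swap in your own.

```typescript
// Illustrative pre-flight checklist runner for the day before a study.
// Run with Node.js; the items mirror the troubleshooting table above.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

const checklist = [
  "Screen sharing connects and stays stable on a test call",
  "A short test recording plays back with sound, at the right quality",
  "Primary and backup phone lines both work",
  "Microphone headset is unmuted and input volume is set",
  "Internet connection is wired and stable",
  "Enough free disk space for a full day of session recordings",
];

async function runChecklist(): Promise<void> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const failures: string[] = [];
  for (const item of checklist) {
    const answer = await rl.question(`${item}? (y/n) `);
    if (!answer.trim().toLowerCase().startsWith("y")) failures.push(item);
  }
  rl.close();
  if (failures.length > 0) {
    console.log("\nFix before the first session:");
    failures.forEach((f) => console.log(` - ${f}`));
  } else {
    console.log("\nAll clear.");
  }
}

runChecklist();
```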
Regardless of what happens, stay calm. The absolute best way to handle technical problems is to set everyone’s expectations ahead of time (yours, your team’s, and those of anyone who’s observing) that there’s always a chance issues will come up, and that it’s a normal part of the process. Make sure observers have their laptops or some poetry to read so they don’t sit around idly when a user’s cell phone dies.
In spite of your planning, it’s always stressful when observers are watching you, a live participant is waiting on the other line, and a stupid technological problem interrupts everything, even though you’re positive you tested it like a million times. Take a few seconds to step back and put it in perspective: life goes on. A hard-breathing, hyperthyroidal moderator will spoil a session even if all the technology starts working again.
Not as inexpensive as you’d think
Remote research is often represented as a discount method, a way of shaving costs, and people are often surprised to find that the cost of a remote moderated study is usually comparable to its in-person equivalent. Remote research can help save on travel, recruiting, and lab rental costs, but where moderator time, participant incentives, and scheduling are concerned, nothing is much different. Most of the expense of a research project is the research – having a trained researcher take the time to observe and analyse users’ behaviors carefully and then synthesise the findings into smart and meaningful recommendations. Don’t let the stakeholders of the study fall under the impression that the primary motive behind a remote study is the cost savings: the real benefit, again, is the ability to conduct time-aware research.
Organisational challenges of web recruiting
Most web recruiting tools require you to place a few lines of external code in the website’s source code (the sketch after the list below shows what such a snippet typically does). If you have a personal website or work for a small, scrappy start-up and have direct access to the code, this task shouldn’t be difficult. If, on the other hand, you’re dealing with a huge company and a complex content management system, prepare for red tape: you’ll have to cooperate with the IT operations team and the higher-up managers who have final say over what goes on the website. Be sure you have answers to the following questions:
- What are the changes we need to make to the code?
- What does the code do? Is it secure?
- What pages does the code need to go on? Will it work with our content management system?
- Which pages will the screener appear on?
- How long will the recruiting code be active?
- What will the screener look like to visitors?
- How many people will see it?
- How can the managers/IT people shut it off or disable it on their end?
- Will the look and feel of the screener match the website’s?
The answers to all these questions depend on the tool you’re using to recruit, so come prepared with them before meeting with your IT people and managers; it will prevent delays and confusion in getting the screener up.
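To make those conversations concrete, here's a minimal sketch in TypeScript of what "a few lines of external code" typically does. It is not any particular vendor's snippet; the `screenerUrl` option, the display-rate throttle, and the once-per-visitor check are all assumptions, but most intercept tools work roughly along these lines.

```typescript
// Hypothetical recruiting-screener intercept. The option names and URL
// are illustrative placeholders, not a real vendor's API.
interface ScreenerOptions {
  screenerUrl: string; // hosted screener form (placeholder URL)
  displayRate: number; // fraction of visitors who see the invite, 0 to 1
}

function showScreenerInvite(opts: ScreenerOptions): void {
  // Throttle: only a sample of visitors are intercepted, which is how
  // you answer IT's "How many people will see it?" question.
  if (Math.random() > opts.displayRate) return;

  // Don't re-invite visitors who have already seen the screener.
  if (localStorage.getItem("screenerShown")) return;
  localStorage.setItem("screenerShown", "1");

  // Load the hosted screener in an overlay iframe, so the only change
  // to the page itself is this one snippet.
  const frame = document.createElement("iframe");
  frame.src = opts.screenerUrl;
  frame.style.cssText =
    "position:fixed;bottom:0;right:0;width:360px;height:480px;border:0;z-index:9999";
  document.body.appendChild(frame);
}

showScreenerInvite({
  screenerUrl: "https://example.com/screener", // placeholder
  displayRate: 0.25, // show the invite to roughly a quarter of visitors
});
```

The display rate and the once-per-visitor check are the two knobs IT and managers usually care about most, since together they determine how many visitors will ever see the invite.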
Getting the right recruits
Taking matters into your own hands with live recruiting on the web is often cheaper, faster, and more dependable for remote research, but it means that you’ll have to bear more responsibility for recruiting your participants properly. For any number of reasons, getting enough recruits to conduct steady back-to-back sessions may not be easy for you. The table below lists common problems and remedies; a sketch of the faker-filtering idea follows it.
| Problem | Solution |
| --- | --- |
| Your website's traffic volume isn't high enough to bring in six qualified recruits an hour | Increase the screener display rate if it's below 100%. Place the screener on multiple pages or on a higher-level page in the IA. Schedule qualified recruits in advance to supplement the users you're able to intercept live. Increase the incentive, but not too much (or you'll attract more fakers). Lengthen the duration of the study (with healthy traffic, it's possible to do about six users in a workday). |
| Your recruiting criteria are too strict | If you're filtering your results, disable the filter to see whether any of the filtered recruits are acceptable participants. Ask stakeholders whether any recruiting criteria are negotiable, and relax the lowest-priority ones. Increase the incentive. |
| The wording or length of your recruiting screener turns people off | Revise the wording so it feels less like a deal or an offer. Omit needless words and questions. Be specific about the incentive. |
| Fakers are filling out your recruiting form | Review the "Why did you come to this site?" responses to determine whether the fakers were referred by a deals/bargains site. Add sneaky questions to the screener to trick fakers into tipping their hand. Add open-ended questions that can be answered plausibly only by your legitimate recruiting audience. |
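The "sneaky questions" in that last row can be checked mechanically once responses come in. Below is a hypothetical TypeScript sketch of that kind of filter; the field names, the deal-site list, and the decoy question (asking about a feature that doesn't exist, so the only honest answer is "none") are illustrative assumptions, not features of any particular recruiting tool.

```typescript
// Hypothetical screener response; the field names are illustrative.
interface ScreenerResponse {
  referrer: string;    // answer to "Why did you come to this site?"
  decoyAnswer: string; // asks about a nonexistent feature; honest answer is "none"
  openEnded: string;   // free text only legitimate users can answer plausibly
}

// Flag likely fakers: referred by a deals site, tripped up by the decoy
// question, or gave an implausibly thin open-ended answer.
function isLikelyFaker(r: ScreenerResponse): boolean {
  const dealSites = ["slickdeals", "dealnews", "fatwallet"]; // example list
  const fromDealSite = dealSites.some((site) =>
    r.referrer.toLowerCase().includes(site)
  );
  const failedDecoy = r.decoyAnswer.trim().toLowerCase() !== "none";
  const thinAnswer = r.openEnded.trim().split(/\s+/).length < 5;
  return fromDealSite || failedDecoy || thinAnswer;
}

// Example run with made-up responses; flag for hand review, don't auto-reject.
const responses: ScreenerResponse[] = [
  { referrer: "saw it on slickdeals", decoyAnswer: "none", openEnded: "free $" },
  { referrer: "comparing plans for my family", decoyAnswer: "none",
    openEnded: "I was checking whether my prescription is covered" },
];
const flagged = responses.filter(isLikelyFaker);
console.log(`${flagged.length} of ${responses.length} look like fakers`);
```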
Natural user behaviour
Moderated remote research is great for watching users perform natural, self-directed tasks on their own initiative, but that kind of behavior isn’t a given. Some users arrive with preconceived notions about what’s expected of them during a study: either they’ll tell you what you want to hear, or they’ll be too critical. Some will ask, “So, what do you want me to do?” At every turn, you should encourage users to do what they would naturally do, adding only that you may have to move things along to keep to the time limit of the session. (This is a polite way of warning them that you might cut them off if they start meandering.)
When users get absorbed in their tasks, they may stop thinking aloud. That’s not necessarily a bad thing, depending on how clear their motivations are. Usually, you can keep users talkative with a few encouraging reminders to “keep saying whatever’s going through your head about what you’re doing.” Naturally quiet or shy users might need more explicit prompts, with extra acknowledgment of how awkward it is to think aloud: “I know it’s kind of odd to talk constantly while you’re browsing, but everything you have to say is really useful to us, so don’t hold back.”
Then again, sometimes it’s not the users who have problems with natural behavior, but the stakeholders. For an outside observer who’s accustomed to heavily scripted and controlled lab testing approaches (“Now do Task A... Now do Task B...”), it can be jarring to watch participants use the interface the way they normally would. Observing natural behavior often means letting users go off on digressions, sitting through long silences while they try to figure something out, or watching them perform tasks that don’t appear to relate to the scripted ones.
You need to set your stakeholders’ expectations. What may seem aimless and chaotic is actually rich, properly contextualised interaction that they should pay close attention to. Put it this way: when you go to a Web site, do you close down all your other applications and browser tabs, turn off your mobile, stick to one focused task, and tell the kids and dog to be quiet? And even if you do, is anyone ordering you to do those things? You need to assure stakeholders that regardless of whatever unanticipated tasks the users perform, the moderator will see to it that the users also perform the core, necessary tasks.
But there are some cases in which users really are too distracted to be paying any attention to what they claim to be doing. If they’re simply veering off track, you may need to either reschedule the session for a less hectic time or dismiss them. That decision is at the moderator’s discretion, but it’s usually pretty obvious: whether users listen and respond to what the moderator says is often a good indicator.
Multitasking
It’s tough to appreciate, without doing a few sessions, how much stuff you have to keep your eye on while moderating a remote session: your conversation with the user, the user’s onscreen behavior, observer questions and comments via IM, your notes, the time remaining in the session, your place in the facilitator guide, and occasionally the status of the recording tool. You also have to exude an aura of serenity; you can’t even sound as though you’re trying.
The main thing is practice, practice, practice. Find willing volunteers to participate in dry runs. Watch and learn from recorded past sessions – like our sessions from the Wikipedia Usability Initiative, for example.
Security and confidentiality
Finally, there are the challenges of testing interfaces that need to be presented to users securely. These interfaces can’t be installed on users’ computers or placed live on the site, usually because they’re prototypes that aren’t ready for public exposure. Password-protected access to the site is the preferable option; in cases in which no files can be moved to users’ computers, use the reverse screen sharing techniques described in Chapter 9, making sure that the internet connection is fast enough to support a natural interaction.
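Password protection itself can be lightweight. As a purely illustrative sketch (not the book's prescribed setup), here's a tiny HTTP basic-auth gate in TypeScript using Node's built-in http module; the credentials and port are placeholders, and a real deployment would sit behind HTTPS so the password isn't sent in the clear.

```typescript
// Minimal password gate for a prototype, using HTTP basic auth.
// Credentials and port are placeholders; serve over HTTPS in practice.
import * as http from "node:http";

const USER = "participant"; // placeholder credential
const PASS = "session-42";  // placeholder credential

const expected =
  "Basic " + Buffer.from(`${USER}:${PASS}`).toString("base64");

const server = http.createServer((req, res) => {
  if (req.headers.authorization !== expected) {
    // Ask the browser to prompt for a username and password.
    res.writeHead(401, { "WWW-Authenticate": 'Basic realm="prototype"' });
    res.end("Authentication required");
    return;
  }
  // A real setup would serve the prototype's static files here.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<h1>Prototype home</h1>");
});

server.listen(8080, () => console.log("Prototype gated on port 8080"));
```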
Persistent negativity
Sometimes, for no particular reason, you’ll have stakeholders or team members who think remote research is a horrible and stupid idea. This opinion doesn’t make them bad people. Even after a great study, there will sometimes be criticisms of some of the methods and details. The reason is largely that most people aren’t familiar with remote research yet and don’t know what a successful session looks like. They’ll get freaked out about the moderator not assigning specific tasks, about having to wait 20 minutes to find a qualified user to live recruit, about the lack of active listening, or about any of the other things that are supposed to happen. And then there are die-hard skeptics, who won’t like what they see no matter what.
The best remedy for dealing with these situations is to deliver amazingly successful findings that exceed the usual expectations of incremental usability fixes. Of course, doing that is not easy, but in spite of anyone’s doubts about the process, if you think hard about your users’ behavior in the context of their real lives and then come up with insights that double the conversion rate or dramatically increase the ease of use of your interface, the naysayers will be turned.