A few years ago, Google CEO Sundar Pichai said he believed artificial intelligence (AI) would be “more profound than … fire.” And that was before generative AI entered the mainstream.
Today, new generative AI tools like ChatGPT, Bard, and Midjourney are poised to transform virtually any job that involves language-based tasks – including user experience (UX) research. And while the jury’s still out on Pichai’s prediction, one thing is already clear: just like fire, generative AI has both positive and destructive potential.
What does this mean for UX researchers? That’s one question we asked as part of our global AI study.
In March 2023, we interviewed 50 UX researchers (across 24 countries and 17 languages) about the risks and rewards they expect from generative AI. Here, we’ll explain some of our findings – and how our study aims to concretely measure AI’s impact on UX research.
Generative AI can speed up repetitive ResearchOps tasks
To understand how generative AI can help UX researchers, it’s important to break down the kinds of tasks we perform each day. Consider how much time you spend:
- Planning and organizing research studies.
- Writing participant screeners and discussion guides.
- Learning about the devices, apps, and services you’re testing.
- Conducting global fieldwork.
- Analyzing study results (e.g., via coding, translation, synthesis, summarization, and pulling quotes).
- Writing up findings and recommendations.
- Presenting your research to stakeholders.
Many of these tasks involve serious, higher-order thinking – say, uncovering users’ assumptions during a research session. But many also fall under ResearchOps: the kind of repetitive work that keeps the gears turning on every study, project, and engagement.
This is where generative AI likely holds the most potential.
According to the 50 UX researchers we interviewed, current generative AI tools could be useful when it comes to:
- Content refining and enhancement. AI can help hone content you’ve already created (editing, proofreading, organizing concepts). More generally, it can nudge your creativity forward.
- Data cleaning and analysis. AI can help you clean and code large volumes of data (see the sketch after this list). It can also help you analyze, model, and even visualize data.
- Transcription and translation. AI can convert audio recordings into text and translate across many languages with increasing precision. (We’re currently finishing up blog posts that dive deep into the potential and challenges of speech tech. Stay tuned!)
- Process optimization. AI can streamline tedious processes, such as serving as a launching pad for an event (asking it for a play-by-play of how to set up the event, assign roles, and draft the agenda). It can also take on repetitive project tasks, like automating data transfers from spreadsheets to presentations or writing code and formulas.
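To make the data cleaning and analysis point concrete, here’s a minimal sketch of first-pass coding for open-ended responses. It assumes the official OpenAI Python client, an API key in your environment, and an illustrative model name and codebook; any comparable LLM API would work similarly.

```python
# Minimal sketch: ask an LLM to assign a first-pass code to open-ended responses.
# Assumes the official OpenAI Python client and an API key in OPENAI_API_KEY;
# the model name and codebook below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

CODEBOOK = ["usability issue", "feature request", "pricing concern", "praise", "other"]

def code_response(text: str) -> str:
    """Ask the model to assign exactly one code from the codebook."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are helping a UX researcher code qualitative data. "
                    f"Assign exactly one of these codes: {', '.join(CODEBOOK)}. "
                    "Reply with the code only."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return completion.choices[0].message.content.strip()

responses = [
    "I couldn't find the export button anywhere.",
    "Honestly, the new dashboard looks great.",
]
for response in responses:
    print(code_response(response), "|", response)
```

Given the accuracy concerns we cover below, treat the model’s codes as suggestions for a human researcher to review, not as finished analysis.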
Generative AI can execute an array of tasks in seconds. That’s a huge time-saver at scale – and it’s the biggest advantage the UX researchers who participated in our study expected from AI-supported UX research.
But as with any new technology, there are substantial risks. We’ll dive into those next.
Generative AI isn’t trustworthy (yet)
Generative AI has a lot of potential. But right now it hallucinates, and because it hallucinates, you shouldn’t take its outputs on trust (yet).
Each AI-generated output is essentially probabilistic: it’s a composite of the words (or numbers or pixels) most likely to match a query’s intent based on the model’s training data. And in practice, that impacts generative AI’s ability to:
- Ensure accuracy. Current generative AI tools are prone to fabricating (or “hallucinating”) anything from sources and statistics to historical events. What’s more, this information often sounds deceptively convincing, putting the onus on researchers to separate fact from fiction.
- Mitigate bias. Like its predecessors, generative AI is trained on data that’s often biased against marginalized groups. This can seriously affect the quality of each output. Consider, for instance, ChatGPT responses that perpetuate racial and gender stereotypes – a massive red flag for responsible UX researchers.
- Execute higher-order research tasks. Right now, generative AI’s probabilistic logic limits its ability to thoroughly analyze and synthesize inputs. For example, it can quickly summarize an interview transcript, but it might omit a phrase from a participant that could prove compelling to a skilled human researcher.
Our global study found that UX researchers are already concerned about these problems. They’re also worried about bigger-picture issues, like AI’s ability to protect user data or the potential for job displacement. These concerns feed into researchers’ overall perception of AI: most of our study participants have mixed feelings.
What does all of this mean for UX researchers? You can probably use a generative AI tool to help you brainstorm a desk research project, for instance. Treat its output as a first draft rather than taking it at face value. Then, if need be, push the AI to explain its output, like why it phrased Question X a certain way (as in the sketch below).
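As one hedged illustration of that workflow (again assuming the OpenAI Python client and an illustrative model name), the back-and-forth might look like this:

```python
# Minimal sketch: draft screener questions with an LLM, then push it to explain
# its phrasing instead of accepting the first draft at face value.
# Assumes the official OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model choice

history = [
    {"role": "user", "content": "Draft five screener questions for a study on grocery-delivery apps."},
]
draft = client.chat.completions.create(model=model, messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# Push the AI to explain its output rather than taking the draft as-is.
history.append({
    "role": "user",
    "content": "Why did you phrase question 2 that way? What assumptions does it make about participants?",
})
explanation = client.chat.completions.create(model=model, messages=history)
print(explanation.choices[0].message.content)
```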
Those benefits aside, though, it’s likely too early to trust AI to effectively execute higher-order research tasks or delicately handle proprietary client data. For now, at least, that’s where human involvement still matters most.
Coming soon: A research-backed measure of the risks and rewards
So far, we’ve shared some of our data-backed predictions about generative AI’s impact on UX research. But how can we concretely measure the extent to which AI supports our work?
We’re thinking hard about that question. And, using the data from our global AI study, we’ve been experimenting with AI-assisted tools to…
- Transcribe audio recordings.
- Translate multilingual sessions into English (a rough sketch of this step follows the list).
- Take notes.
- Summarize findings from individual research sessions.
- Analyze data across participants for themes and findings.
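As a rough example of how a few of those steps might chain together, here’s a minimal sketch assuming OpenAI’s Whisper and chat endpoints and a hypothetical file name; it isn’t necessarily the toolchain we used in the study.

```python
# Minimal sketch: translate a non-English session recording into English text,
# then summarize it. Assumes OpenAI's Whisper and chat endpoints; the file name
# and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

with open("session_01.mp3", "rb") as audio:  # hypothetical recording
    # The translations endpoint transcribes non-English audio directly into English.
    transcript = client.audio.translations.create(model="whisper-1", file=audio)

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "Summarize the key findings from this UX research session in five bullet points.",
        },
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)
```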
In some of these areas, we’re comparing AI-generated outputs to the way human researchers executed the same tasks.
We’ll be presenting this research on September 28th at the UX Masterclass in Zaragoza, Spain. Check out the whole program by visiting https://www.uxmasterclass.com/conference.
Generative AI has a ways to go – but value abounds
Generative AI isn’t perfect. But that doesn’t mean researchers can’t find value in using it today.
Think of AI as a sparring partner. Push it during every interaction to learn more about its process and refine your own, from the way you communicate to the way you interact with data.
And in the long term, expect AI to get better at executing your most repetitive research tasks – giving you more time and energy to perform higher-order analysis.
Want to know more about our thoughts on AI-powered research? Drop us a line – let’s start a conversation.