Mind the AI gap: The UX researcher’s role in navigating the AI divide

April 24, 2025

Fellow UX researchers, ever feel like you’re caught in the whirlwind of AI hype, but the reality on the ground is… a little different? Let’s talk about the gap—not the clothing store, but the space between the dazzling promises of AI and what it actually delivers, especially in our world of user experience. 

This “mind the gap” warning isn’t just for train platforms. It’s flashing everywhere because big transitions come with risks, and guess what? We’re right in the middle of a massive AI transition. In UX and AI, these gaps are popping up all over the place: 

  • Between what marketing says AI can do and the technical limitations.
  • Between the excitement at the executive level and the nitty-gritty of making it work.
  • Between what users truly need and what current AI systems can provide.
  • Between global AI solutions and the nuances of local cultures.
  • Between the data AI learns from and the diverse real world we operate in.

It can feel like we’re standing on the edge of the known, peering into the AI unknown. 

When AI comes up, you often hear two loud voices. On one side, you’ve got the Boosters, shouting about productivity, cost savings, and how AI will “10x everything!” On the other, the Busters are raising the alarm about ethics, risks, and the impact on society. 

But most UX pros are hanging out in the middle. And this isn’t about being indecisive—it’s a strategic sweet spot. By being in the middle, we’re not ignoring the potential or the pitfalls. We’re asking the tougher, more insightful questions and building the crucial bridges. 

Think of it like Google Maps. You need a blue dot—a sense of where you are. Are you excited about AI? Maybe a bit hesitant? Skeptical? Just plain curious? Knowing where you stand helps you figure out where AI fits into your work. What’s real progress, and what’s just noise? And what’s the real cost if we get it wrong? 

Let me give you a concrete example of these gaps in action: language. At Bold Insight, we’ve tested AI translation and transcription tools in 28 languages for UX research. And while the marketing promises “instant translation in any language,” the reality has been… well, messy. 

We’ve seen AI completely botch job titles, flip the entire structure of sentences, and even hilariously (but worryingly) confuse “beside” with “female rabbit”! Imagine building insights on a foundation of errors like that. When these mistakes creep in early, the whole chain of understanding breaks down. 

Take this Arabic example: the AI tool flipped the text direction and jumbled the word order (thanks to an English word thrown in), turning a powerful comment about gendered language in public spaces into complete gibberish. Or consider Afaan Oromo, spoken by millions but considered a “low-resource” language due to the scarcity of accessible, digitized language data. Even the latest AI models gave wildly different and often incorrect translations for a simple word like “bukke”. Luckily, the human translators in Ethiopia we partnered with explained that “bukke” means “beside” in some dialects, and in others is a taboo word for a sensual organ. A big mistake to get wrong. 

The problem isn’t just that these errors happen. It’s that they compound. Even with decent accuracy at each step of the research process (like capture, transcription, translation, analysis), by the time you get to your precious insights, the accuracy can plummet. It’s like that childhood game of telephone where the original message gets more distorted with every step.
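To see why, run the numbers. If each of four steps keeps even 90% fidelity, you’re down to roughly two-thirds of the original signal by the end. Here’s a quick back-of-the-envelope sketch in Python (the 90% figure is an illustrative assumption, not a result from our testing):

```python
# Back-of-the-envelope: per-step accuracy compounds multiplicatively
# across a research pipeline. The 0.90 figure is illustrative, not a
# measured value.

per_step_accuracy = 0.90
overall = 1.0

for step in ["capture", "transcription", "translation", "analysis"]:
    overall *= per_step_accuracy
    print(f"after {step}: {overall:.0%} of the original fidelity")

# after capture: 90% of the original fidelity
# after transcription: 81% of the original fidelity
# after translation: 73% of the original fidelity
# after analysis: 66% of the original fidelity
```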

So, what do we, as grounded and critical UX researchers, do about all this? 

We experiment deliberately. 

We build our own internal maps of AI tools, tracking what works for us and what doesn’t. 

We foster a culture of curiosity within our teams, not just reactive acceptance or rejection. 

And we create frameworks – not just gut feelings – to guide *how*, *when*, and *why* we even consider using AI in our work. 

Here are a few ways you can start bridging this gap right now: 

Map the signals: Know where you, your team, and your stakeholders land on the AI spectrum. Start critically consuming information from diverse voices and filter out the hype. Build your own “signal map” of reliable sources. For a “pro move,” consider creating an AI landscape atlas for your organization. 

One example from our own work: we partnered with a client using a “Wizard of Oz” technique, in which off-screen researchers role-played both AI and human customer service agents for hundreds of participants. By prototyping the experience before building it, we helped the client identify how, where, and when customers actually wanted this potential solution, thereby avoiding costly missteps.

Create a decision framework: Don’t just jump on the AI bandwagon. Evaluate when AI makes sense based on fit, complexity, and risk. Start by identifying high-value, low-risk opportunities tied to actual business needs. 
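If it helps to make “fit, complexity, and risk” concrete, here’s a minimal sketch of a scoring rubric. The 1–5 scales, the scoring rule, and the thresholds are illustrative assumptions, not a standard; tune them to your own context:

```python
# A minimal sketch of an AI use-case rubric. The scales, scoring
# rule, and thresholds are illustrative assumptions; adapt them to
# your own organization.

def score_use_case(fit: int, complexity: int, risk: int) -> str:
    """Rate each dimension from 1 (low) to 5 (high)."""
    score = fit - complexity - risk  # reward fit, penalize the rest
    if score >= 2:
        return "pilot it"
    if score >= 0:
        return "experiment with guardrails"
    return "hold off for now"

# A high-value, low-risk opportunity: strong fit, modest complexity and risk.
print(score_use_case(fit=5, complexity=2, risk=1))  # -> pilot it
```

The exact numbers matter less than forcing the conversation: a use case that scores poorly on risk needs a better reason to proceed than “everyone else is doing it.”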

Run experiments (a lot!): The gap between AI promises and reality is your playground for learning. Test those shiny claims yourself! Document what works and what fails spectacularly, and share your findings openly. Encourage your teams to explore with clear guardrails and create spaces for sharing what they have learned. 
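One way to keep those findings shareable is a simple, structured log instead of scattered anecdotes. A minimal sketch (the field names here are assumptions; the point is a searchable record of claim versus reality):

```python
# A minimal sketch of a shared experiment log. Field names are
# assumptions; capture the vendor claim, what you tested, and what
# actually happened, so findings stay searchable.

from dataclasses import dataclass, field

@dataclass
class AIExperiment:
    tool: str
    claim: str   # what the marketing promised
    task: str    # what we actually tested
    worked: bool
    notes: str = ""
    languages: list[str] = field(default_factory=list)

log = [
    AIExperiment(
        tool="<translation tool>",  # placeholder, not a real product
        claim="instant translation in any language",
        task="translate Afaan Oromo interview transcripts",
        worked=False,
        notes="inconsistent renderings of common words like 'bukke'",
        languages=["Afaan Oromo"],
    ),
]
```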

AI isn’t disappearing. We don’t need to become overnight AI experts. But we need to stay grounded, critical, and creative. By taking the time to understand these gaps and building bridges with intention, we don’t just adapt to the AI revolution – we lead it. We become the strategic thinkers our organizations desperately need right now. 

So, let’s embrace the discomfort of the unknown, keep asking those tough questions, and chart our own course in this evolving AI landscape. Happy researching!