Bold Insight launched a global AI study in March 2023 that included:
- 24 multilingual interviews conducted with UX researchers in 19 countries, spanning 22 languages.
- 27 interviews in English with global UX researchers representing 24 countries.
Our global study led us to quickly recognize that the performance of AI translation tools across multiple languages is highly imbalanced. Automated tools, although continuously advancing, may struggle to understand linguistic subtleties, idiomatic expressions, and cultural context. And despite the hype surrounding generative AI, frictions and pain points remain across common and less-common languages alike. These are all liabilities when conducting high-quality global UX research.
UX researchers should consider these issues when embarking on multilingual research. In this blog, we’ll examine three challenges facing AI translation tools.
Challenge #1: Capturing naturalness, nuance, and context
Interpreters and researchers are impressed by AI-assisted tools, specifically their speed and potential to capture content completely. But AI programs tend to “miss the mood” (that is, nuance and context) of natural speech, which can result in “borderline nonsensical” output in crucial moments.
A great example of this issue was recently published in Wired. The work of professional interpreters was compared to that of an AI speech translator named Kudo. The task? Translate a 2020 speech about COVID-19 delivered by King Felipe of Spain, the accuracy of which depended on the clear translation of emotion.
The result: the king’s words were meant to recognize the “tremendous physical and emotional burden” healthcare workers were experiencing during the pandemic. Kudo, however, translated the king’s speech as the “great emotional charge and physics on [healthcare workers’] backs.”
In addition to more accurately navigating and documenting the intricacies of human emotions, humans may still be the preferred choice for research-level precision when working with languages such as Hindi, [Bahasa] Indonesian, and Swahili, on which AI-assisted tools have traditionally performed poorly.
But as detailed by one Brazilian Portuguese speaker surveyed during our global study, there are real costs – in terms of both money and time – that come with human transcription: “[w]e have a transcription vendor that makes all of the IDI (in-depth interview) transcripts,” they said. “After fieldwork, my operations manager sends all the videos to the vendor and they return the transcripts to us after 5-7 business days. This is a time-consuming process, and it is one of the highest costs in our projects.”
The difficulty of capturing naturalness, nuance, and context is only one element that hinders the qualified translation critical to planning and completing effective global research.
Challenge #2: Capturing accents and dialects
Most of the world’s 7000+ documented languages fall on the under-resourced side, meaning that they lack the large, labeled corpora typically used to develop natural-language processing (NLP) systems. Resource richness is relative and constantly changing – especially in the face of recent research innovations – but in general, there are high-resource and low-resource languages:
- High-resource languages are those with large amounts of available data, extensive linguistic research, and a wide range of well-trained NLP models.
- Low-resource languages are those with less digital presence, fewer written resources, and (often) fewer speakers.
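In practice, teams can vet a study’s languages against a tool’s coverage before committing to automated transcription. Here’s a minimal sketch in Python; the support tiers below are entirely hypothetical and would need to come from your vendor’s actual documentation:

```python
# Hypothetical support matrix keyed by ISO 639-1 code; real coverage
# varies widely by vendor, model version, and even audio quality.
SUPPORT_TIERS = {
    "en": "high", "es": "high", "zh": "high",
    "hi": "medium", "id": "medium",
    "sw": "low", "yo": "low",
}

def vet_languages(codes):
    """Flag study languages likely to need human transcription/translation.

    Returns (code, tier) pairs for languages that are low-resource for
    the tool, or absent from its documented coverage entirely.
    """
    flagged = []
    for code in codes:
        tier = SUPPORT_TIERS.get(code, "unknown")
        if tier in ("low", "unknown"):
            flagged.append((code, tier))
    return flagged

# A study spanning English, Swahili, and an undocumented language:
print(vet_languages(["en", "sw", "xx"]))  # [('sw', 'low'), ('xx', 'unknown')]
```

Anything flagged here is a candidate for the human-vendor workflow described above, with its cost and turnaround budgeted in from the start.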
The lowest-resource languages are less intelligible to AI-enabled tools. In fact, many local and regional dialects (e.g., Shanghainese or Kansai-ben) are unintelligible to most forms of speech tech. The logic goes that because AI language tools are trained on a corpus of labeled data made available to them, languages with the least amount of available data are a no-go with automated tools. This situation creates a worrying “digital divide,” as research and tools focus disproportionately on English and other high-resource languages.
And while there are multiple underlying factors, accented speech in widely spoken languages like English can pose significant challenges for speech recognition technology too. Take, for instance, a conversation in American English between a native English speaker from the United States and a Yoruba speaker from Nigeria. If the Yoruba speaker has a strong accent, many of today’s AI tools might struggle to capture the dialogue accurately. The core issue is not a fundamental flaw in the AI’s understanding of language but rather a limitation in its exposure: many AI systems have not been trained on a diverse range of accents. Therefore, when they encounter unfamiliar phonetic patterns, these systems might not recognize them as effectively. Essentially, for optimal performance, AI tools need to be exposed to the vast variety of ways people pronounce words in real-world scenarios.
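Teams can make this accent gap measurable rather than anecdotal: score a tool’s transcript of accented speech against a trusted human reference using word error rate (WER), the standard metric for speech recognition quality. A minimal self-contained sketch (dedicated libraries exist, but the computation is just word-level edit distance):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance, normalized by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative transcripts (not from a real tool):
human = "we have a transcription vendor for all of the interviews"
ai    = "we have a transcription vendor for all of the inner views"
print(round(word_error_rate(human, ai), 3))
```

Running the same comparison on recordings from speakers with different accents gives a concrete, per-accent picture of where a given tool can be trusted and where human transcription is still warranted.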
In the words of Mady Mantha, a product and machine learning (ML) engineer specializing in conversational AI and NLP, “AI and speech technology can perpetuate and even amplify biases present in the data used to train them. And then there’s the problem of adapting to different accents and dialects, as AI and speech recognition systems are often trained on a specific accent or dialect.”
To continue to improve AI translation tools, says Neil Sahota, CEO of ACSI Labs and AI adviser to the United Nations, “[w]e need to move toward precision speech recognition, much like we’re leveraging AI to move into precision medicine. Each person has a unique accent and way of pronouncing words. This is beyond regional dialects and local slang. Until machines understand how each person talks, we’ve still got work to do.”
Challenge #3: Navigating data privacy
The last – but certainly not the least – of the translation difficulties facing UX researchers engaged in global projects is how to ensure privacy in data-training sets.
In short: AI tools are captivating, but nothing is free. And as with other commercial products, if you’re not paying money to use a tool, you’re most likely paying with your data.
Before adopting any translation tool, ask yourself: where’s your data going? Do you know? Are you comfortable with that answer?
And more to the point: what requirements and needs are your clients or partners working within? How does that impact whether this AI tool is a viable option?
Exhibiting care for your clients and partners – and the UX research craft – is the key to guiding effective research.
Demonstrate care by ensuring quality and security
In the end, in addition to delivering quality insights, researchers should ensure that quality and security aren’t lost in the rush to embrace new tech that promises to turbocharge productivity and efficiency.
The onus is on researchers to marry the efficiency of technology with the precision and empathy inherent to human understanding. By considering the unique requirements of each project and being discerning about the tools they employ, we can prioritize both quality and security.
The future of AI in UX
We’ll be sharing our research from our global AI study, including more of what we learned about AI translation tools from UX research experts across the globe, at the UXMasterclass on September 28 in Zaragoza, Spain. Visit the conference website to register and learn more.