Bold Insight

Blog

Word processing vs. world processing: The cultural work AI can’t do

December 22, 2025

Two years ago, Bold Insight launched a global study on generative AI and its impact on user experience research. In AI development, two years is a long time. Reflecting on those 2023 findings with most of 2025 in the rearview mirror, after a year packed with AI-related client work, internal initiatives, and public speaking engagements, we see the same pattern: the technical gains have been real, but the cultural gaps we documented then still shape how teams work today.

The core tension is clear: AI systems can produce fluent language, but they do not understand or convey the cultural meaning behind it. That fluency often gives teams a false sense of confidence, because the systems operate through statistical prediction rather than lived experience. The result is a gap between the understanding users assume and the pattern-matching the model actually performs. Even if tools reach perfect multilingual fluency, fluency alone will not produce cultural competency.

There is a big difference between processing words and interpreting the world those words come from. Word processing predicts the next token. World processing requires awareness of context, history, values, and relationships. Without that depth, culturally rich concepts are reduced to patterns that may look correct on the surface but do not reflect the realities of the people who use these systems. This is where high-quality UX research is essential. We are trained to surface the gap between what a system produces and what users actually experience. It’s our job to catch the moments when fluency masks misunderstanding, when efficiency undermines trust, and when patterns fail to account for lived reality.

We saw this clearly in our collaborative research with Google and Mantaray Africa, presented at EPIC as Decolonizing LLMs: Ethnographic Framework for AI in African Contexts. As part of that project, we worked with researchers in Ethiopia to test how a popular LLM handled folktales. When prompted for a local story, the system produced polished language and a coherent plot, but the central hero, Abba Otho, was entirely invented. The model filled gaps in its training data with familiar Western story patterns. The surface looked correct, but the cultural grounding was totally missing.

This same tension shaped conversations throughout 2025, including the salon on Interactive AI and Cultural Complexity at EPIC 2025 that I co-hosted with Anna Metsäranta of Solita. We gathered practitioners from consumer tech, healthcare, financial services, the public sector, and more, across multiple regions. Despite that diversity, their experiences pointed in the same direction. Healthcare practitioners described transcription tools that missed dialect nuance, creating safety risks. Those working in financial services noted literacy tools that encoded one cultural view of responsibility while ignoring others. Public-sector leaders shared examples where technically correct translations still damaged trust. The stories varied by role and region, but they highlighted the same issue: AI often handles the words, but it struggles with the world behind them. It is the same problem we have been studying for years at Bold Insight, and it continues to appear wherever these tools are deployed.

Across our work from 2023 to 2025, the pattern has remained consistent. AI systems can produce fluent text, but they cannot interpret cultural meaning with the depth required for high-stakes decisions and interactions. This gap has not closed as quickly as AI influencers and tool builders might want us to believe. It persists across languages, industries, and regions because cultural meaning is built on history, values, and relationships that are only partially reflected in available data and may never be fully representable in ways machines can process.

For UX researchers, the work ahead is clear, and it involves more than checking whether outputs look correct. We need to interpret meaning where the system only processes patterns. The goal is to ground innovation in how people actually communicate, interpret, and trust information. Cultural understanding is foundational to whether these systems serve the people who use them.