The global study
In March 2023, we quickly mobilized our global network to explore the ethical and practical considerations surrounding human-machine collaboration across cultures. We launched a 24-country research program that captured the stories of 50 UX researchers – from junior practitioners to UX agency founders and everyone in between – on how AI tools might change the way they live and work. We asked participants about their relationship with the internet and technology, their feelings toward AI and experiences with emerging technology products, and their thoughts on the future of AI in their careers and everyday lives.
With this initial dataset and subsequent follow-up interviews, we’re exploring the exciting range of potential collaborations between researchers and AI, and the benefits and drawbacks of such partnerships. Casting aside the limited “researcher vs. AI” mentality, we’re experimenting with different strategies for achieving outcomes with generative AI that are transparent, trustworthy, and reliable.
In parallel, we’re refining our techniques as researchers to capture and analyze data without bias. We’re taking in the new possibilities in real time alongside their ramifications for clients and society. We’re also working across our diverse team to surface critical questions and experiment with AI tools for basic tasks like transcription and notetaking.
At the highest level, we’re reinforcing our understanding of the unique value of AI and the challenges of integrating it into user experiences effectively. AI needs to be more than technology: the prescription for success is to deliver a great user experience, after all, not just to embed AI into a product.
What did we find?
One practical insight we’ve already uncovered from this research:
Working with the “new” AI makes us better researchers through dynamic interaction (not machine output).
The output of well-known large language models (LLMs) can’t be trusted, and we can’t entrust them with client data. “Garbage in, garbage out” applies here, and so does “gold in, garbage out.” Much of the value for responsible researchers lies in the interaction. Think of it like a fencing match. Instead of seeking that (mythical) magic command or perfect output, embrace AI as your sparring partner. Touché.
Experiment with different kinds of prompts and observe the AI’s responses. Engage the model in dialogue: ask it challenging questions and demand step-by-step explanations to find its sweet spots and edges. Better yet, turn the tables. Encourage the AI to question you, give you feedback, and trigger your curiosity and innovativeness. Your thinking matters most.
Interactions with AI “sparring partners” can push us to refine and clarify research objectives. They force us to communicate clearly and effectively, both with prompts and with our teams and clients. They can spark our creativity and strategic thinking. Ultimately, they can lead us to validate, iterate, and engage more with the data – all good things!
In the coming weeks and months, we’ll be sharing more insights from this research program. Exploring AI’s current lightbulb moment from multiple angles and vantage points, we aim to contribute thoughtful, collaborative (global), evidence-driven insights to help our community and clients develop the UX of AI effectively and responsibly. Stay tuned!
Our team has been focused on the symbiotic relationship between user experience (UX) and artificial intelligence (AI) for years. We predicted that context, interaction, and trust would be prerequisites for AI adoption, and two of our founding partners published a book on AI and UX based on early explorations of AI in technology, electronics, and healthcare. Since then, our AI work has been ongoing, guided by the needs and ambitions of clients representing a range of industries.
Have a research question that involves AI? Let’s talk!