On a recent evening, I rushed home from work and realized I had forgotten to send my partner the text that would get dinner started. I picked up my phone and prompted my voice assistant: “Hey Siri, send a text that says, ‘I’m very hungry,’ with a bunch of exclamation points.”
The assistant dutifully produced and sent the text: “I’m very hungry with a bunch of !”
To rectify this, I tried again: “Just send a text that’s just a bunch of exclamation points.”
The assistant proceeded to send a text that read: “That’s just a bunch of!”
I was doubled over in laughter, but the moment clarified a critical shift in user behavior. I was speaking to a legacy voice assistant as I would speak to a large language model (LLM): conversational, context-heavy, and expecting the machine to interpret my intent rather than transcribe my literal command.
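One plausible way to picture the failure (this is a toy model, not a claim about how Siri actually works): a literal, command-based parser treats everything after “that says” as the message body and swaps spoken punctuation words for symbols, with no model of the speaker’s intent. A minimal Python sketch under that assumption:

```python
# A deliberately naive sketch of literal command transcription.
# This toy rule is an illustrative assumption, not a description
# of any real assistant's parser.

def legacy_transcribe(utterance: str) -> str:
    """Treat everything after 'that says' as the literal message body,
    converting the spoken phrase 'exclamation points' into the symbol."""
    _, _, body = utterance.partition("that says ")
    return body.replace("exclamation points", "!").strip()

spoken = "send a text that says I'm very hungry with a bunch of exclamation points"
print(legacy_transcribe(spoken))
# -> "I'm very hungry with a bunch of !"

# An intent-aware system would instead recognize "with a bunch of
# exclamation points" as a formatting instruction and send:
# "I'm very hungry!!!!!"
```

By its own rules, the literal pipeline is not malfunctioning; it simply has no concept of intent.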
This is not just a user error or a funny anecdote. It represents a fundamental retraining of your customer base. Generative AI is rapidly altering how users interact with technology, moving them away from command-based keywords and toward conversational complexity. For executives managing product roadmaps, this growing gap between user expectations and technical reality may result in product abandonment.
From keywords to conversation
For the last decade, users have been trained to speak to machines in narrow queries. If you needed a gift, you searched for “coffee mug.” Users learned to strip away context (the occasion, the recipient, the sentiment) to get the best result, discarding exactly the contextual nuance we now prioritize in AI development.
Generative AI has reversed this behavior. We are now seeing a rapid evolution toward “open queries,” where users provide high-level instructions, intent, and nuance, expecting the agent to handle the execution.
We observed the roots of this shift several years ago in our UX research on early chat agents. Even before ChatGPT’s release, users began experimenting with more verbose inputs, seeking a “sweet spot” of information exchange where they could provide enough context to get a specific result without micromanaging the machine.
Today, that behavior has accelerated. Users are no longer searching for keywords; they are describing a scenario, such as “I need a gift for my friend who loves hiking but has everything,” and expecting the technology to fill in the gaps. When they carry this new habit back to legacy interfaces, as in my text-message mishap, the friction is immediate (sometimes hilarious) and frustrating.
The context gap
The friction point in my failed text message was a lack of context retention. One of the primary struggles for LLMs, and a massive failure point for legacy tech, is maintaining performance as conversation and context evolve.
Context exists in two forms:
Conversational context: Remembering what we said three prompts ago.
Situational context: Understanding who the recipient is. Does the AI know that a text to a boss requires a different syntax than a text to a family member?
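To make the two forms concrete, here is a minimal sketch of how a chat application might carry both kinds of context on the user’s behalf. The message format and the stubbed model call are illustrative assumptions, not any specific vendor’s API:

```python
# Minimal sketch: the application, not the user, carries the context.
# The message format and call_model stub are illustrative assumptions.

conversation = []  # conversational context: everything said so far


def call_model(messages):
    """Stand-in for a real LLM call; returns a canned reply."""
    return f"(reply conditioned on {len(messages)} prior turns)"


def send_turn(user_message, recipient=None):
    # Situational context: who the message is for shapes the tone.
    if recipient:
        conversation.append({
            "role": "system",
            "content": f"The recipient is the user's {recipient}; match the tone accordingly.",
        })
    conversation.append({"role": "user", "content": user_message})
    reply = call_model(conversation)  # the model sees the full history
    conversation.append({"role": "assistant", "content": reply})
    return reply


send_turn("Draft a status update on the launch.", recipient="boss")
send_turn("Now make it friendlier.")  # "it" resolves via conversational context
```

The design point is that the application accumulates and replays the history, so the user never has to restate who they are talking to or what “it” refers to.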
Legacy interfaces force the user to carry the cognitive load of context. Generative AI, when working correctly, carries that load for the user. When a user builds a “sparring partner” relationship with an AI, sometimes spanning hundreds of exchanges over months, they are sinking a real investment into that context. They are training the tool.
If the tool fails to retain that context, or if a user switches to a platform that cannot handle that level of nuance, the perceived value drops instantly.
Trust is the oil of the AI economy
In our work with global technology platforms, we have found that trust is the “new oil” for AI integration. Trust is not built on a single marketing promise; it is built on a series of successful interactions where the machine proves it understands the user’s intent.
If a user grants an agent access to their personal data, calendar, or communications, they expect a return on that investment in the form of intuitive execution. If the agent fails, as my voice assistant did, the user does not just retry; they lose trust in the agent. And without a foundation of trust, generative AI products are dead on arrival.
Designing for the new mental model
This “exclamation point” error illustrates a larger risk for product leaders. Your users are being retrained by generative AI every day. Their tolerance for “narrow,” keyword-based interactions is shrinking.
We are moving past the novelty phase and into the behavioral expectation phase. We don’t just need to build better tools; we need to understand the human behaviors those tools are shaping. If your interface requires your user to “downshift” their intelligence to be understood, they will eventually stop speaking to it altogether.
Working on something AI-powered? We’ve helped healthcare companies, government agencies, global tech brands, and startups make smarter design decisions grounded in real human behavior. If you need support building and maintaining user trust, we’d love to help. Learn more about our AI research.
