The bold future of UX: How new tech will shape the industry
Part 4 ∙ UX principles for robot design: Have we begun to baseline?
In a previous post, I discussed the challenges of designing a user experience for AI and how it needs three components to truly deliver on the promise of the technology: context, interaction, and trust. These three elements allow for a good user experience with an AI. Today, we’re taking AI to a related area: robotics. A robot is essentially an AI that has been given a corporeal form. But the addition of a physical form, whether or not it’s vaguely humanoid, creates further challenges. How do users properly interact with a fully autonomous mechanical being? Since this fully autonomous mechanical being can, by definition, act on its own, the flipside of this question is just as important: how does a robot interact with the user?
Before we dive into these questions, let’s all get on the same page about what a robot is. A ‘robot’ must be able to perform tasks automatically based on stimuli from either the surrounding environment or another agent (e.g., a person, a pet, another robot, etc.). When people think of robots, they often think of something like Honda’s ASIMO or its more recent line of 3E robots. This definition also includes less conventional robots, such as autonomous vehicles and machines that can perform surgery.
A research team at the University of Salzburg has done extensive research on human-robot interaction by testing a human-sized robot in public in various situations. One finding that stood out to me is that people prefer robots that approach from the left or right rather than head-on.
In San Francisco, a public-facing robot that works at a café knows to double-check how much coffee is left in the coffee machines and gives each cup of coffee a little swirl before handing it to the customer.
While a robot in Austria approaching from the left and a robot in San Francisco swirling a cup of coffee might not seem related, together they point to UX principles that should be kept in mind as public-facing robots become more ubiquitous:
- A robot should be aware that it is a robot and take efforts to gain the trust of an untrusting public (evidenced by people’s preferences for robots to not approach head-on and to always remain visible to the user)
- A robot should be designed with the knowledge that people like to anthropomorphize objects (evidenced by people preferring the coffee-serving robot to do the same things a barista might do, even when the robot doesn’t strictly need to)
As with all design principles, these are likely to evolve. Once robots become more common in our lives and people grow accustomed to seeing them everywhere, different norms for how humans and robots interact may emerge.
This may already be the case in Japan, where robots have been working in public-facing roles for several years. While anthropomorphic robots are still the dominant type of bot in Japan, there is now a hotel in Tokyo staffed entirely by dinosaur robots. The future is now, and it is a weird and wild place.
What are your thoughts on all of this? Comment below and let’s get a dialogue started!
This blog post is part four of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed the Singularity and its associated challenges for UX design; Part 2, which provided an overview of focus areas for AI to be successful; and Part 3, which dug further into the concept of context in AI.