Research shows big payoff when design includes voice of the user

BOLD INSIGHT

A recent McKinsey report outlines the ROI of UX research, offering a business case for investing in designs centered on the user.

Arriving at a fantastic design rarely happens by magic or luck. More typically, it occurs through a great deal of work, including revisions, iterations, and seeking out the voice of the user. We know this. It makes sense. Yet stakeholders still cut design cycles short because they can’t be sure that one more iteration, or one more user study, will have a positive return on investment. As user experience (UX) researchers, we know the positive impact research has on product design, but it can be challenging to convince a boardroom to invest. But now we have evidence: McKinsey recently released a report showing the business benefits of design.

The report found a robust correlation between a company’s strength in design and its financial performance. These results held true across the industries studied, covering physical products, digital products, and services. And as you read further into the report, it becomes clear that UX research was at the heart of what put companies in the top quartile.

The report goes on…

“The importance of user-centricity, demands a broad-based view of where design can make a difference … In practice, this often means mapping a customer journey (pain points and potential sources of delight) rather than starting with “copy and paste” technical specs from the last product. This design approach requires solid customer insights gathered firsthand by observing and—more importantly—understanding the underlying needs of potential users in their own environments. … Yet only around 50 percent of the companies we surveyed conducted user research before generating their first design ideas or specifications…

And on…

… many companies have been slow to catch up. Over 40 percent of the companies surveyed still aren’t talking to their end users during development. … With no clear way to link design to business health, senior leaders are often reluctant to divert scarce resources to design functions. That is problematic because many of the key drivers of the strong and consistent design environment identified in our research call for company-level decisions and investments.

In our experience, designers are amazed by the many ways consumers interpret their designs. Knowledge of consumer interpretations empowers designers to improve their designs in subsequent iterations. Thus, design benefits from the voice and interpretation of the people who will adopt it.

Plain and simple, companies that invest in user experience research perform better financially than those that don’t. In the past, the evidence for this was largely anecdotal; McKinsey’s report should now serve as the foundation for any UX manager trying to build a business case for UX.

UX and sustainability: What is our role as researchers?

BOLD INSIGHT

As UX researchers and designers, we should consider the entire product lifecycle, including disposal. Where possible, we should also think ahead to future product iterations and how product replacement will be handled.

Earlier this year, I was part of a formative study for a pharmaceutical manufacturer investigating patients’ opinions about a new liquid medication delivery device. The medication was a daily dose that came in single-use plastic bottles, meaning patients would have to dispose of approximately 30 bottles per month. One of the unexpected insights from this study was that participants were highly concerned about the waste involved and almost unanimously voiced the desire for the bottle to be reusable, recyclable, or, at the very least, smaller.

This is an interesting first-hand example of sustainability becoming a factor in user experience (UX). Users often want the latest technology, not only because the capabilities of new devices are helpful in critical ways, but also because our society sees newness as fashionable. And design is often structured around that concept. For example, new software often can’t be run on old devices, requiring the purchase of new hardware – and resulting in a continuous cycle of disposal. This frequent disposal of goods has a significant impact on the environment, and technology may have further environmental effects in the form of energy-demanding websites and electronic waste. So how do we as UX professionals consider the needs and desires of users while also factoring in the broader human value of environmental sustainability? Should that be part of our responsibility?

Design with sustainability in mind from day one

Some experts, including Don Norman, suggest that designers should consider the full lifecycle of a product as they build it, with the end goal of creating well-cared-for systems. This kind of design not only has a positive impact on the environment; it can also improve the user experience (as in the example above) and thereby contribute to commercial success.

There are actionable ways to include sustainability in design. For one, product innovators and designers can think ahead to the disposal of the product. How will users get rid of it when they no longer need or want it? Does this product displace an older model, and if so, what impact will disposing of the older version have on the consumer? On the environment? Considering these questions early in the innovation cycle helps avoid greater problems down the line.

Adding options to reduce waste

On the other end, designers can also look at the ramifications of the behavior a product or system induces. For example, food delivery apps may promote waste because of the large amount of plastic and packaging inherent in the process. Building in options that give users opportunities to reduce waste or save energy is a simple way for designers to promote sustainability and to help users make more informed decisions about their consumption, which, at least for some users, elevates their overall experience with a product.

As UX researchers, we are responsible for investigating what is important to users in order to improve their overall experience with products and services. But humans are complex, and their desires are not always straightforward. Sometimes the insights we uncover remind us that technology does not exist in a bubble; it exists in societies with values and norms (like concern for the environment). What are some other ways UX professionals might address the intersection of sustainability and design?

UX principles for robot design: Have we begun to baseline?

BOLD INSIGHT

As the robotics industry continues to find its way into our lives, we can begin to identify UX design principles to apply to this tech to increase the acceptance of robots and improve the human-robot interaction experience.

The bold future of UX: How new tech will shape the industry

Part 4  UX principles for robot design: Have we begun to baseline?

In a previous post, I discussed the challenges of designing a user experience for AI and how it needs three components to truly deliver on the promise of the technology: context, interaction, and trust. These three elements allow for a good user experience with an AI. Today, we’re taking AI to a related area: robotics. A robot is essentially an AI that has been given a corporeal form. But the addition of a physical form, whether or not it’s vaguely humanoid, creates further challenges. How do users properly interact with a fully autonomous mechanical being? And since this fully autonomous mechanical being can, by definition, act on its own, the flip side of that question is just as important: how does a robot interact with the user?

Before we dive into these questions, let’s get on the same page about what a robot is. A ‘robot’ must be able to perform tasks automatically based on stimuli from either the surrounding environment or another agent (e.g., a person, a pet, or another robot). When people think of robots, they often think of something like Honda’s ASIMO or its more recent line of 3E robots. This definition also includes less conventional robots, such as autonomous vehicles and machines that can perform surgery.

A research team at the University of Salzburg has studied human-robot interaction extensively by testing a human-sized robot in public in various situations. One finding that stood out to me: people prefer robots that approach from the left or right rather than head-on.

In San Francisco, a public-facing robot that works at a café knows to double-check how much coffee is left in the coffee machines and gives each cup a little swirl before handing it to the customer.

While a robot in Austria approaching from the side and a robot in San Francisco swirling a cup of coffee might not seem related, together they point to UX principles to keep in mind as public-facing robots become more ubiquitous:

  • A robot should be aware that it is a robot and work to gain the trust of an untrusting public (evidenced by people’s preference for robots not to approach head-on and to always remain visible to the user)
  • A robot should be designed with the knowledge that people like to anthropomorphize objects (evidenced by people preferring the coffee-serving robot to do the same things a barista might do, even when the robot doesn’t necessarily need to)

As with all design principles, these are likely to evolve. Once robots become more ubiquitous in our lives and people become accustomed to seeing them everywhere, different preferences for how humans and robots interact may become the norm. This may already be the case in Japan, where robots have been working in public-facing roles for several years. While anthropomorphic robots are still the dominant type of bot in Japan, there is now a hotel in Tokyo staffed entirely by dinosaur robots. The future is now, and it is a weird and wild place.

What are your thoughts on all of this? Comment below and let’s get a dialogue started!

This blog post is part four of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed the Singularity and the associated UX design challenges; Part 2, which provided an overview of focus areas for AI to be successful; and Part 3, which dug further into the concept of context in AI.

The critical component missing from AI technology

BOLD INSIGHT

The first step when developing AI is to understand the user need; just as critical, though, is knowing the context in which the data is being collected.

The bold future of UX: How new tech will shape the industry

Part 3  The critical component missing from AI technology

In our last post on artificial intelligence (AI), we discussed the three pillars AI needs to be successful: context, interaction, and trust. In this post, we will dive deeper into the idea of context.

It’s no secret that AI is a hot topic in virtually every industry: how to apply it, how it will advance the industry, how it will improve the experience for the customer. It was a major topic at the 2018 Consumer Electronics Show (CES), and articles that either extol the virtues of AI or predict that it will be humankind’s downfall appear in the popular press on a regular basis. It’s clear that while the opportunities are seemingly endless, there is a critical component missing from much of the AI technology out there: In what context is the data (that allows AI to learn) being collected?

Don’t build it just because you can

When we think of the buzz around AI, we must pause to ensure we are “not building AI just because we can.” While the opportunity for efficiency is great, people will hear this and immediately fear for their jobs. But successful manufacturing companies know that the key is striking the right balance between robots and people, and the first step is to understand what user need the robots address.

What’s missing, and what is currently doing a disservice to AI, is context. Around Valentine’s Day, a story came out in which an AI was asked to come up with new Valentine’s Day candy heart messages. Without context, it produced quite a few messages that would confuse (and possibly anger) anyone who received them. (I know I wouldn’t want to receive a heart that said “Sweat Poo” or “Stank love.”)

When we build AI tech, there are three stages where context must be considered:

  1. Before it’s built: Beyond uncovering the user need the tech will address, we must make sure that the context in which it will be used feeds into the AI development process. This ensures we collect the right data.
  2. During data collection: When the data goes in, it must carry its context with it. For example, data on behavior collected in a car means something different than the same behavior observed in a bedroom or kitchen (see the sketch after this list for one way that context might be captured).
  3. Using the collected data: Currently, AI is a ‘black box’ – you throw in data and see what comes out. But the user must ultimately use AI to do something. If we take a user-centered design approach to how the insight might be used, that is when we will really see how powerful AI can be.
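
To make stage 2 a bit more concrete, below is a minimal, hypothetical sketch (in Python) of what “data with context” could look like. The Observation structure, the environment field, and the to_training_example helper are illustrative assumptions rather than any specific product’s pipeline; the point is simply that the raw behavior is stored together with the context a model would later need to interpret it.

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical example: the same raw behavior means something different in a
    # car than in a kitchen, so each observation carries the context it was
    # collected in.

    @dataclass
    class Observation:
        behavior: str       # what the user did or said
        environment: str    # where it happened (car, kitchen, bedroom, ...)
        timestamp: datetime = field(default_factory=datetime.now)

    def to_training_example(obs: Observation) -> dict:
        """Package an observation so its context travels with the raw data."""
        return {
            "input": obs.behavior,
            "context": {
                "environment": obs.environment,
                "time": obs.timestamp.isoformat(),
            },
        }

    # The same utterance in two different contexts yields two different examples.
    in_car = Observation("user says 'it's cold'", environment="car")
    in_kitchen = Observation("user says 'it's cold'", environment="kitchen")

    for obs in (in_car, in_kitchen):
        print(to_training_example(obs))

However the details look in a real system, the design choice is the same: context is captured at the moment of collection rather than guessed at later.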

The potential for AI is astounding, and it will likely be one of the defining technologies of the 21st century. However, AI will only be as good as the data and information we feed it. By providing AI with the proper context, we help ensure that it delivers on its promise of simplifying life for end users.

What are your thoughts on the idea of context in AI? Start the discussion by leaving a comment below!

The next post in our future tech blog series will move from software to hardware with a discussion around robotics.

This blog post is part three of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed the Singularity and the associated UX design challenges, and Part 2, which provided an overview of focus areas for AI to be successful.

Three things to improve acceptance of AI

BOLD INSIGHT

To truly deliver on the promise of AI, developers need to keep end users in mind. By integrating the three components of context, interaction, and trust, AI can become the runaway success that futurists predict it will be.

The bold future of UX: How new tech will shape the industry

Part 2  Three things to improve acceptance of AI

Artificial intelligence (AI) is one of the hottest topics in tech right now. Conversations around AI inevitably lead either to dreams of a world where a computer predicts every need one might have or to the impending doom of humanity through a Skynet / Ultron / WarGames-type scenario.

As entertaining as that discussion might be, I’m going to focus instead on what AI needs to do to become more functional and more accepted by society (that is, by users). As it stands now, technology (including some of the advances in AI) seems to be advancing simply because developers want to see if they can build it. What my colleagues and I want to see, as user experience (UX) professionals, is meaningful advancement in AI that delivers functionality useful to users.

To meet this goal, I sat down with my colleague Gavin Lew, who has recently been talking a lot about AI, to identify three things that AI needs to be successful:

  • Context – At its core, AI is based on pattern recognition. Once AI learns a pattern, it can make predictions about outcomes of similar patterns. However, while we’re giving AI the raw data it needs to recognize patterns, we’re not giving it the context in which to make good decisions. In our view, we are doing AI a disservice by not giving it the proper context.
    • An example of this is IBM Watson Health. IBM Watson for Oncology was fed data from the Sloan Kettering Cancer Center and then suggested treatments for various cancer types all over the world. In India, it suggested the correct treatment for lung cancer over 96% of the time. In South Korea, however, it was correct only 49% of the time when suggesting treatments for gastric cancer. Why? Because South Korea’s treatments for gastric cancer aren’t in line with Sloan Kettering’s recommended treatments. In other words, Watson lacked the context needed to suggest the right treatment approach.
  • Interaction – Our understanding of how users interact with AI is still developing; how someone is supposed to “use” AI remains largely unknown. Is “use” even the right term when it comes to AI? Once fully realized, a complex AI system will tie together the systems of a home, car, office, appliances, and personal tech gadgets, all talking to each other and exchanging information without the user having to actively do anything. The user is seemingly not doing anything to use the AI, while the system itself passes and parses data behind the scenes.
    • Think ahead to a future where you have your own personal AI. Our interactions with it may consist of nothing more than an offhand comment; we would essentially be interacting with the AI without knowing we’re doing so. For example, when I’m making breakfast and mutter to myself, “Almost out of milk,” a strong AI will know to remind me at an appropriate time to buy milk. Or maybe it will take the initiative and order a gallon of milk from the automated grocery service in my area, timed to arrive when I get home from work. Or maybe I don’t even need to say that I’m out of milk for the AI to act; perhaps finishing the gallon is my passive interaction, and the AI figures out the next logical step by ordering automatically.
  • Trust – Trust in AI has been a recent topic of discussion in the tech sphere. For people to want to use AI on a regular basis, they need to trust it. The early, buggy interactions people had with Siri scared them away from voice assistants, to the point that most never even tried Microsoft’s Cortana. A new form factor (i.e., Alexa) finally encouraged people to give voice assistants (read: AI) a second chance, and it was more widely accepted and used.
    • But why? Because of trust. Trust is created when a question is asked and the right answer is given, when a task is assigned and correctly performed, when a purchase is made and the correct product is bought, and, perhaps most importantly, when personal information is kept safe.

Once AI has the three components of context, interaction, and trust in place, it will be much easier for it to hit the mainstream and become the runaway success that futurists predict it will be. And even if these three pillars are never fully realized, to truly deliver on the promise of AI, the developers of AI systems need to keep end users in mind, since the AI is ultimately being created to benefit them.

What are your thoughts on all of this? Comment below and let’s get a dialogue started!

This blog post is part two of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed the Singularity and the associated UX design challenges.
