When it comes to voice print UX, what is our role as researchers?

During a recent study, we asked participants how long they thought they would have to speak in order for their voice to be uniquely recognized (i.e., to create a voice print). While their estimates varied widely – from 30 seconds to 30 minutes – most people said about three to five minutes.

The reality is that an accurate voice print can be captured in as little as five seconds if the correct phrases are spoken. So the question is: as researchers, should we recommend designing for the length people think and feel is correct, so they feel secure? Or should we collect only the minimum?
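
To make that tension concrete, here is a minimal sketch of what an enrollment check might look like. It is purely illustrative – the thresholds and phrase logic are assumptions for the example, not any vendor’s actual API:

    # Hypothetical enrollment check. The five-second technical minimum comes
    # from the discussion above; the phrase set and function are assumptions.
    MIN_SPEECH_SECONDS = 5.0          # what the technology needs
    PERCEIVED_SECURE_SECONDS = 180.0  # what most participants felt was needed

    def enrollment_sufficient(speech_seconds: float,
                              phrases_spoken: set,
                              required_phrases: set) -> bool:
        """True once enough of the right audio exists to build a voice print."""
        return (speech_seconds >= MIN_SPEECH_SECONDS
                and required_phrases <= phrases_spoken)

    # The design question: stop here, or keep recording until users *feel* secure?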

While I understand why we might do what is comforting for users, I think our job is to convince people that voice biometrics are secure even with the minimum amount of effort required. What do you think? Join the discussion!

Check out some of our blogs about voice!

Singularity and the potential impact on UX design principles

BOLD INSIGHT

If we are approaching a rapid technology shift as some experts predict, core UX design principles will have to be redefined to adapt to radically different interaction models.

The bold future of UX: How new tech will shape the industry

Part 1: Singularity and the potential impact on UX design principles

The times they are a-changin’. I know it’s a corny, overused refrain but I don’t think that it has ever been truer. Technology, as well as its impact on society, is advancing at a rapid pace, and that pace is only expected to accelerate.

Futurologist Ray Kurzweil believes that we are fast approaching a point where the computing power of technology exceeds the computing power of people. This “Singularity”, as it is called, will be fueled by a variety of emerging technologies, including artificial intelligence (AI), robotics, and nanotechnology, to name a few.

Once this Singularity hits, Kurzweil and other similarly-minded theorists believe that life will be unrecognizable compared to what we know today. He says describing this post-Singularity society to someone today would be as difficult as describing to a caveman how different life would be with bronze tools and agriculture.

Bringing us back to the present, how does this relate to UX?

My thoughts around this weird, unknowable world of the future have started to stray toward design. Let’s think about AI and (by extension) robots. These two technologies have the potential to completely flip the paradigm of usability and user experience. The user should not have to learn how to use AI. AI is supposed to be the one learning: learning our habits and routines, and learning what actions it should take in response to what’s happening around it. In UX research terminology, the user has become the stimulus and the stimulus has become the user. That is, the human is now the stimulus that the technology learns to react and respond to.

But if you buy into the whole notion of Kurzweil’s Singularity, how do you design for a future that is (predicted to be) wildly different from anything we’ve ever known or could fathom? Can a UX designer still implement traditional usability principles, such as effectiveness, efficiency, and satisfaction, or will these principles become relics, left by the wayside as radically different interaction models emerge?

I’m going to tackle some of these questions in future posts in this series. Next topic: Artificial Intelligence!

What are your thoughts on all of this? Comment below and let’s get a dialogue started!

 

This blog post is the first of a series, The bold future of UX: How new tech will shape the industry, which will discuss future technologies and some of the issues and challenges that will face users and the UX community.

Am I satisfied or stuck? The impact of ecosystems on household users

BOLD INSIGHT

Manufacturers building an ecosystem of devices and services should design for both a separate, personalized experience and a household or shared experience.

The idea of connected devices and a connected home fascinates me – I’m all for anything that makes my life more convenient! I have Alexa in pretty much every room of my house; she’s even in my car. However, as I expand my connected home network, I have struggled with setting up additional devices and services. Powering them on and linking accounts is generally simple; the hard part is getting everything to work together.

In the case of Amazon devices (e.g., Echo) and services (e.g., Music Unlimited), if you are single or start from one family/shared email address, the connected home ecosystem is pretty simple. You have one account tied to all devices, Prime, and streaming products and services. However, once you introduce one or more additional family members, things get much more complicated.

In my case, my husband and I each had our own Amazon account when we met. Even when we got married, it didn’t make sense to share an account because we liked having personalized recommendations and keeping our purchase histories separate. Some years later, I stumbled across Amazon Household, which lets you tie separate accounts together so you can share Prime benefits. After linking our accounts, I thought we’d truly have a “household” account that would allow us to share all services and content. Unfortunately, you can’t share everything (e.g., purchased video content and certain subscriptions).

Fast forward to my first Echo devices – I was so excited to set them up and try them out! But when I tested out the List functionality (‘Alexa, add milk to the shopping list’), nothing showed up in my app. Why wasn’t this working?! After trying different things (and a little cursing), I realized that I had set up the devices with my husband’s Amazon account, since they were gifts for him, and therefore I had to sign in to the Alexa app through his account, not mine. With Amazon Household, I didn’t think it would matter which account the Echos were tied to, but it does.

What is technically easy to set up actually requires a high cognitive load each time I set up a device or access content, because I have to remember which account I used for what (a toy sketch of this bookkeeping follows the list). I currently have:

  • Amazon Prime account with my email address, which is linked to my husband’s Amazon account (with his email address) so he can get Prime
  • Alexa app on my phone but signed in using my husband’s Amazon account for Echo devices and lists
  • Amazon Music Unlimited account signed in using my husband’s email address
  • Roav VIVA Alexa-enabled device in my car that requires me to sign into my Amazon app with my husband’s email address to get access to Music Unlimited, but to shop and see my recommendations, I must sign back into the Amazon app with my email
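
The sketch below models that mental lookup. It is purely illustrative – the mapping mirrors my list above, and the structure is nothing like Amazon’s actual data model:

    # Which email do I sign in with for each service? (Illustrative only.)
    SERVICE_TO_ACCOUNT = {
        "prime_benefits": "my_email",
        "alexa_app_and_lists": "husbands_email",
        "music_unlimited": "husbands_email",
        "roav_viva_music": "husbands_email",
        "shopping_and_recommendations": "my_email",
    }

    def which_account(service: str) -> str:
        """The lookup I have to do in my head every time."""
        return SERVICE_TO_ACCOUNT[service]

    print(which_account("music_unlimited"))  # -> husbands_email

Five services, two identities, and no rule that predicts the mapping – that is the cognitive load in a nutshell.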

One could argue that I should have been more intentional when setting up all these devices and services. But in the moment, I was so excited to get everything working that which account to use was the last thing on my mind. I’ve questioned whether I should suck it up and start all over with a family account. But what would I gain? Possibly an easier setup process going forward and one account for everything, but lots of effort up front to reset everything. And what would I lose? Personalized recommendations, purchase privacy, and time!

Netflix and Hulu have overcome this multi-account hurdle with their ‘profile’ platforms, which generate separate watch lists and recommendations. Admittedly, they are much simpler systems with limited components.

There are huge benefits to having an ecosystem of devices and services in a home, whether it’s Amazon, Google, Apple, etc. The consumer (generally) benefits from a seamless integration experience and a similar interface or set of commands across multiple devices. For the manufacturer, having its ecosystem in a home means more loyal customers, since it can be difficult or impractical for consumers to try new devices when the home is entrenched in one ecosystem.

Many connected device manufacturers have created a great set-up-and-use experience with plug-and-play devices and simple mobile apps. However, manufacturers should think beyond the experience of a single user. Consider how a couple or family would set up, purchase, use, and add to the ecosystem. Consider couples who come with individual personal accounts and those who create a family account together. Also consider early adopters who have tied accounts to early versions of the system – ensure there is support to improve their experience as devices or new features are added. Some questions to ask include:

  • What content would users want to keep separate: purchase history, recommendations, watch/wish lists?
  • What content would users expect to share: purchased content, services?
  • Can established individual accounts be tied together to form a true “household” account? (One possible model is sketched below.)
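
As a thought experiment, a “true household” account might look something like the sketch below. It assumes per-member privacy for purchase history and recommendations, with household-level sharing for purchased content – all names and structure here are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class MemberAccount:
        email: str
        purchase_history: list = field(default_factory=list)  # stays private
        recommendations: list = field(default_factory=list)   # stays private

    @dataclass
    class Household:
        members: list                                     # established accounts, linked intact
        shared_content: set = field(default_factory=set)  # purchased video, subscriptions

        def purchase(self, member: MemberAccount, item: str, shareable: bool) -> None:
            member.purchase_history.append(item)  # private record for the buyer
            if shareable:
                self.shared_content.add(item)     # visible to every member

In a model like this, it would not matter which member’s account an Echo was tied to – shared content would resolve at the household level.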

Ultimately, as the foothold of any ecosystem gets stronger, the user can either feel satisfied and happy or stuck and frustrated. And that feeling (satisfied or stuck) becomes associated with the brand.

Bold Insight team presents on voice interface design and artificial intelligence at UX Masterclass Milan

During the 13th installment of the UX Masterclass, an annual conference that brings together user experience and digital innovation experts, Managing Directors Bob Schumacher and Gavin Lew will share insights on designing the latest technology and its impact on the user experience (UX) industry. A full-day event held on March 22nd in Milan, this year’s conference takes the theme Beyond the Screen, highlighting the challenges that design and UX professionals face in a world with increasingly complex services and more conversational, multi-channel interactions.

Schumacher’s keynote, Voice user interface (UI): Forget everything you know about UI Design, will explore some of the ways that designers need to think differently about voice to deliver successful user experiences. Designing for the screen is inherently more defined and constrained than designing for voice interaction; in his talk, Schumacher will highlight considerations for organizations as they transition to ‘Voice First’ from ‘Mobile First’ design.

Bringing a focus on artificial intelligence (AI) & UX, Lew will discuss core elements from a book he is co-authoring in which the future of AI is explored through interviews with AI experts. He will illustrate successes and failures of AI through case studies and present a UX framework to pave the way for future success.

The UX Masterclass is hosted by members of UXalliance, a network of 25 leading UX companies around the world. For more information about the event, visit http://2018.uxmasterclass.com/.

 

About Bold Insight

Bold Insight helps clients deliver outstanding experiences to their customers by understanding user expectations and designing products that seamlessly fit into their lives. The team has conducted research on hundreds of products and services delivered on a variety of platforms, including websites, software, mobile devices, medical devices, voice assistants, connected devices, and in-car navigation systems. Email hello@boldinsight.com to discuss your next project.

Designing for Voice: The next phase of UX design

BOLD INSIGHT

Good design involves understanding and incorporating core UX elements (people, environment, and tasks), but frictionless voice UI design needs more: UX designers need to understand the idiosyncrasies of voice as a medium. What we learn, and how those insights are translated into a voice interface design, is a new challenge for UX design.

The next wave of user experience (UX) has arrived: ambient computing. Enter a room and tell Alexa to turn on the light; ask Google Home what your meetings are for the day. No keys, no clicks, just voice. Given the Alexa v. Google Home smackdown at the Consumer Electronics Show (CES) this year, it occurred to me that the innovation this year wasn’t the tech – it was the interaction. That is, the real innovation at this moment is how we will interact with our devices. While the show floor of CES tends to be filled with a lot of hardware, it was clear to me that UX design has moved from early web to mobile to the next stage: voice and voice computing.

Companies are going to fail at the first attempt at voice

In some respects, “voice-first” is overtaking “mobile-first”. The reality for many organizations is that voice is a viable channel through which they can deliver their services. Right now, entertainment is leading the voice space, but with the launch of Alexa for Business and voice assistants integrating into many environments, we are seeing diffusion across technologies. The catch, however, is that many organizations’ first attempts at voice will fail for the same reason the early ‘jumps’ to mobile failed: the temptation to simply port from one channel or medium to another. Failures will happen because users don’t have the same expectations for a voice user interface (UI) that they do for screens. Voice interaction (i.e., dialog) is something users are already experts at, so they are not going to blame themselves if they can’t do something; they will blame the interface. Voice interaction is also about what people can remember and produce accurately, rather than what they can recognize and control on a screen. The challenge for organizations is to design a natural-feeling voice interaction that adapts with the customer.

Considerations when designing for voice

We find that while natural language understanding has gotten remarkably better, people talk to machines differently than they do to other people. Users treat voice assistants as command-driven systems (e.g., “Turn on the light”), issuing imperative statements. With command-driven interaction, the design is a linear path. However, most human dialogs are not like that; they are give and take – turn-taking with a set of implicit rules that machines do not know.

There is also ambiguity and variability in dialogs. With a drop-down list, the items are pretty much the same every time. But if Alexa responded with the same phrases every time, it would feel very artificial – you would know you’re talking to a machine. For instance, when I ask Alexa to add something to the shopping list, sometimes she says, “Carrots added to your shopping list” and sometimes she says, “I’ve put carrots on your shopping list”. These little variances are more indicative of human response and make for a richer interaction.

This is where an understanding of how dialogs work is critical. We must retain information from prior parts of the conversation, know the context (in linguistic terms, ‘pragmatics’), and know the goal of the dialog. The whole point is to design the voice interaction so it is frictionless. Back to the notion of ambient computing – I don’t need glasses, or even light or fingers, to interact. I should be able to just ask Alexa to turn up the thermostat and it’s done. But, of course, it’s not that easy – it requires a lot of hard work to design the voice UI.
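
As an illustration of that variability, here is a minimal sketch of varied confirmations with retained dialog context. The template pool and context handling are assumptions for the example, not Alexa’s actual logic:

    import random

    # A pool of templates for one intent keeps the dialog from feeling canned.
    CONFIRMATIONS = [
        "{item} added to your shopping list.",
        "I've put {item} on your shopping list.",
        "Okay, {item} is on the list.",
    ]

    def confirm_add(item: str, context: dict) -> str:
        """Vary the phrasing, and remember the item for follow-up turns
        (e.g., 'actually, make that two') - pragmatics in miniature."""
        context["last_item"] = item
        return random.choice(CONFIRMATIONS).format(item=item)

    dialog_context = {}
    print(confirm_add("carrots", dialog_context))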

The next stage in UX design

The current market uptake, and the splash that voice tech made at CES, reflect that technology is ready for the next wave of UX design. Organizations should start simply and answer the question, “What can I build that adds value to customers?” Why do people call the call center? Why do people send emails? Begin to unpack some of the frequent things that customers do, and then design voice services around that.

Then look at where interactions go wrong: design the error conditions, design the input conditions, design better output conditions, decide how you provide feedback, and understand when the quantity of information is sufficient before worrying about its quality. Getting to a frictionless voice UI design is not all that different from building other UIs: it’s about the people, the environment, and the tasks. UX researchers do, however, need to understand the idiosyncrasies of the voice channel. What we learn, and how those insights are translated into a voice interface design, is a new challenge for UX design.
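
To ground the “design the error conditions” point, here is a hypothetical reprompt flow. The parse argument stands in for whatever natural language understanding hook a real system would provide; everything here is a sketch, not a production design:

    MAX_REPROMPTS = 2

    def handle_turn(utterance: str, parse, reprompt_count: int = 0) -> str:
        """One dialog turn: act on a recognized intent, or fail gracefully."""
        intent = parse(utterance)          # assumed NLU hook: intent or None
        if intent is not None:
            return f"Okay, {intent}."      # designed output condition
        if reprompt_count < MAX_REPROMPTS:
            # Designed error condition: feedback plus an example of valid input.
            return "Sorry, I didn't catch that. You can say things like 'add milk'."
        return "Let me get you to a person who can help."  # graceful fallback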
