When building next-gen fintech, start with research in Africa

BOLD INSIGHT

The user experience (UX) of emerging FinTech might be considered superior in Africa compared to the United States. In Africa, the utility of cryptocurrencies has been communicated and made known, effectively improving the UX of this emerging technology.

The bold future of UX: How new tech will shape the industry

Part 5  When building next-gen fintech, start with research in Africa

The finance industry is changing in massive ways as digital technologies advance. For the next installment of our Bold Future of UX blog series, we look at how the FinTech landscape has changed in recent years, where it is headed, and who is leading the way to build a better UX.

The blurring of lines between the finance and tech worlds has given us the ability to pay for goods and services using banking apps on our phones and watches, or entirely bypass the bank using Apple Pay, Samsung Pay, or Google Pay. This is all made possible by the digitization of currency.

I know I personally have very little physical interaction with my cash. Between direct deposit, debit cards, online bill-pay, and peer-to-peer payment apps like Venmo, the old mantra of “Cash is king” is slowly starting to fade away.

Bold Insight Managing Director Gavin Lew recently spoke at the Money 20/20 conference, where he discussed FinTech in Africa, a place where the adage “Cash is king” has long been considered outdated. Physical cash has become impractical in many parts of Africa. Local currency, for example, can be tricky for the average person to deal with – Zimbabwe famously unveiled a 100-trillion-dollar bill due to the rampant hyperinflation plaguing the country.

And while most African nations don’t experience a currency crisis of quite that magnitude, fluctuations in the value of local currencies are commonplace across the continent. These fluctuations are so unpredictable that carrying a physical wallet is almost a foreign concept, since a meal at McDonald’s or a coffee at Starbucks might require anywhere from a briefcase to a suitcase full of cash. Even bags full of the aforementioned 100-trillion-dollar bills are currently not enough to buy groceries in Zimbabwe.

Enter FinTech. Specifically, cryptocurrencies and digital ledger systems.

Learning from Africa about implementing a usable and useful revolutionary currency system

In the US, day-to-day use of cryptocurrencies such as Bitcoin is limited and still feels like it is in its infancy. There are online guides showing where you can pay for goods with Bitcoin, but if you have to search for a guide on where to use your money, that should be taken as a sign it hasn’t quite hit the mainstream yet. This is due partly to fluctuations in the value of Bitcoin and partly to the somewhat confusing nature of cryptocurrencies and blockchain (how do I store it, how do I spend it, how do I buy or create it?). In terms of utility, we’re now in a space akin to the early days of Apple Pay, when people were ready to use their new Apple Watches but the infrastructure at the point-of-sale terminals wasn’t yet in place. Except in this case, not only do sellers lack the ability to accept cryptocurrency, but customers are also not equipped (and likely don’t have the desire) to make a cryptocurrency payment.

While in the US these limitations might be enough to scare away the average potential user, in parts of Africa, they’re non-issues. Fluctuations in the value of Bitcoin don’t scare people in certain African nations since the national currency fluctuates on a regular basis. It’s also easier to obtain and spend Bitcoin in Africa; South Africa, for example, is set to expand its infrastructure of Bitcoin ATMs and POS systems.

The user experience of emerging FinTech might be considered superior in Africa compared to the United States. While in the US we are still largely debating the value, legitimacy, and utility of cryptocurrencies, in Africa they’ve moved past that debate and cryptocurrencies are already being used to buy goods. Not only do users understand Bitcoin and have the necessary tools to make payments with it, but, possibly even more important, the infrastructure for acquiring Bitcoin and exchanging payments is in place. The utility of cryptocurrencies has been communicated and made known, effectively improving the user experience of this emerging technology.

It’s clear that the future of currency is digital, whether it’s dollars and cents, Bitcoin, Ripple, Ethereum, or one of the many copycat currencies (Litecoin, Dogecoin, Garlicoin, etc.). Whatever the currency of the future is, there will be a need to make it easy to store and easy to spend. Perhaps the real innovations will come from retailers. Maybe Amazon is on to something with its brick-and-mortar stores where you pay by just walking out the front door…

What are your thoughts on all of this? Comment below and let’s get a dialogue started!

This blog post is part five of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed Singularity and the associated challenges with UX design; Part 2, which provided an overview of focus areas for AI to be successful; Part 3, which dug further into the concept of context in AI; and Part 4, which proposed UX design principles for robot design.

Bold Insight’s Gavin Lew to present at Money 20/20

The premier conference in the payments, fintech, and financial services industries, Money 20/20 hosts over 11,000 attendees and will be held in Las Vegas, Nevada, on October 21-24, 2018. Bold Insight Managing Director Gavin Lew is teaming up with Visa’s Head of Design, Kevin Lee, to present “Humanizing the Experience in Retail.” Lee will share how Visa approaches experience design and the key moments that reveal opportunities for brands. Lew will dive into those key moments with real-world examples and show how to make those opportunities successful.

“Retail is at an inflection point where companies need to embrace the combination of digital and physical to design a more humanized experience—one that recognizes humility because the future will depend on collaboration across the ecosystem. This is where disruption will occur,” says Lew.

Visit https://us.money2020.com/ to learn more about Money 20/20, including registration details. Don’t forget to use the coupon code DISRUPT to save $250.

About Bold Insight

Bold Insight helps clients deliver outstanding experiences to their customers by understanding user expectations and designing products that seamlessly fit into their lives. The team has conducted research on hundreds of products and services delivered on a variety of platforms, including websites, software, mobile devices, medical devices, voice assistants, connected devices, and in-car navigation systems. Email hello@boldinsight.com to discuss your next project.

UX principles for robot design: Have we begun to baseline?

BOLD INSIGHT

As the robotics industry continues to find its way into our lives, we can begin to identify UX design principles to apply to this tech to increase the acceptance of robots and improve the human-robot interaction experience.

The bold future of UX: How new tech will shape the industry

Part 4  UX principles for robot design: Have we begun to baseline?

In a previous post, I discussed the challenges of designing a user experience for AI and how it needs three components to truly deliver on the promise of the technology: context, interaction, and trust. These three elements allow for a good user experience with an AI. Today, we’re taking AI to a related area: robotics. A robot is essentially an AI that has been given a corporeal form. But the addition of a physical form, whether or not it’s vaguely humanoid, creates further challenges. How do users properly interact with a fully autonomous mechanical being? And since this being can, by definition, act on its own, the flipside of that question is just as important: how does a robot interact with the user?

Before we dive into these questions, let’s all get on the same page about what a robot is. A ‘robot’ must be able to perform tasks automatically based on stimulus from either the surrounding environment or another agent (e.g., a person, a pet, another robot). When people think of robots, they often think of something like Honda’s ASIMO or its more recent line of 3E robots. The definition also includes less conventional robots, such as autonomous vehicles and machines that can perform surgery.

A research team at the University of Salzburg has done extensive research on human-robot interaction by testing a human-sized robot in public in various situations. One finding I found particularly interesting is that people prefer robots that approach from the left or right but not head-on.

In San Francisco, a public-facing robot that works at a café knows to double-check how much coffee is left in the coffee machines and gives each cup of coffee a little swirl before handing it to the customer.

While a robot in Austria approaching from the left and a robot in San Francisco swirling a cup of coffee might not seem related, together they point to UX principles that should be kept in mind as public-facing robots become more ubiquitous:

  • A robot should be aware that it is a robot and take efforts to gain the trust of an untrusting public (evidenced by people’s preferences for robots to not approach head-on and to always remain visible to the user)
  • A robot should be designed with the knowledge in mind that people like to anthropomorphize objects (evidenced by people preferring the coffee-serving robot to do the same things a barista might do even if it’s something the robot doesn’t necessarily need to do)

As with all design principles, these are likely to evolve. Once robots become more ubiquitous in our lives and people become accustomed to seeing them everywhere, different preferences for how humans and robots interact may become the norm.

This may already be the case in Japan, where robots have been working in public-facing roles for several years. While anthropomorphic robots are still the dominant type of bot in Japan, there is now a hotel in Tokyo staffed entirely by dinosaur robots. The future is now, and it is a weird and wild place.

What are your thoughts on all of this? Comment below and let’s get a dialogue started!

This blog post is part four of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1, which discussed Singularity and the associated challenges with UX design; Part 2, which provided an overview of focus areas for AI to be successful; and Part 3, which dug further into the concept of context in AI.

AI benefits from GPU, not CPU advancements

A quick follow-up to our blog posts about AI

The name of the game is no longer Moore’s Law, where we see processors getting exponentially faster. AI technology is driven not by the computing processes of the past, but by an evolution beyond central processing unit (CPU) advances to graphics processing unit (GPU)-based processors. The graphics chips used by gamers are now being used for AI because of their massively parallel processing capability. As Talla commented, “We now have the equivalent of a super computer on a single chip. This allows image recognition to make a huge leap forward.”
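The parallelism argument can be sketched in a few lines. The example below is a rough illustration, assuming Python with NumPy as a stand-in: NumPy runs on the CPU, but its single-call, whole-batch operations mimic the data-parallel style that a GPU’s thousands of cores exploit at far larger scale, compared with an element-at-a-time loop.

```python
import time

import numpy as np

# Image-recognition workloads reduce largely to big matrix multiplies.
# A GPU wins because many cores apply the same operation to many data
# elements at once; compare one data-parallel call against a serial loop.

rng = np.random.default_rng(0)
a = rng.random((64, 64), dtype=np.float32)
b = rng.random((64, 64), dtype=np.float32)

def matmul_loops(x, y):
    """One multiply-accumulate at a time: the serial style of a single core."""
    n, k = x.shape
    _, m = y.shape
    out = np.zeros((n, m), dtype=np.float32)
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += x[i, p] * y[p, j]
            out[i, j] = s
    return out

t0 = time.perf_counter()
serial = matmul_loops(a, b)
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
parallel = a @ b  # one call; the whole result computed as a batch
t_parallel = time.perf_counter() - t0

assert np.allclose(serial, parallel, atol=1e-3)
print(f"loop: {t_serial:.4f}s  vectorized: {t_parallel:.6f}s")
```

Even at this toy size, the batched multiply is orders of magnitude faster than the explicit loop; scaling the same idea across thousands of GPU cores is what makes modern image recognition practical.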

Now, with AI, deep learning-based image identification is faster and more ubiquitous. By 2020, NVIDIA estimates, there will be 1 billion cameras deployed for surveillance worldwide. But why do we build them? For public safety, for parking, or for the customer experience at Disneyland? Where do we store this data, and how do we use it?

What do you think? Join the discussion and comment below!

The critical component missing from AI technology

BOLD INSIGHT

The first step when developing AI is to understand the user need; just as critical, though, is knowing the context in which the data is being collected.

The bold future of UX: How new tech will shape the industry

Part 3  The critical component missing from AI technology

In our last post on artificial intelligence (AI), we discussed the three pillars AI needs to be successful: context, interaction, and trust. In this post, we will dive deeper into the idea of context.

It’s no secret that AI is a hot topic in virtually every industry: how to apply it, how it will advance the industry, how it will improve the experience for the customer. It was a major topic at the 2018 Consumer Electronics Show (CES), and articles that either extol the virtues of AI or predict that it will be humankind’s downfall appear in the popular press on a regular basis. It’s clear that while the opportunities are seemingly endless, there is a critical component missing from much of the AI technology out there: In what context is the data (that allows AI to learn) being collected?

Don’t build it just because you can

When we think of the buzz around AI, we must pause to ensure we are “not building AI just because we can.” While the opportunity for efficiency is great, people will hear this statement and immediately fear for their jobs. But successful manufacturing companies know that the key is striking the right balance between robots and people. The first step is to understand what user need is addressed with the robots.

What’s missing, and what is currently doing a disservice to AI, is context. Around Valentine’s Day, a story came out in which an AI was asked to come up with new Valentine’s Day candy heart messages. But without context, it produced quite a few messages that would confuse (and possibly anger) anyone who received them. (I know I wouldn’t want to receive a heart that said “Sweat Poo” or “Stank love.”)

When we build AI tech, there are three stages where context must be considered:

  1. Before it’s built: Beyond uncovering the user need the tech will address, we must make sure that the context in which it will be used is built into the AI process. This will ensure we collect the right data.
  2. During: When the data goes in, it must have context. For example, if you are collecting data on behavior in a car compared to a bedroom or a kitchen, it’s clear that the context matters.
  3. Using the collected data: Currently, AI is a ‘black box’ – you throw in data and see what comes out. But ultimately a user must use AI to do something. If we take a user-centered design approach to how the insight might be used, that is when we will really see how powerful AI can be.
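The second stage above, tagging data with context as it goes in, can be sketched in a few lines. This is a hypothetical illustration, not any particular product’s API; the signal names, contexts, and responses are all invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    signal: str   # what was sensed, e.g. "speech detected"
    context: str  # where it was collected, e.g. "car", "kitchen", "bedroom"

# The same raw signal can call for different behavior in different contexts.
RESPONSES = {
    ("speech detected", "car"): "activate hands-free assistant",
    ("speech detected", "bedroom"): "respond quietly",
    ("motion detected", "kitchen"): "turn on the lights",
}

def respond(obs: Observation) -> str:
    # Unknown (signal, context) pairs fall back to a safe default rather
    # than guessing, which is exactly where context-free AI goes wrong.
    return RESPONSES.get((obs.signal, obs.context), "log and take no action")

print(respond(Observation("speech detected", "car")))      # hands-free mode
print(respond(Observation("speech detected", "bedroom")))  # quieter behavior
```

The point of the sketch is simply that context travels with the data: strip the `context` field away and the two “speech detected” observations become indistinguishable, and the system can only respond one way to both.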

The potential for AI is astounding, and it will likely be one of the defining technologies of the 21st century. However, AI is only going to be as good as the data and information that we feed it. By providing AI with the proper context, we help ensure that it delivers on its promise of simplifying life for end users.

What are your thoughts on the idea of context in AI? Start the discussion by leaving a comment below!

The next post in our future tech blog series will move from software to hardware with a discussion around robotics.

This blog post is part three of a series, The bold future of UX: How new tech will shape the industry, that discusses future technologies and some of the issues and challenges that will face the user and the UX community. Read Part 1 that discussed Singularity and the associated challenges with UX design and Part 2 which provided an overview of focus areas for AI to be successful.
