
What medtech design teams need to know about AI regulations

June 27, 2023

This year, artificial intelligence (AI) is the tech on everyone’s minds. Consumers are fascinated with generative AI tools like ChatGPT and Midjourney. And manufacturers are racing to build AI-powered products across virtually every industry – including the medtech space.

But around the world, new regulations are setting up guardrails for AI use, from mandatory informed consent to regular health and safety risk assessments. Take note: these regulations are poised to transform the way manufacturers design digital products.

Here, I’ll explain what these regulations mean for medtech design – and how human factors research teams can help manufacturers build ethical products that establish trust.


China and the European Union are establishing ethical standards for AI

Most governments worldwide lack comprehensive standards for ethical AI use. But regulatory bodies in China and the European Union (EU) are establishing frameworks to evaluate the way AI collects user data, interacts with users, and outputs recommendations.

The reason? Without clear and documented AI ethics standards, a medtech manufacturer could release AI-powered physician copilot software that outputs inaccurate care recommendations. Or a manufacturer might sell patients’ fitness wearable data (logged in an AI-powered companion app) without their knowledge and express consent. And these are only a few ways that unregulated AI use can run amok.

The bottom line: to protect users’ rights to privacy and safety, manufacturers must design AI solutions under common ethics rules. Currently, China and the EU are building a model that other countries may soon follow.

In March 2022, China adopted new regulations for “algorithmic recommendation services.” In the medtech space, these might include, say, a diabetes management app that analyzes user-logged health data to recommend dietary changes. Under these guidelines, companies must:

  • Clearly inform users about how AI algorithms are handling their data (e.g., to push recommendations or support future software development)
  • Provide an opt-out feature for all algorithmic profiling (one approach is sketched in code below)
  • Regularly review their AI’s data privacy controls
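
To make the opt-out requirement concrete, here's one pattern a design team might sketch for, say, that diabetes management app: gate all algorithmic profiling behind explicit, revocable consent. Everything below is hypothetical – the ConsentRecord class, its field names, and the recommend_dietary_changes stub are illustrations under our assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Tracks one user's consent to algorithmic profiling (hypothetical model)."""
    user_id: str
    profiling_allowed: bool = False        # default off: profiling is opt-in
    disclosure_version: str | None = None  # which data-use notice the user saw
    updated_at: datetime | None = None

    def opt_in(self, disclosure_version: str) -> None:
        # Record which disclosure the user agreed to, and when.
        self.profiling_allowed = True
        self.disclosure_version = disclosure_version
        self.updated_at = datetime.now(timezone.utc)

    def opt_out(self) -> None:
        # Opting out must always be available and take effect immediately.
        self.profiling_allowed = False
        self.updated_at = datetime.now(timezone.utc)

def recommend_dietary_changes(consent: ConsentRecord, logged_meals: list[dict]) -> list[str]:
    """Run the recommendation algorithm only for users who have opted in."""
    if not consent.profiling_allowed:
        return []  # fall back to non-personalized content instead of profiling
    # ...the recommendation algorithm would analyze logged_meals here...
    return ["example recommendation based on logged meals"]
```

The design choices worth noting: profiling defaults to off, the consent record captures which version of the disclosure the user actually saw, and opting out takes effect on the very next request.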

Now, it’s likely these regulations will be further refined in 2023 to target the generative AI space. According to recent draft measures, all generative AI tools will need to undergo a security assessment before becoming eligible for public use. (It’s currently unclear what that assessment will entail, but regulators may share more details in the coming months.)

Similarly, in the European Union, legislators are reviewing a proposed Artificial Intelligence Act. Projected to pass later this year, the legislation is designed to evaluate a given AI system’s impact on individuals’ health, safety, and fundamental rights. To this end, the act will classify AI systems on a four-tier risk scale:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk

AI for medical use (e.g., a care recommendation algorithm for physicians) is likely to fall in the “high risk” category. As a result, manufacturers should expect to complete regular data privacy risk assessments and log the way they handle user data. (As with China’s draft measures, we don’t yet know the specific content of these risk assessments. But we should learn more in due time.)

If you’re designing medtech intended for global adoption, it’s important to understand how these regulations will shape your approach. Let’s look at two of the biggest hurdles design teams will need to overcome in this new regulatory environment.


The first hurdle: meet baseline regulatory demands

While China’s and the EU’s respective AI regulations aren’t identical, they’re both meant to promote transparency and ethics for AI-powered products.

To meet these baseline requirements, we suggest that medtech design teams conduct a three-part analysis for each digital product. Make sure you can answer these questions – a simple template for capturing the answers follows the list:

  1. What’s the intended use of your product? You’re likely familiar with this concept. But for current and future AI regulations, you’ll need to clearly identify what your product is (e.g., an AI-powered companion app for diabetes management devices), who it’s intended for (e.g., diabetes patients, healthcare providers, and caregivers), and its expected use environments (e.g., at home on a mobile device and in a clinic).
  2. What controls limit your product’s intended scope of use? Be sure to clarify who can access your product and how you’ll ensure appropriate use. For instance, you might make your software available by prescription only and require patients to undergo training by a physician before they can use your software.
  3. What are you doing with the data it collects? Be specific about what data your product collects, where it’s stored, how it’s being used, and if and when it’s disposed of. Make sure to provide each user group with these details before they begin using your product, and provide an opt-out feature for users who don’t want to make their data available.
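
One lightweight way to capture this three-part analysis is as a structured record that lives alongside your product documentation. The sketch below is a hypothetical template – the ProductAnalysis and DataPractice classes and their field names are our invention, not a regulator’s requirement – but it shows the kind of answers worth pinning down in writing.

```python
from dataclasses import dataclass, field

@dataclass
class DataPractice:
    """One category of data the product collects, and what happens to it."""
    data_type: str        # e.g., "user-logged meals and glucose readings"
    purpose: str          # e.g., "push dietary recommendations"
    storage: str          # e.g., "encrypted at rest"
    retention: str        # e.g., "deleted on account closure"
    opt_out_available: bool

@dataclass
class ProductAnalysis:
    # 1. Intended use
    product: str                 # what the product is
    intended_users: list[str]    # who it's for
    use_environments: list[str]  # where it will be used
    # 2. Controls that limit the scope of use
    access_controls: list[str]
    # 3. Data handling
    data_practices: list[DataPractice] = field(default_factory=list)

analysis = ProductAnalysis(
    product="AI-powered companion app for diabetes management devices",
    intended_users=["diabetes patients", "healthcare providers", "caregivers"],
    use_environments=["at home on a mobile device", "in a clinic"],
    access_controls=["available by prescription only",
                     "physician-led training before first use"],
    data_practices=[
        DataPractice(
            data_type="user-logged meals and glucose readings",
            purpose="push dietary recommendations",
            storage="encrypted at rest",
            retention="deleted on account closure",
            opt_out_available=True,
        )
    ],
)
```

However you format it, the point is the same: write the answers down, keep them current, and be ready to share them with users – and, potentially, regulators.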

Alongside this analysis, it’s a smart idea to document your software code’s source, scope, and function. While regulations aren’t currently projected to require that you submit this information for review, it might prove necessary in the future.


The second hurdle: build user trust in AI-powered products

China’s and the EU’s regulations are a critical starting point for AI ethics. They also point to a bigger matter: a trust problem with AI-powered products.

The reality is that many people don’t trust new technology, much less AI systems. Even with rules in place to encourage ethical design, digital products will struggle to reach users unless design teams address the trust gap. And for medtech design teams, trust is key to driving software adoption that can transform clinical workflows and positively impact healthcare outcomes.

To foster that trust, design teams need to conduct research that accurately evaluates users’ confidence in AI-powered tech – then take steps to close the gap.

At Bold Insight, our human factors researchers help design teams measure:

  • How much users trust technology. Maybe a patient’s been the victim of a large-scale data breach. Or perhaps they never learned how to operate a smartphone. If users don’t trust technology, they won’t trust your AI-powered product. In cases like these, human factors researchers interview users to understand their familiarity and comfort level with technology. From those insights, they can then recommend ways to account for varying trust levels in your product design.
  • How much users trust AI. The current AI buzz is exciting – but many people are wary of AI’s potential to deliver false information, or even to exceed human capabilities (including, potentially, our ability to control it). With early exploratory user interviews, our researchers can help teams gauge users’ AI skepticism before a product goes to market and identify ways to address those concerns through UX design.
  • How each user group’s demographics impact trust perceptions. Age commonly impacts perceptions of trust: younger people, for instance, are typically more comfortable with new technology than older folks. But demographic factors like race and ethnicity also play a role. When designing a diabetes management app, for example, teams should understand the extent of medical mistrust among Black Americans (who disproportionately suffer from diabetes).

A thoughtfully designed study accounts for demographics and ensures that a product is tested consistently with each target user group to accurately gauge perceptions and drivers of trust.
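
What might that look like in practice? As a purely hypothetical illustration, imagine each participant rates their trust in your product on a 1-to-7 scale after a usability session. A few lines of analysis can show where trust concentrates – and where it craters – across user groups and demographics. (The column names and ratings below are invented for the example.)

```python
import pandas as pd

# Hypothetical post-session survey data: one row per participant.
ratings = pd.DataFrame({
    "user_group":  ["patient", "patient", "clinician", "clinician", "caregiver", "caregiver"],
    "age_band":    ["18-39", "65+", "18-39", "40-64", "40-64", "65+"],
    "trust_score": [6, 3, 5, 5, 4, 2],  # 1 = no trust at all, 7 = complete trust
})

# Mean trust by target user group: are some groups notably more skeptical?
print(ratings.groupby("user_group")["trust_score"].agg(["mean", "count"]))

# Cross user group with demographics to see where the trust gap concentrates.
print(ratings.groupby(["user_group", "age_band"])["trust_score"].mean())
```

A real study would use validated trust scales and far larger samples, but even this toy cut makes the takeaway visible: an average score can hide a sharp trust gap within a single user group.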


Human factors research is the key to ethical, trust-centered AI

The future of medical AI depends on an ethical, trust-centered design environment, both to meet emerging regulations in global markets and to earn users’ trust. The right human factors research partner can guide your product development and trust-building efforts – and Bold Insight has the industry experience to help.

If you’re interested in starting a conversation, reach out – we’d love to hear from you.