Understand users’ expectations for balancing information access and privacy when using AI-enabled tools

A global technology company sought to understand users’ privacy and safety thresholds through an international qualitative and quantitative research study.

Challenge

We were asked to determine when, and to what extent, users expect to be shielded from false, misleading, or offensive content when searching for information on any platform. We also explored how various design factors shape users’ expectations for access versus safety, and what role they expect our client to play in protecting them from harmful content.

Approach

We conducted 30,000 quantitative surveys and 100 qualitative interviews with users in Brazil, Germany, Japan, Nigeria, and the United States to understand the relative importance of specified attributes (e.g., information source, private vs. shared device, suggested vs. requested information) in shaping people’s sense of how to balance information access against safety.

Outcome

We delivered risk rankings that placed each attribute on a spectrum of perceived risk, capturing the total picture of how users weigh access against safety. We also provided global research partner management and a consolidated report spanning the entire scope of the research.

Industry:

AI, Consumer technology

Method/Process:

Exploratory / Foundational, Global research, In-depth interview, Survey

Stimuli:

AI-enabled product