AI Governance: Safeguarding innovation in an evolving regulatory landscape

February 28, 2024

At CES 2024, one of the most enlightening panels, "Harnessing AI Innovation While Governing Risk," delved into the balance between AI innovation and the imperative to govern its risks. The panel featured experts who discussed the challenges of launching AI-enabled products, the pivotal role various company departments play in governing AI use, data management, and the need for companies to prepare for impending regulations.

AI innovation: The perils of rushing to market

A key takeaway from the panel was the emphasis on releasing products thoughtfully with respect to risk, cautioning against the rush to be first to market. While innovation should not be stifled, companies must ask whether an early release is reckless merely to gain a competitive edge. One panelist from Adobe shared that the company opted to train its AI-enabled product on a smaller, compliant dataset rather than a larger dataset scraped from the internet. This decision, though potentially affecting product quality, ensured compliance with copyright laws and aligned with Adobe's core principles. Companies are urged to weigh the risks to their brand before launching products without proper guardrails.

The role of AI ethics panels

Another crucial discussion point was the importance of considering the broader implications of AI-enabled products once they are released, particularly how they might be misused. The panel advised that companies include representatives from key departments, such as the Chief Legal Officer and Chief Trust Officer, in decision-making processes related to AI. Examples were shared of companies like NVIDIA and Adobe, which have established AI Ethics Committees to scrutinize the impact of their products on the market and implement safeguards against potential misuse. This approach helps ensure that powerful AI capabilities are deployed responsibly and align with industry regulations.

Managing data and reinforcement learning

The panel also touched on the significance of data management in mitigating the risks associated with AI. Companies are encouraged to maintain rigorous processes for cataloging data, keeping a history of how it has been handled, and tracing its origin. This includes categorizing data into three types: data grown internally, licensed data, and data scraped from the internet. The panel also highlighted reinforcement learning as a crucial step in filtering and cleaning training datasets, ensuring the quality of AI models.
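The cataloging practice described above can be sketched in code. This is a minimal illustration under stated assumptions, not any panelist's actual system: the names `DataOrigin`, `DatasetRecord`, and `requires_legal_review` are hypothetical, as is the rule that only internally grown data skips legal review.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataOrigin(Enum):
    """The three provenance categories mentioned by the panel."""
    INTERNAL = "grown internally"
    LICENSED = "licensed"
    SCRAPED = "scraped from the internet"


@dataclass
class DatasetRecord:
    """A catalog entry that traces where a dataset came from."""
    name: str
    origin: DataOrigin
    source: str  # e.g. a vendor name, an internal team, or a URL
    history: list = field(default_factory=list)  # append-only processing log

    def log(self, event: str) -> None:
        """Record a processing step so the dataset's handling stays auditable."""
        self.history.append(event)


def requires_legal_review(record: DatasetRecord) -> bool:
    """Flag datasets whose provenance may carry copyright or licensing risk."""
    return record.origin is not DataOrigin.INTERNAL


# Example: catalog a licensed dataset and trace how it was handled.
record = DatasetRecord("product-images-v1", DataOrigin.LICENSED, "StockVendorCo")
record.log("deduplicated")
record.log("PII scrubbed")
```

Even a catalog this simple makes the panel's point concrete: when provenance and history travel with the data, risk review becomes a lookup rather than an investigation.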

Transparency in AI was also emphasized, with two options proposed: using technology to detect fake content, or implementing content credentials, such as a nutrition label or metadata. Content credentials empower consumers to understand how content was created, identify potential limitations of the results, and make informed decisions.
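A "nutrition label" for generated content can be as simple as a structured metadata record embedded alongside the output. The sketch below is an assumption-laden illustration, not an implementation of any particular content-credential standard; the function name `build_content_credentials` and every field in the label are hypothetical.

```python
import json
from datetime import datetime, timezone


def build_content_credentials(model_name: str, prompt_used: bool,
                              training_data_origins: list,
                              known_limitations: list) -> str:
    """Assemble a 'nutrition label' for a piece of AI-generated content.

    Returns a JSON string that could be embedded as metadata next to
    the content itself, so consumers can see how it was created.
    """
    label = {
        "generator": model_name,
        "ai_generated": True,
        "prompt_used": prompt_used,
        "training_data_origins": training_data_origins,
        "known_limitations": known_limitations,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)


# Example: label an image produced by a (hypothetical) generative model.
credentials = build_content_credentials(
    model_name="example-image-model",
    prompt_used=True,
    training_data_origins=["licensed", "internal"],
    known_limitations=["may misrender text within images"],
)
```

In practice, such a label would be cryptographically signed and bound to the content so it cannot be stripped or altered without detection; the plain JSON here only shows what information the label carries.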

Navigating regulations

A significant discussion point was the impact of regulations on the adoption of emerging technologies like AI. While the EU AI Act is nearing reality, there are still issues to be addressed. Companies are advised to identify existing industry regulations and ensure compliance. They should also consider the gaps in regulation, such as the grey area of copyright law globally, and seek legal review to mitigate risks.

Ultimately, these experts highlighted the importance of balancing AI innovation with risk governance, emphasizing the need for thoughtful product releases, the establishment of AI ethics committees, robust data management practices, compliance with regulations, and transparency in AI processes. It's also worth noting that, as with any new technology, we don't know what we don't know. However, by taking these steps and adhering to these principles, companies can harness the power of AI innovation while mitigating potential risks and ensuring responsible deployment.

Measuring the tools for successful AI products

The success of an AI-enabled product in today's market depends on establishing trust with its users. We've conducted years of user experience (UX) research within the spaces of AI and human-machine interaction, including experience measuring the levers of trust and safety within emerging AI technologies. If you have research needs related to AI, reach out – we'd love to hear from you.