The Opportunity and Risk of Generative AI Part II: How Responsible AI Assists Compliance

ABSTRACT: Responsible AI can help data leaders comply with the fast-evolving regulations that govern data and artificial intelligence.

Data leaders face a regulatory headache as they hatch plans to use generative AI for their business. Regulation of data expands every year, and different jurisdictions (continents, nations, and states) have very different laws. The United States and European Union have introduced drafts of AI-specific regulations that aim to control the explosion of AI, particularly generative AI. Companies need a comprehensive system to manage regulatory compliance while developing their generative AI capabilities. Responsible AI provides a framework and set of tools to do just that.


Responsible AI provides a framework and tools to help manage regulatory compliance while developing generative AI.


This blog is the second in a series on generative AI governance. The first blog in the series profiled the risks of generative AI and called upon data leaders to help shape its development. To recap, generative AI refers to any computer system that creates content (text, code, images, etc.) from a large set of training data. These systems have become incredibly powerful and easily accessible thanks to advancements in hardware and software as well as the huge amount of data available today. However, these systems also carry significant risks.

Generative AI amplifies familiar data risks related to security, privacy, quality, bias, and regulatory compliance. On top of the data risks come model risks, which include autonomous decision-making and responsibility, explainability, and malicious use such as AI-driven cyber attacks and misinformation campaigns. As the use of generative AI explodes, data leaders must understand and manage these risks so that the consequences of generative AI do not overwhelm the benefits.

This blog introduces an industry-based responsible AI framework and its broad goals. Then it explores the legal side of responsible AI, highlighting key methods and tools. Finally, it surveys the current regulatory landscape for data and artificial intelligence.

Responsible AI

Responsible AI commonly refers to the principles, frameworks, and tools that help companies buy, develop, and govern AI systems. It has two complementary branches: legal and ethical. Both branches help answer the central question of responsible AI: how does my business, and the industry at large, best use AI? Thankfully, the two work together to answer this big question, because “How do I not break the law?” often overlaps with “How do I do the right thing?” Ideally, laws and regulations arise from ethical principles, and following ethical principles keeps people on the right side of the law.


Responsible AI is legal AI and ethical AI.


Google, Microsoft, and most AI companies have made public commitments to the responsible use of AI. It’s encouraging that industry leaders have shown concern for how AI is used, but a new wave of generative AI adoption is now underway. The worry is that the sheer variety of AI systems and use cases will prove hard to regulate, increasing the risk of harm.

The essence of responsible AI lies in the ethical principles that help guide company goals, policies, and culture. However, this blog will begin outside the company, in the broader legal landscape of artificial intelligence, where universal principles and rules are agreed upon (or not, as you’ll see). Legal rules should benefit society as a whole and establish the guardrails that companies must follow as they compete to benefit themselves.

Many companies do not frame AI regulatory compliance and advising as a significant part of responsible AI; check, for example, Google’s responsible AI practice for any mention of regulation. This is a mistake. Regulatory compliance will consume a huge portion of the resources of AI projects, and legal considerations align with ethical considerations more often than not. Because the regulation of AI has just begun, experts and industry leaders can and should be involved in shaping the rules of AI. It is in the interest of every company, large and small, to balance innovation with the protection of society at large from super-powerful AI systems. That balance can only be achieved if the private sector actively participates in, encourages, and complies with AI regulation.

The Legal Branch of Responsible AI

Three legal features contribute to responsible AI: a compliance team, audits, and platform functionality.

  • Compliance team. Stakeholders from the legal, executive, IT, and development teams make up an AI compliance team. They work together to develop policies and workflows that ensure company practice follows regulations. The team also oversees legal AI education so that everyone involved in the development of AI understands how regulation shapes their work. The compliance team reviews adherence to policies and investigates serious cases of non-compliance. It serves as a proactive, forward-looking group that anticipates and quickly adjusts to the changing regulatory environment.

  • Audits. A legal audit is a powerful tool for evaluating regulatory compliance. Internal audits headed by the compliance team should follow a predefined schedule and a standardized process for evaluating the compliance system or parts of it. Most importantly, the compliance team should define appropriate actions in response to non-compliance. Audits serve as checkpoints for continuous improvement in compliance.
    An external audit carries even more weight than an internal audit. An outside party’s evaluation brings a level of objectivity and honesty that can be difficult to achieve internally. Willingness to undergo an external audit demonstrates a real commitment to compliance and builds trust in your responsible use of AI. As we’ll discuss more in the next blog, companies can also combine the legal audit process with an ethics audit.

  • Platform functionality. Many data and governance systems have integrated compliance mechanisms. For example, they might automate the masking of personally identifiable information (PII) related to customers or employees, as in the sketch below. Some software also helps manage compliance with multiple, sometimes competing, regulations, such as when GDPR’s “right to be forgotten” conflicts with tax record retention requirements.
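To make the platform functionality concrete, here is a minimal sketch of automated PII masking in Python. The regex patterns and the mask_pii helper are illustrative assumptions, not the API of any particular governance product; production platforms rely on far more sophisticated detection, such as named-entity recognition and data catalog metadata.

```python
import re

# Illustrative patterns for a few common PII types. These are simplified
# assumptions for demonstration; real platforms detect many more types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text is
    stored, logged, or passed to a generative AI model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    record = "Contact Jane Doe at jane.doe@example.com or 555-123-4567."
    print(mask_pii(record))
    # Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

A governance platform would typically apply this kind of masking automatically within the data pipeline, so that raw PII never reaches model training data or prompts.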

The Current Legal Landscape

Any leader looking to start a generative AI project should have a basic understanding of the current legal landscape. This understanding in no way replaces the expertise of a dedicated compliance team, but it does help with broader decision-making. Leaders may decide to continue prototyping generative AI solutions and wait to invest significant resources until the consequences of these new AI laws are understood.

To date, the laws that apply to generative AI have been data-focused. These laws, in particular the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), help protect individual privacy through data regulation. Because generative AI risks go beyond data risks, regulators have begun drafting AI-focused laws. The EU AI Act and the US Blueprint for an AI Bill of Rights are the most significant drafts of AI legislation on the horizon. (See Figure 1.)

Figure 1. Key Regulation Relevant to AI

Generally, the EU has created stronger laws than the US and given more authority and funding to regulatory bodies. International companies therefore spend more resources complying with EU regulations. The upside is that staying compliant with the most stringent regulation makes compliance with other regulations easier. While the US federal government has so far left data laws to the states, its new AI blueprint suggests federal legislation is on the way. The Chinese government, meanwhile, recently released the Interim Measures for the Management of Generative Artificial Intelligence Services.

Conclusion

Regulation of AI includes data regulation and AI model regulation in a landscape that varies by jurisdiction and changes frequently. Responsible AI, and its legal branch in particular, helps companies navigate these challenges, remain compliant, and develop and govern useful and profitable AI solutions. The next blog in this series will delve into the ethical principles of responsible AI and how they guide risk management, particularly for generative AI.

Dan O'Brien

Dan O’Brien is a research analyst intern at Eckerson Group. His interests are at the intersection of artificial intelligence, data governance, and technology ethics. He studied philosophy and mathematics at...
