The Opportunity and Risk of Generative AI, Part III: Responsible AI Ethics

ABSTRACT: Responsible AI ethical principles provide a clear, unifying purpose for the technological, business, and social goals of AI initiatives.

Ethics are meant to be good, yet when you bring them up people react like you’re a villain. You can almost hear the echoes of groans from the last time ethics came up in a meeting. This is no surprise. Ethics involve ambiguous questions and morally loaded consequences, so people tend to shy away from them and focus on easier questions. “What is the expected return on this project?” and “How accurately will this model classify winners and losers?” are easier to answer than “How do we make this AI better at helping people without infringing on personal rights?”

Responsible AI seeks to simplify things. It offers ethical frameworks and tools that operationalize abstract ethical principles to help create great AI solutions. This blog is the third in the series “The Opportunity and Risk of Generative AI.” The first blog discusses the incredible potential of artificial intelligence, specifically the most recent developments in generative AI. The second blog delves into the legal considerations of AI and discusses how responsible AI can help manage regulatory compliance. I recommend you read both, especially the second, as it covers the legal aspects of responsible AI. This blog considers the related ethical aspects of artificial intelligence through a responsible AI lens.

To recap, we currently face an arms race similar to the development of nuclear weapons and energy in the 20th century. At remarkable speed, and with little regulation, companies are developing a new technology with enormous potential for both positive and negative impact around the world. Accepting that we are unlikely to stop this development, we must understand and manage the risks of AI using the best tools and systems. Responsible AI encompasses both the regulatory and ethical considerations of artificial intelligence and provides a framework to tackle the risks in each domain.

Let’s consider how responsible AI ethics bring big, abstract goals to earth and align them with technical and business goals.

Ethical Principles of Responsible AI

A common set of principles helps align business and engineering decisions. The ethical principles of responsible AI tie these two viewpoints together with social goals. From a business perspective, these principles demonstrate a commitment to customer interests and increase customer trust. From a product development perspective, ethical requirements should be considered an enhancement or value-add to an AI system. From a regulatory viewpoint, these principles align with and strengthen compliance programs. The benefits are substantial and real, going far beyond the marketing strategy of saying “we are a good business because we make ethical products.” Some may say this taints the intent of a noble endeavor, but that’s fine. More importantly, these ethical principles can appeal to all stakeholders, including those with ignoble motives.


Ethical principles can appeal to all stakeholders, including those with ignoble motives.


Figure 1. Common responsible AI ethical principles

Every company has a set of principles or values they stand by. Responsible AI principles help organizations create safe, beneficial, and robust AI systems. The most important principles, many upheld by prominent AI institutions, include the following.

  • Fairness. Fairness refers to equitable processes and outcomes, specifically avoiding bias in a model. An AI’s outputs and decisions should not discriminate against any protected class of people. Data engineers must examine the training data to ensure it does not encode a biased view of the world, and data scientists must build decision-making algorithms that do not make biased decisions even when given an accurate model of the world (a minimal bias check is sketched after this list).

  • Transparency. Transparency refers to openness about the use cases and intent behind an AI system’s development, so that a company cannot hide malicious uses of AI. The other critical aspect is explainability: how a specific AI system works. The goals and processes of an AI must be understood and clearly stated. This protects against black-box models, which make decisions without humans understanding their process or effects.

  • Accountability. Everyone involved in AI development and deployment shares responsibility for the results. Skin in the game incentivizes everyone to hold themselves and each other accountable.

  • Privacy. Respect for privacy goes beyond compliance with regulations such as GDPR. Designers should respect an individual’s right to privacy and avoid abusing any loopholes in regulation. This ensures positive use of data and trust with customers.

  • Security. AI models have access to training data and, in certain use cases, real-time access to sensitive, valuable data. Data access controls help keep data out of unauthorized users’ hands, while secure infrastructure keeps hackers out. Good security recognizes the value of data and demonstrates a commitment to protecting customers.

  • Human-centric. AI should be developed to benefit humans, which of course requires close human involvement. Human-in-the-loop systems mandate supervision and guidance as key steps to mitigate risks. This keeps systems honest and helps maintain explainability.

  • Positive impact. AI systems should be designed to create measurable, positive impact on all stakeholders, including humans and the environment. Before full deployment, AI outputs should be tested to confirm that they actually have a positive impact. This also requires avoiding harm: the positive impact should greatly outweigh any negative impact.
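To make the fairness principle concrete, here is a minimal sketch of one common check: comparing selection rates across groups and computing a disparate impact ratio. The column names, the toy data, and the 0.8 flag threshold are illustrative assumptions, not a standard prescribed by this blog; real fairness testing involves additional metrics and domain judgment.

```python
# A minimal sketch of a demographic-parity check on model outputs.
# The column names ("group", "approved") and the toy data are hypothetical;
# substitute the protected attribute and decision column from your own system.

import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Share of positive decisions within each group."""
    return df.groupby(group_col)[decision_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate.
    Values near 1.0 suggest parity; a common heuristic flag threshold is 0.8."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Toy scored decisions: 1 = positive outcome (e.g., loan approved)
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    rates = selection_rates(decisions, "group", "approved")
    print(rates)
    print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

A check like this belongs both in data exploration (does the training data skew toward one group?) and in pre-deployment testing (do the model’s decisions skew?), which is exactly where the fairness principle asks engineers to look.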

From Principles to Policies to Implementation

Principles are too abstract to be used as policy. Stating “privacy is important” accomplishes very little. Instead, executives and policymakers must develop a system of operating policies for data and AI that is motivated by the company’s stated ethical values. For example, consider a company that has adopted accountability as a guiding principle. One policy derived from accountability would be that data scientists are primarily responsible for the outputs of their models. Thus, they are responsible for extensively testing outputs to avoid bias and limit hallucinations, and for reporting these results to their manager before implementation. In this case, the principle of fairness (avoidance of bias) also motivates the policy. Ethical principles often align with one another, as well as with business values such as customer trust.

In some cases, principles can conflict with one another. These cases require more deliberation and precise policy to avoid contradiction. A fundamental issue with complex AI models, especially neural network-based models like LLMs, is a lack of transparency. The incredibly complex structure of neural networks, which might have more than a trillion parameters, allows them to accomplish amazing things and improve people’s lives. However, this complexity and the hands-off nature of training methods result in models whose structure we don’t immediately understand and may never fully understand. A lack of transparency and understanding of a model brings risks of hidden bias, lack of reproducibility, and security issues.

This problem, found in many AI models, involves a trade-off between performance and explainability. Compliance officers must decide how to maximize the positive good of a better model while maintaining acceptable levels of explainability. A responsible AI ethics team, similar to the responsible AI compliance team discussed in the previous blog, can deliberate to resolve more difficult issues. Team members would include representatives from management, legal, engineering, and governance teams. As in any judicial system, judgments can then serve as precedent for later challenges and be codified into policies.
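One widely used way to recover partial explainability from an opaque model is a global surrogate: train a small, interpretable model to mimic the black box’s predictions and measure how faithfully it does so. The sketch below is illustrative only, not a method prescribed here; the dataset, models, and parameters are synthetic assumptions for demonstration.

```python
# A minimal sketch of a "global surrogate" check for a black-box classifier.
# All data and model choices below are synthetic and for illustration only.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real training set
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The higher-performing but opaque model
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The surrogate is trained on the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("Black-box test accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
print("Surrogate fidelity     :", accuracy_score(black_box.predict(X_test),
                                                 surrogate.predict(X_test)))
print(export_text(surrogate))  # human-readable rules approximating the black box
```

If the surrogate’s fidelity is high, its rules give reviewers a rough, auditable account of the black box’s behavior; if fidelity is low, that itself tells the ethics team how much explainability is being traded away for performance.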


Compliance officers must decide how to maximize the positive good of a better model while maintaining acceptable levels of explainability.


As an AI program matures, new challenges will help shape improved policies and operations. Practitioners can also employ audits to assess their responsible AI ethics system. Internal audits can be conducted regularly to verify that the system upholds the company’s ethical AI principles. For a more objective and expert opinion, an external audit can be performed by third parties, including technology companies such as Credo AI and consulting firms such as KPMG. The next blog will consider one such case and the advantages it offers over an internal audit.

Developing a Culture of Responsible AI

Policies and procedures can only go so far. To create a culture of ethical AI, leaders must get buy-in from all employees. And buy-in can be difficult given the nature of ethics: doing the right thing often requires extra effort, and being judged unethical carries serious moral and emotional weight. To get buy-in, employees must understand the purpose of an ethical culture and see its benefit for themselves and their work. This relies on sufficient education and training for all employees. An ethical culture encourages open discussion of ethical concerns and empowers employees to raise them with leaders. Procedures, such as dedicated meeting time to review responsible AI ethics, can help maintain commitment, but these principles must be championed by leaders and actively encouraged in everyone, especially the engineers developing models.

Conclusion

A responsible AI framework has a foundation of ethical principles. From these principles come policies that set the structure and processes that directly shape the development, deployment, and governance of great AI systems. Without a culture to glue things together, the framework falls apart. Fundamentally, a healthy ethical AI culture needs employees to embody responsible AI principles, in their interactions and in their work.

Dan O'Brien

Dan O’Brien is a research analyst intern at Eckerson Group. His interests are at the intersection of artificial intelligence, data governance, and technology ethics. He studied philosophy and mathematics at...
