Mitigating AI’s Unintended Consequences

ABSTRACT: Learn how to optimize the value of AI by applying second-order thinking to address unintended consequences.

Unintended consequences are the unforeseen, and often negative, effects that result from a particular policy, decision, or action. While they arise because effects are difficult to predict in complex systems, they often stem from a lack of second-order thinking. The key question is whether we have fully and objectively evaluated the short-term and long-term impact across multiple dimensions: financial, operational, stakeholder, societal, environmental, and ethical.

“People who overweigh the first-order consequences of their decisions and ignore the effects of second- and subsequent-order consequences rarely reach their goals.”

Ray Dalio, Chief Investment Officer, Bridgewater Associates

As a case in point, research from Hugging Face and Carnegie Mellon University found about a 30x difference in energy consumption for basic sentiment analysis—in this case, to classify movie reviews—depending on the AI approach used. Neural network approaches such as Large Language Models (LLMs) are energy-intensive, whereas Naïve Bayes classifiers are computationally efficient and have a low memory footprint. Naïve Bayes is particularly suitable for text classification tasks and can be trained quickly with far fewer computational resources than neural networks.
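To make the comparison concrete, here is a minimal sketch of a multinomial Naïve Bayes sentiment classifier written from scratch in plain Python. The toy reviews and implementation details are purely illustrative (not the setup used in the Hugging Face/CMU study), but they show why the approach is so lightweight: it trains in a fraction of a second on a CPU with a tiny memory footprint, no GPU required.

```python
import math
from collections import Counter, defaultdict

def train(docs, labels):
    """Count word frequencies per class, plus class priors and vocabulary."""
    word_counts = defaultdict(Counter)   # class -> word -> count
    class_counts = Counter(labels)       # class -> number of documents
    vocab = set()
    for text, label in zip(docs, labels):
        words = text.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Return the class with the highest log-probability (Laplace smoothing)."""
    total_docs = sum(class_counts.values())
    best_class, best_score = None, -math.inf
    for cls, n_docs in class_counts.items():
        score = math.log(n_docs / total_docs)            # class prior
        total_words = sum(word_counts[cls].values())
        for word in text.lower().split():
            count = word_counts[cls][word] + 1           # Laplace smoothing
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

# Toy movie-review data, for illustration only.
reviews = [
    "a wonderful moving film",
    "great acting great script",
    "a joy to watch",
    "dull predictable and long",
    "boring mess terrible dialogue",
    "worst movie this year",
]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

model = train(reviews, labels)
print(predict("a great script", *model))         # -> pos
print(predict("boring terrible movie", *model))  # -> neg
```

The entire model is two word-count tables and a vocabulary; contrast that with the billions of parameters an LLM must load and run to answer the same question.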

Using Large Language Models instead of Naïve Bayes classifiers, however, leads to higher processing costs as well as greater energy consumption. Many organizations already mandate more robust financial management of cloud resource consumption because of insufficient visibility and unexpectedly high cloud bills. Employing second-order thinking can help manage costs more effectively and maximize the value derived from large language models.
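Second-order thinking about cost can begin with simple arithmetic. The sketch below compares the monthly cost of classifying reviews through a metered LLM API versus a small CPU instance running a Naïve Bayes classifier. Every figure here (token counts, per-token price, instance price, throughput) is an assumption for illustration only; substitute your own measured numbers.

```python
# Back-of-envelope cost comparison. ALL figures are ASSUMPTIONS for
# illustration; replace them with your own measured prices and rates.
reviews_per_month = 1_000_000

# Assumed LLM API usage: ~200 tokens per review round trip,
# at an assumed price of $0.50 per million tokens.
tokens_per_review = 200
price_per_million_tokens = 0.50
llm_cost = reviews_per_month * tokens_per_review / 1_000_000 * price_per_million_tokens

# Assumed small CPU instance: $0.05/hour, handling ~500 reviews/second
# with a Naive Bayes classifier.
seconds_needed = reviews_per_month / 500
nb_cost = seconds_needed / 3600 * 0.05

print(f"LLM API (assumed pricing):      ${llm_cost:,.2f}/month")
print(f"Naive Bayes on CPU (assumed):   ${nb_cost:,.2f}/month")
```

Even with generous assumptions in the LLM's favor, the gap is typically orders of magnitude; the exercise is less about the exact numbers than about forcing the second-order cost question before the architecture is chosen.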

The excitement around generative AI may lead organizations to use LLMs for simple classification tasks, creating unintended environmental and financial consequences. The fear of missing out (FOMO) on generative AI's potential can inhibit evaluation of impacts across multiple dimensions. This example demonstrates the importance of second-order thinking in the design and use of AI.


The fear of missing out (FOMO) on GenAI can inhibit second-order thinking


Mitigating AI’s unintended consequences requires intentional evaluation. Is generative AI the best tool for a business need when evaluated across multiple dimensions? Or are there other more effective and efficient options available to get the desired outcomes?

A Framework and Process To Mitigate AI’s Unintended Consequences 

One way to mitigate the impact of unintended consequences is to create a formal framework and process for evaluating second-order consequences and subsequent impacts. Organizations can then train people to use the framework and process to anticipate potential issues before designing and implementing AI. This doesn't have to be overly complicated; it can be a simple framework such as the following.

● Business Goal/Need: A description of the business goal or need where you plan to use AI. 

This is the desired first-order outcome. It could be increasing revenue through better cross-selling or reducing carbon footprint with more sustainable sourcing.

● Business Challenges: What are the business obstacles to fulfilling the goal or need?

For example, this might be an incomplete view of what products customers own, or difficulty identifying low carbon materials.

● Technical Challenges: What are the technical challenges that contribute to the business challenges?

They could be siloed customer and product data, or different formats and definitions of material data in systems. 

● General Technical Solution: What functionality is needed to solve the technical challenges?

It could be consolidating and connecting customer and product data, or standardizing material carbon footprint classification. 

● AI Capabilities: What AI capabilities can provide the technical solution? 

These might include supervised learning classifiers or unsupervised clustering techniques for master data entity matching; or generative adversarial networks (GANs) and variational autoencoders (VAEs) for master data semantic mapping. They also might include Naïve Bayes techniques for material classification, or GANs and VAEs for ensuring adherence to standardized definitions and classifications for materials.

● Second-Order Outcomes: What potential unintended consequences might arise from using different AI capabilities?

This requires evaluation across multiple dimensions, including financial, operational, stakeholder (customers, employees, partners), societal, environmental, and ethical. For example, GANs and VAEs will have a greater climate impact than unsupervised clustering and Naïve Bayes techniques. Organizations should define the technique or combination of techniques that will fulfill the first-order outcome while mitigating unwanted second-order outcomes.

● Variables: What variables might influence second-order outcomes?

These might include the number of data sources; the volume of customer, product, and material data in those sources; and the quality and consistency of the data across sources. These all influence how much processing power and energy is required, and in turn the second-order climate impact.

● KPIs: For each second-order outcome, what metrics can be used to monitor and manage impact?

These could be environmental metrics such as carbon footprint, operational metrics such as energy consumption, or stakeholder metrics such as customer trust. The key point is to define the specific metrics you'll use to identify and mitigate AI's unintended consequences.
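One lightweight way to operationalize this framework is to capture each assessment as a structured record that teams complete before building. The sketch below uses a Python dataclass with field names of my own invention (not a standard), populated with the cross-selling example from the framework above.

```python
# Illustrative only: field names are hypothetical, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class SecondOrderAssessment:
    business_goal: str
    business_challenges: list[str]
    technical_challenges: list[str]
    technical_solution: str
    ai_capabilities: list[str]
    second_order_outcomes: dict[str, str] = field(default_factory=dict)  # dimension -> impact
    variables: list[str] = field(default_factory=list)
    kpis: dict[str, str] = field(default_factory=dict)                   # outcome -> metric

assessment = SecondOrderAssessment(
    business_goal="Increase revenue through better cross-selling",
    business_challenges=["Incomplete view of what products customers own"],
    technical_challenges=["Siloed customer and product data"],
    technical_solution="Consolidate and connect customer and product data",
    ai_capabilities=["Supervised classifiers for master data entity matching"],
    second_order_outcomes={"environmental": "Energy use of model training and inference"},
    variables=["Number of data sources", "Data volume, quality, and consistency"],
    kpis={"environmental": "Energy consumed per training run and per inference"},
)
print(assessment.business_goal)
```

A record like this can become a required artifact in an AI project's design review, making the second-order evaluation explicit and auditable rather than ad hoc.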

Anticipating potential consequences across dimensions and timeframes early in the AI design and implementation process empowers you to innovate with confidence. It helps guide your model design and feature selection, and helps you establish guidelines for compliant and ethical use. 

Creating a framework/process to mitigate AI's unintended consequences is crucial for maximizing AI’s positive impact. It also demonstrates a commitment to responsible and trustworthy AI practices that can create competitive advantage by increasing employee adoption and enhancing customer trust.

The suggestions presented here are not all-encompassing; rather, they are intended to stimulate thinking on how to mitigate AI’s unintended consequences. I welcome your perspectives and insights on this subject.

Dan Everett

Dan Everett is the owner of Insightful Research, with more than 25 years of experience in data and analytics spanning strategy, governance, management, and culture. His passion is helping...
