Don't Underestimate the Power of Stupid Artificial Intelligence Algorithms

I attended the AI World conference in Boston last month and was excited to see the large number and variety of ways that AI is being used today. Here are the four most intriguing things I heard at the conference.

Stupid AI

One of the most interesting talks addressed the emerging problem of keeping an AI from running amok and doing something unexpected and unwanted in unforeseen circumstances. For instance, suppose a self-driving car is trying to pass another car at high speed when a piece of sheet metal unexpectedly appears in its path. The AI might try to be really ‘smart’ and navigate around the obstacle, but that might not be the lowest-risk thing to do. The simplest (and best) thing might be to just slam on the brakes as hard as possible.

This example highlights the need for a ‘stupid AI’ to act as a co-pilot to the primary AI and step in to override it if something goes seriously wrong. This co-pilot would be ‘stupid’ in the sense that it would be vastly less complex than the primary AI; it might be just a set of simple rules. It would also be generally understandable by a human (whereas an AI utilizing a deep learning neural network would be complex, opaque, and mysterious). This co-pilot would also have the ability to use up to 80% of the capabilities of the vehicle (for example the braking or acceleration systems) while the AI would be required to stay within a more moderate range – say 20% of the limits. Maxing out the braking system won’t be comfortable for the driver or the car, but it could save a life.
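A minimal sketch of how such a rule-based co-pilot might wrap the smart AI's commands. The 20%/80% envelopes come from the talk; the stopping-distance rule, function names, and units are my own illustrative assumptions, not details from the presentation:

```python
# Illustrative sketch of a 'stupid AI' safety co-pilot: a couple of
# simple, human-auditable rules wrapped around the smart AI's output.

def copilot_override(ai_brake, obstacle_distance_m, speed_mps):
    """Clamp the smart AI's brake command (0.0-1.0) to its moderate
    envelope, but escalate to hard braking when a simple rule fires."""
    AI_BRAKE_LIMIT = 0.2       # smart AI confined to 20% of braking capacity
    COPILOT_BRAKE_LIMIT = 0.8  # co-pilot may use up to 80%

    # Normal case: pass through the AI's command, clamped to its envelope.
    command = min(ai_brake, AI_BRAKE_LIMIT)

    # Simple rule: if the stopping distance at the co-pilot's maximum
    # deceleration exceeds the gap to the obstacle, slam on the brakes.
    stopping_distance = speed_mps ** 2 / (2 * 9.8 * COPILOT_BRAKE_LIMIT)
    if obstacle_distance_m < stopping_distance:
        command = COPILOT_BRAKE_LIMIT
    return command
```

The point of the sketch is that the override logic fits in a few lines a human can audit, in contrast to the opaque network it supervises.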

Personally, I wonder if this will really work. There are such complex calculations being performed within the AI that just having another system take over may not be ideal. What if the AI was smart enough to realize that max deceleration on a crowded highway would result in a serious collision from behind? What if the smart AI knew the type of car being driven and had already calculated that a rear-end collision would be more dangerous and costly than a side-swipe collision from a sophisticated evasive maneuver? It will be interesting to see how this balance between AI and ‘stupid AI’ co-pilot eventually shakes out.

Facebook Cares about Human Intent

Ser-Nam Lim, a Research Scientist Manager at Facebook, talked about how Facebook is continuously looking to do a better job of blocking content that runs afoul of its user agreements. In particular, Facebook wants to block and flag provocative statements relating to physical threats (“I’m going to beat you…”) or bullying (“look at that pig…”). But it must also be careful not to block perfectly innocent language.

For example, one video game player might use the word ‘beat’ with no intent of physical violence. Another Facebook user might literally be talking about a pig with no intent of personal insult. The context of the language and the author’s intent can make all the difference in distinguishing language that should be flagged from language that should not.

Often the intent of the author can be determined to be malicious or not by taking into account other information surrounding the text – for instance images. Facebook currently uses AI to mine through billions of Instagram images that are posted each day in order to determine author intent.

Lim and his team use AI to predict author intent and interpret word meaning. This field of predicting intent via AI is sure to grow in the coming years, and Facebook is using the technology in multiple areas where the intent of the author is to mislead the reader:

Altered with intent to deceive – for example when the image of the president of Mexico was spliced into an online image of someone else’s driver’s license to give support to a fake story intended to affect the Mexican elections.

Image out of context – for example where the same image of an injured child is used in multiple news stories about completely different bombing incidents.

False claim – like a caption claiming that the Premier of India was the most corrupt in history, when the claim was provably untrue.

Adversarial noise – like when key pixels in an image are transformed so that they appear unaltered to the human eye but are incorrectly recognized by AI filters. For instance, pornographic images are often altered in slight but sophisticated ways so that they pass through adult content filters but are not noticeably changed when viewed by a human.
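The adversarial-noise idea can be shown with a toy example: against a simple linear classifier, a perturbation far too small for a human to notice flips the decision. This is a simplified stand-in for real image filters, and every name and number here is illustrative:

```python
# Toy illustration of adversarial noise: a tiny per-pixel nudge flips
# a linear 'content filter' while leaving the image visually unchanged.

def classify(weights, pixels):
    """Linear filter: positive score means the content is flagged."""
    return sum(w * p for w, p in zip(weights, pixels)) > 0

def adversarial_perturb(weights, pixels, epsilon=0.01):
    """FGSM-style step: move each pixel by at most epsilon in the
    direction that pushes the score across the decision boundary."""
    score = sum(w * p for w, p in zip(weights, pixels))
    sign = 1 if score > 0 else -1
    # Each pixel changes by exactly epsilon, opposite the score's sign.
    return [p - sign * epsilon * (1 if w > 0 else -1)
            for w, p in zip(weights, pixels)]
```

Because the perturbation aligns with the classifier's weight vector, a change of one hundredth per pixel is enough to reverse the score, while a human comparing the two images would see nothing.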

Multi-Armed Bandit Beats A/B Testing at Uber

A/B testing is an important and ubiquitous technique used throughout data-driven marketing these days. Though it sounds sophisticated, A/B testing is really nothing more than the good ole scientific method: make a guess at what might happen, try it out against an alternative, then rinse and repeat.

A/B testing has been a breakthrough for marketing ROI optimization, made possible by AI, big data, and the tight connection of the data to the business problem. But it is limited in scale because it requires a human to interpret the result of each A/B test (the p-value, which indicates how unlikely the observed difference would be if the change had no real effect). And it still requires a human to come up with the next experiment. In other words, A/B testing tells you whether “A” or “B” is better, but you still need to come up with the original “A” and “B” and decide what to try next.
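The p-value that the human has to interpret is typically computed with a standard two-proportion z-test. This is a generic sketch of that calculation, with illustrative names and no connection to any particular company's tooling:

```python
# Two-proportion z-test: the p-value a marketer reads after an A/B test.
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates,
    given conversions and sample sizes for variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability of the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))
```

For example, 100 vs. 150 conversions out of 1,000 impressions each yields a p-value well under 0.01, while identical conversion counts yield a p-value of 1.0 (no evidence of a difference).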

A/B testing also has limitations because it assumes that the best answer will be better for some long period of time. In today’s world that right answer may change or at least drift over time. There is a need for something better that requires less human oversight and can adapt to changing marketing conditions.

That next thing after A/B testing is something called the multi-armed bandit method (MAB). In this case many possible solutions or hypotheses are proposed (let’s call them A, B, C, D, … Z) and each of them may be better at certain times, but there is uncertainty involved. Picking the right solution at the right time becomes similar to the story of the superstitious gambler trying to pick the ‘hot’ slot machine (sometimes affectionately called a “one-armed bandit”) in a casino that will pay out the most winnings. This analogy depends on the fact that there is a difference in the payouts of the slot machines but their behavior is not consistent in the short term.

This more sophisticated view of the problem must now take into account that you never really know which slot machine has a higher payoff value (since it is probabilistic) but if one machine starts winning you’d like to start to put more of your quarters into that machine rather than one that is losing. But at the same time, you may want to keep trying an underperforming slot machine to see if it will get hot.
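This explore/exploit trade-off can be sketched with one common MAB policy, epsilon-greedy; the talk did not say which algorithm Uber Eats actually used, so the class below is a generic illustration:

```python
# Epsilon-greedy multi-armed bandit: mostly play the arm that looks
# best so far (exploit), occasionally try another arm (explore).
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}   # plays per arm
        self.values = {a: 0.0 for a in arms} # running average reward

    def choose(self):
        # With probability epsilon, keep feeding quarters to a random
        # machine in case it 'gets hot'; otherwise exploit the best.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incrementally update the arm's average observed payout.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In an email-subject-line setting, each arm would be a candidate subject line and the reward would be a click or an order; as one arm starts winning, the policy automatically routes more traffic to it without waiting for a human to read a p-value.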

Uber Eats used the MAB technique to determine the best subject line for an email campaign, and it produced a better lift in less time (a 5% improvement in 6 weeks rather than 8 weeks). The winning subject lines were those that asked questions or made the assumptive conclusion that the consumer already loved the product being marketed (Uber Eats). Surprisingly, the lower-performing messages were those that were polite and said “Thank You”. The hypothesized reason is that saying ‘thank you’ tends to end a conversation, while a question tends to initiate active thought.

Using AI in the Emergency Room at Mass General Hospital

Monica Wood is a clinical fellow in radiology at the Massachusetts General Hospital (MGH) where she is employing AI to help patients in a variety of situations. At the hospital, the imaging study volume has been growing linearly since 1988 and there is constant pressure to maximize throughput and volume. At the same time, healthcare in the U.S. has become more concerned about quality as it moves from volume-based care to value-based care. Because of these two effects, there is a growing need to improve the speed and accuracy of the imaging studies being performed at MGH and every other hospital around the country. MGH is experimenting with AI to help to overcome these challenges.

Here are several real-world examples where AI is being used at MGH:

Prioritizing imaging reviews. When a patient comes into the ER and has imaging performed, the imaging eventually makes its way to the radiologist for review. That radiologist often has a long queue of images to look at which includes some that could be quickly evaluated and some that require careful study. Unfortunately, a patient with a scan that could be quickly cleared still has to wait at the bottom of the queue for many hours. AI is being used to set the priority of images being viewed by the radiologist and to make automated decisions where possible.

Optimizing clinical staffing. There are many steps between when an ER doctor asks for a scan and when it actually gets done, with many different professionals involved (e.g. nurse signoff, transport, technologist review, radiologist evaluation, etc.). Staffing needs vary greatly at different times, and the hospital is often overstaffed or understaffed. AI is being used to analyze historical staffing data and to predict optimal staffing schedules.

Predicting image acquisition time. Typically, different types of scans are allocated different but fixed amounts of time for the imaging to be performed. For instance, a chest CT might be allotted 15 minutes and a brain CT 45 minutes. But many other factors affect how long the imaging actually takes (e.g. mobility, patient age, claustrophobia, existing IVs, language interpretation needs, etc.). These factors are not currently used to schedule image acquisition times, so actual scans often take much more or less time than anticipated. AI is being used to build more complex models that better predict how much time to schedule for each individual case, which reduces wait times for patients.

Classifying imaging needs. In an effort to reduce unnecessary imaging, hospitals currently try to be careful in determining whether a CT or an MRI is best, or whether imaging is required at all. Doctors take many criteria into consideration when making that recommendation, but patients often have other issues, such as allergies, claustrophobia, medical devices, or prior scans, that affect the decision yet are only discovered at the scanner, so the patient must be turned away at the last second. With the data available in today’s electronic medical record, there is much more information that could be used to predict the right imaging earlier in the process. AI is being used at MGH to build models that predict which patients can benefit from which types of scans and to calculate when the benefits of a scan outweigh the risks. The result is patients getting better scans and more efficient use of hospital staff time.
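The first of these examples, prioritizing the radiologist's queue, amounts to replacing first-in-first-out with a priority queue ordered by a model's predicted urgency. The scoring function and field names below are illustrative assumptions, not MGH's actual system:

```python
# Sketch of AI-driven triage: order the radiologist's worklist by a
# model-predicted urgency score instead of arrival time.
import heapq

def triage_queue(scans, urgency_score):
    """Return scans ordered by descending predicted urgency.
    The arrival index breaks ties, preserving first-come-first-served
    among equally urgent cases."""
    heap = [(-urgency_score(s), i, s) for i, s in enumerate(scans)]
    heapq.heapify(heap)
    ordered = []
    while heap:
        _, _, scan = heapq.heappop(heap)
        ordered.append(scan)
    return ordered
```

With a scheme like this, a scan that a model flags as a likely emergency jumps to the front, while a quickly clearable routine scan no longer waits for hours behind cases that require careful study.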

Stephen J. Smith

Stephen Smith is a well-respected expert in the fields of data science, predictive analytics and their application in the education, pharmaceutical, healthcare, telecom and finance...
