
Humans and AI: How Should You Talk About AI? Be Positive or Give Warnings?

August 5, 2021 · by Colin Priest · 4 min read

There’s a saying, “If you can’t say something nice, don’t say anything at all.” Is there too much hype about AI or too much doomsaying?


AI Hype

In 2019, Utah struck a deal with Banjo, a threat detection firm selling AI services to process live traffic feeds, dispatch logs, and other data. Banjo claimed to use software that automatically detected anomalies to help law enforcement solve crimes and respond faster. But things didn’t go as planned. After Banjo CEO Damien Patton was exposed as having been a member of the Ku Klux Klan and a participant in an anti-Semitic drive-by shooting, the state put the contract on hold and called in the state auditor to check the software for algorithmic bias and privacy risks. The auditor’s report contained both good news and bad news. The good news was that the software posed less risk to privacy than suspected. The bad news was that the risk was low only because Banjo didn’t actually use techniques “that meet the industry definition of artificial intelligence.”

AI Doomsaying

Elon Musk has repeatedly warned that when it comes to AI, we should all be scared because humanity’s very existence is at stake. In a July 2020 interview with the New York Times, Musk said that London research lab DeepMind is a “top concern” when it comes to artificial intelligence. DeepMind is best known for developing AI systems that can play complex games, such as chess and Go, better than any human. “Just the nature of the AI that they’re building is one that crushes all humans at all games,” Musk said. “I mean, it’s basically the plotline in War Games.” While Musk is often quoted for his AI doomsaying, his opinions about AI are not always negative. He was an original investor in DeepMind and OpenAI, and his Tesla and SpaceX products rely heavily on AI automation.

Selling AI

Are AI sales and marketing teams contributing to AI hype? Research shows that positive emotions are vital for sales effectiveness. Positive emotions can expand our behavioral repertoires and heighten intuition and creativity, and emotions can be contagious. Research also suggests that people need to exceed a tipping-point ratio of 2.9013 positive emotions to every negative one to achieve overall well-being. However, there is also an upper limit: once the ratio of positive to negative emotions exceeds 11 to 1, positive emotions begin doing more harm than good.

It is tempting to believe that sales and marketing professionals should focus solely on communicating the positives of their products. But in the study “When Blemishing Leads to Blossoming: The Positive Effect of Negative Information,” researchers uncovered a counterintuitive effect: under certain conditions, people are more favorably disposed to a product when a small dose of negative information is added to an otherwise positive description. In behavioral science this is known as the blemish frame, where a small negative provides a frame of comparison for the much stronger positives, strengthening the overall positive message.

AI and Uncertainty

People are unsure about AI because it’s new. Some react to the uncertainty with fear and suspicion. Research has shown that people respond more negatively to incorrect decisions made by an AI than to the same decisions made by a human. They set high, almost perfectionist expectations for AI.

Could the blemish-frame effect apply to talking about AI? A recently published study, “When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making,” addressed exactly this question.

The Harvard researchers set out to explore if and how conveying predictive uncertainty impacts decision-making. Their user study was based on predicting the monthly rental prices of apartments in Cambridge, Massachusetts. Study participants were provided with the floor area and number of bedrooms of each apartment and asked to estimate the rental price. After making their first estimate, all participants were shown rental price predictions from a machine learning model. Some participants were also shown the statistical distribution of potential rental prices alongside the point estimate. The researchers measured how participants’ estimates changed in response to seeing a prediction, with and without uncertainty estimates.
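To make the setup concrete, here is a minimal sketch (not the study’s actual code) of how a model can produce both a point prediction and a distribution of plausible rents for a single apartment. It uses scikit-learn’s quantile gradient boosting; the toy data, feature ranges, and quantile grid are all illustrative assumptions.

```python
# Minimal sketch: a point prediction plus a predictive distribution for
# an apartment's monthly rent, via quantile gradient boosting.
# The training data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy training data: floor area (sq ft) and bedrooms -> monthly rent ($).
X = np.column_stack([rng.uniform(400, 1600, 500), rng.integers(1, 4, 500)])
y = 1.5 * X[:, 0] + 400 * X[:, 1] + rng.normal(0, 300, 500)

point_model = GradientBoostingRegressor().fit(X, y)  # point estimate only

# One model per quantile approximates the predictive distribution.
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in quantiles
}

apartment = np.array([[900.0, 2.0]])  # 900 sq ft, 2 bedrooms
print(f"Point prediction: ${point_model.predict(apartment)[0]:,.0f}/mo")
for q, model in quantile_models.items():
    print(f"  {int(q * 100)}th percentile: ${model.predict(apartment)[0]:,.0f}/mo")
```

Showing a participant the spread between, say, the 10th and 90th percentiles, rather than the point estimate alone, is the kind of uncertainty display the study compared.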

The results showed that withholding predictive uncertainty produced user estimates farthest from the model prediction. The researchers concluded that the “results demonstrate that people are more likely to agree with a model prediction when they observe the corresponding uncertainty associated with the prediction” and that “uncertainty is an effective tool for persuading humans to agree with model predictions.” This finding held regardless of the characteristics of the uncertainty.

Humans and AI Best Practices

Don’t be tempted to hide the imperfect accuracy of AI systems from stakeholders. Too many data science failures have been caused by ignorance of the potential points of failure. The best way to build trust in your AI system is to communicate its imperfections. Educating stakeholders empowers them to make better business decisions, including improved AI governance. Knowledge of predictive uncertainty can inform AI humility rules that guard against AI system arrogance: the state in which an AI system doesn’t know what it doesn’t know and doesn’t recognize its own limits. A simple humility rule is sketched below.
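As a purely illustrative example (the threshold, names, and interface here are hypothetical assumptions, not any particular product’s API), a humility rule can compare a prediction’s uncertainty interval to its point estimate and escalate to a human when the interval is too wide:

```python
# Hypothetical humility rule: act on the model's prediction only when its
# 80% prediction interval is narrow relative to the point estimate;
# otherwise escalate the decision to a human reviewer.
from dataclasses import dataclass

@dataclass
class Prediction:
    point: float  # point estimate
    lower: float  # e.g., 10th percentile of the predictive distribution
    upper: float  # e.g., 90th percentile

def apply_humility_rule(pred: Prediction, max_relative_width: float = 0.3):
    """Return (value, action); value is None when the system defers."""
    relative_width = (pred.upper - pred.lower) / abs(pred.point)
    if relative_width > max_relative_width:
        return None, "uncertain: escalate to human review"
    return pred.point, "confident: act on model prediction"

# A tight interval passes; a wide one is routed to a human.
print(apply_humility_rule(Prediction(point=2400, lower=2250, upper=2600)))
# -> (2400, 'confident: act on model prediction')
print(apply_humility_rule(Prediction(point=2400, lower=1500, upper=3600)))
# -> (None, 'uncertain: escalate to human review')
```

The point of such a rule is not the specific threshold but that the system has an explicit, auditable path for saying “I don’t know.”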

About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
