
Combining Empathy and AI

October 7, 2021
by Scott Reed
· 3 min read

Trust in AI must be earned. Ideally, business users or consumers who interact with a model and its output in a dashboard should not need to question its authenticity. Unfortunately, we aren’t there yet, because trust has several components, some of which we have yet to address. One of those components is empathy. Many individuals do not fully trust AI because of the lack of empathy instilled into models. Many subscribe to the notion that AI cannot possibly understand the nuance and subtlety required to make the best evaluations and decision recommendations. However, the conversation must start somewhere, and there is promise that empathy can coexist with AI systems.

Dimensions of Trust

As AI continues to evolve, there are more conversations around whether we can, or even should, imbue AI systems with what are considered uniquely human emotions, such as empathy. Pegasystems is a Massachusetts-based software company whose primary focus is Customer Relationship Management and Digital Process Automation, powered by advanced AI and robotic automation. To illustrate the need for “empathetic AI,” the company uses the example of a financial institution selling a high-interest loan to a low-income family. To an AI optimizing purely for revenue, the loan looks like a good business decision, but applying empathy reveals that it is not the most ethical option. By infusing “empathy” into an AI system, the “next best action” would recommend solutions that “mutually benefit customers and companies”.1 Finding the most ethical option is the key to understanding this concept of Empathetic AI. It is not about teaching machines to feel but rather using AI and rules about ethics and empathy to determine the next best action to take for all parties involved.

In a McKinsey & Company interview with marketing and technology author Minter Dial, he explains that when it comes to combining empathy and AI, “the challenges are very difficult. It’s inevitably a blend; it’s got to be machine plus human working together. It’s going to be a combination of the machine dealing with a lot of the repetitive stuff and the human being coming in to override or complement or make it more empathic when necessary, according to the rules they establish.”2

Affective Versus Cognitive Empathy

There are two types of empathy. Affective empathy is the ability to share the feelings that others may have. If a friend feels sad, you may feel sad. But the other type of empathy is very different and is the area where AI efforts should be focused. Cognitive empathy is the ability to recognize and understand another’s mental state. By understanding why someone feels the way they do, a decision can be made (by either a human or a machine) that exhibits this type of empathy. Machines can use their massive amounts of memory to inform those decisions. “The key then is the data sets that you’re providing: the learning data set to begin with and then what you try to create for the AI to execute afterwards in empathy.”3  

Empathetic AI Requires Empathetic Teams

AI will only be as empathetic as the teams who teach (or code) it. So to create empathetic AI, you need an ethical construct. But before even having an ethical construct, you need self-awareness about your own levels of empathy. This leads us to questions about how ethics are represented, and the diversity of the teams creating empathetic AI. In the end, according to Minter Dial, “If you want an empathic AI, you better start off with empathy within your organization.”4


1. “AI with heart: How Pegasystems is bringing empathy into AI,” by Ellen Daniel, Verdict, June 2019.

2. “Getting the feels: Should AI have empathy?” McKinsey & Company Podcast, July 2020.

3. “Getting the feels: Should AI have empathy?” McKinsey & Company Podcast, July 2020.

4. “Getting the feels: Should AI have empathy?” McKinsey & Company Podcast, July 2020.

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, his focus is to help enable customers on trust features and sensitive use cases, contribute to product enhancements in the platform, and provide thought leadership on AI Ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He has an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
