
Humans and AI: What Should You Tell Consumers?

July 29, 2021
by Colin Priest

Have you ever asked a simple question, then unexpectedly received a complex answer? In my experience, this happened most often when dealing with lawyers and academics, but more recently I’ve noticed the same behavior coming from data scientists.

Sometimes I just want a quick and simple explanation!

What was your most hated subject at school? An AP-AOL news poll found that 37% of Americans named mathematics as the subject they hated most during their school days, easily making it the most hated school subject. People feel anxiety when confronted with mathematics: a search on Amazon returns dozens of self-help books dedicated to overcoming “math anxiety.”

Or is it more of a love-hate relationship? A survey of schoolchildren by toy manufacturer Bandai found that mathematics was chosen as both the most loved and hated subject by Japanese elementary and junior-high-school students. That result resonates with me: while mathematics was my favorite subject, I absolutely hated doing long division!

Even if mathematics was your strongest subject at school, it isn’t always a skill you practice in your daily life as an adult. The OECD report “Numeracy Practices and Numeracy Among Adults” found a “virtuous cycle” between adults’ ongoing use of numeracy and their measured numeracy performance, confirming the “use it or lose it” hypothesis. Unless an adult works in a technical profession requiring high levels of numeracy, their mathematics skills atrophy.

When providing explanations, it is better to be 100% useful than 100% correct. A pedantic mathematical answer is not what most people are looking for or need.

Why Tell Consumers Anything?

One of the fundamental principles of ethics is respect for autonomy: respecting the decisions other people make concerning their own lives. Applying this to AI ethics, we have a duty of disclosure to consumers so that they can make informed decisions.

Whenever an AI’s decision has a significant impact on people’s lives, it should be possible for them to demand a suitable explanation of the AI’s decision-making process, in human-friendly language and at a level tailored to the knowledge and expertise of the person. This enables consumers to make informed decisions about the consequences of their behavior and to identify and correct incorrect data on which a decision was based. In some regulatory domains this is a legal requirement, such as the EU’s General Data Protection Regulation (GDPR) “right to explanation” and the “adverse action” disclosure requirements in the Fair Credit Reporting Act (FCRA) in the U.S.

Your brand value is a function of consumer trust. Neuroscience tells us that when humans choose to trust each other, they do so based on perceived alignment of interests and on whether a person’s behavior is intuitive and predictable. The same lesson applies to human trust in AI systems. We can improve consumer trust by signaling an AI system’s purpose, goals, and behavior.

Your AI marketing is also more effective when it communicates reasons to sales prospects. By signaling to prospects that they are treated as individuals, and by harnessing nudge effects from behavioral economics, AI systems that explain their actions can improve sales.

Signaling Intent

Since trust is based upon aligned interests, it makes sense to tell consumers why you are using AI and how they benefit from it. For example, you might be using AI to ensure that they are treated as individuals, or to make your product or service more accessible or affordable.

The research paper “Cooperating with Machines” describes research into how algorithms can “cooperate with people and other algorithms at levels that rival human cooperation.” Prior behavioral science research has shown that humans rely on non-binding signals, connected with actual behavior, to establish cooperative relationships. The researchers wanted to test whether a similar strategy could work for AI systems seeking to cooperate with humans. The challenge is that the trained behaviors of AI systems are not easily understood by the humans interacting with them. They compared the performance of two AI systems, identical except that one included a simple explanation of what it intended to do. That is, they compared a secretive black-box AI to an AI that proactively and intuitively signaled its intentions.

The proactive AI system doubled the level of mutual cooperation it achieved versus the black-box algorithm, and it achieved cooperation as effectively as humans do. In fact, some human players were unable to distinguish the communicative AI system from humans!

Keep the Message Simple

The key to successful disclosure is to keep the details simple and intuitive, suited to the intended audience, and to minimize cognitive load.

In the 2016 paper “How Much Information? Effects of Transparency on Trust in an Algorithmic Interface,” researcher René Kizilcec reports the results of an experiment in which students were provided with one of three levels of transparency about how their final grade was calculated:

  • Low Transparency: Students received only the computed grade.
  • Medium Transparency: Students received the computed grade plus a paragraph explaining how the grade had been calculated, why adjustments had been made, and what type of algorithm was used.
  • High Transparency: Students received everything in the Medium Transparency condition, plus their raw peer-graded scores and a precise breakdown of how the algorithm adjusted each score to arrive at the final grade.

Students in each of the three groups were asked to rate their trust in the process. Using the data from the study, Kizilcec arrived at three key conclusions:

  • Individuals whose expectations were not met (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation.
  • However, providing too much information (the High Transparency option in the experiment) eroded this trust.
  • Attitudes of individuals whose expectations were met did not vary with transparency.

Research into the effectiveness of communicating science to non-expert audiences has similar conclusions. Communication of abstract facts has been shown to be significantly less effective than a contextual narrative in achieving understanding, recall, and acceptance.

Optimal trust was achieved via moderate transparency, keeping the message simple.

Answering the Right Questions

Effective disclosure has a narrative that addresses the needs of the audience, in this case consumers. Consumer questions about an AI can usually be reduced to one of three basic questions:

  • Is the decision or action correct?
  • Is the decision or action fair?
  • How can I influence the outcome?

Although it is not best practice to give consumers the full details of how an algorithm works, we can help answer these questions with simple and intuitive explanations. By showing the most important data values that led to a decision, you empower consumers to check the validity of the data you hold about them and to apply intuition to whether the outcome makes sense. There is the added benefit of showing them whether protected or sensitive attributes, or proxies for those attributes, have influenced the outcome. Finally, by showing which data values have the strongest influence on outcomes, you show consumers where they have autonomy: by changing the attributes and behaviors that are within their power, they can achieve better outcomes.
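
To make this concrete, the sketch below shows one way a “top reasons” explanation could be generated for a single decision. It is a minimal illustration in Python with scikit-learn: the feature names, the synthetic data, and the contribution method (coefficient times standardized value, which only makes sense for a linear model) are all assumptions made for this example, not DataRobot’s product behavior or any regulatory format.

    # A minimal sketch: surface the top plain-language reasons behind one
    # automated decision. All names and data here are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    feature_names = ["years at current address", "credit utilization", "missed payments"]

    # Synthetic stand-in for historical credit decisions (class 1 = high risk).
    X = rng.normal(size=(500, 3))
    y = (X[:, 1] + X[:, 2] - X[:, 0] + rng.normal(size=500) > 0).astype(int)

    scaler = StandardScaler().fit(X)
    model = LogisticRegression().fit(scaler.transform(X), y)

    def top_reasons(applicant, k=2):
        """Rank features by their contribution to this one decision."""
        z = scaler.transform(applicant.reshape(1, -1))[0]
        contributions = model.coef_[0] * z  # per-feature effect on the risk score
        order = np.argsort(-np.abs(contributions))[:k]
        return [(feature_names[i], contributions[i]) for i in order]

    applicant = np.array([1.0, 2.5, 1.8])  # one hypothetical applicant
    for name, effect in top_reasons(applicant):
        direction = "raised" if effect > 0 else "lowered"
        print(f"Your {name} {direction} your risk score.")

For non-linear models, a model-agnostic method such as SHAP plays the same role. The design point is that the consumer sees a handful of plain-language reasons, the data values that mattered most, rather than the model’s internals.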

Provide information that proactively answers the questions in the mind of the consumer.

Humans and AI Best Practices

As with any new technology, people are unsure of what to expect. But research shows the value of signaling to consumers: using clear and simple communication to let them know what to expect, the purpose and goals of the AI system, and why a decision was made. Don’t be secretive, but also don’t overwhelm consumers with details.

Black-box algorithms, algorithms that don’t communicate their intent or explain their decisions, are obsolete. Anticipate consumer questions by downloading your free copy of A Consumers’ Guide to How to Question an Algorithmic Decision.

About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
