
AI Impact Statements – Empathy, Imperfection, and Responsibility

April 11, 2022
by Colin Priest · 5 min read

If you follow the media stories about AI, you will see two schools of thought. One school is utopian, proclaiming the amazing power of AI, from predicting quantum electron paths to driving a race car like a champion. The other school is dystopian, scaring us with crisis-ridden stories that range from how AI could bring about the end of privacy to self-driving cars that almost immediately crash. One school of thought is outraged by imperfection, while the other lives in denial.

But neither extreme view accurately represents our imperfect world. As Stephen Hawking said, “One of the basic rules of the universe is that nothing is perfect. Perfection simply doesn’t exist. … Without imperfection, neither you nor I would exist.”

Just as people are imperfect, the AI systems we create are imperfect too. But that doesn’t mean we should live in denial or give up. There is a third option: we should accept the existence of imperfect AI systems but create a governance plan to actively manage their impact upon stakeholders. Three key dimensions of governance and AI impact are empathy, imperfection, and responsibility. 

Empathy

Empathy is the ability to understand and share the feelings of another. It is closely related to theory of mind, the capacity to understand other people by ascribing mental states to them. In the context of AI impact statements, empathy is important for developing an understanding of the different needs and expectations of each stakeholder and the potential harms that could be caused to them by an AI system.

It is an intrinsically human task to get into the mind of each stakeholder and feel empathy. Humans possess mirror neurons, a type of neuron that fires both when a person acts and when that person observes the same action performed by another. The neuron thus “mirrors” the behavior of the other, as though the observer were acting themselves. Such neurons have been observed directly in humans, other primates, and birds.

However, it is also intrinsically human to have cognitive biases that interfere with our ability to develop theory of mind, to assess risk, and to weigh the consequences of decisions. Cognitive biases that apply in the AI impact assessment process include attention bias, the availability heuristic, confirmation bias, the framing effect, hindsight bias, and algorithm aversion. For example, the availability heuristic may limit our ability to imagine the full range of potential harms from an AI credit assessment system. We can easily imagine the harm of having a loan application unfairly rejected, but what about the harm of being granted an unaffordable loan, or the harm of a system that is inaccessible to people without internet access, with language barriers, or with visual impairments?

Each stakeholder group is different, with different expectations and different harms. For this reason, it is best practice to consult with and involve the diverse range of stakeholders affected by the system. An AI impact assessment will carefully document the harms for each stakeholder.
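
To make that documentation concrete, the harm register can be kept as structured data that is reviewed and versioned alongside the system. The sketch below is one possible shape, not a prescribed format; the stakeholder group, harms, and ratings are hypothetical examples for a credit assessment system.

```python
from dataclasses import dataclass, field

@dataclass
class Harm:
    description: str      # what could go wrong for this stakeholder
    likelihood: str       # e.g. "low", "medium", "high"
    severity: str         # e.g. "minor", "moderate", "severe"
    mitigation: str = ""  # planned control, if any

@dataclass
class Stakeholder:
    group: str
    expectations: list[str] = field(default_factory=list)
    harms: list[Harm] = field(default_factory=list)

# Hypothetical entry for an AI credit assessment system
harm_register = [
    Stakeholder(
        group="Loan applicants",
        expectations=["a fair assessment", "an accessible application process"],
        harms=[
            Harm("Application unfairly rejected", "medium", "severe"),
            Harm("Granted a loan they cannot afford", "low", "severe"),
            Harm("System inaccessible without internet access", "medium", "moderate"),
        ],
    ),
]
```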

Imperfection

EY warns that your AI “can malfunction, be deliberately or accidentally corrupted and even adopt human biases. These failures have profound ramifications for security, decision-making and credibility, and may lead to costly litigation, reputational damage, customer revolt, reduced profitability and regulatory scrutiny.”

As Murphy’s law says, “Anything that can go wrong will go wrong.” Nothing is perfect. System failures will occur, and there will be consequential harm. At some point, an AI system will cause unintended harm, undeserved harm, or both.

Unintended harm is caused when an AI system behaves contrary to its specifications and inconsistently with its intended goal or purpose. Just as for any other software system, unintended harm may be due to software bugs, hardware or network failures, misspecification of the requirements, incorrect data, privacy breaches, or actions by malicious players. In addition to these standard software risks, an AI system may cause unintended harm when a machine learning algorithm learns wrong behaviors from its training data.

Undeserved harm occurs when an AI system makes a decision but the actual outcome is different from what the system predicted. As an old Danish proverb says, “Prediction is difficult, especially when dealing with the future.” Without perfect knowledge, it is impossible to make perfect decisions. Even the most advanced AI systems cannot perfectly predict the future. If they could, data scientists would be able to predict next week’s winning lottery numbers!

Another cause of undeserved harm is competing stakeholder needs. The basic economic problem is that human wants are unlimited, but the resources to satisfy them are finite. A design decision that maximizes value for one stakeholder may come at the expense of another. Similarly, a design decision that minimizes undeserved harm for one stakeholder may increase undeserved harm for another.

You cannot avoid imperfection, but you can minimize the likelihood and consequences of unintended harms, and you can ethically balance the competing interests of stakeholders.

Responsibility

Humans must take responsibility for the governance, behaviors, and harms of their AI systems.

An AI system is just a type of computer system, a tool to be used by humans. It is designed by humans, built by humans, and managed by humans, with the objective of serving human goals. At no point in this process does the AI system get to choose its own goals or make decisions without human governance.

When documenting the requirements of the system, describe the potential harms to stakeholders and document your justification of the priorities and trade-offs that must be made between different stakeholders’ interests and values. Explain why certain design decisions are reasonable or unreasonable, including fairness, harmful use of unjustified input data features, harmful use of protected features, loss of privacy, and other undeserved harms.
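
One lightweight way to capture such justifications is a design-decision record that names the affected stakeholders, the competing interests, and the reasoning behind the trade-off. This is a sketch under assumed field names, not a standard template; the example decision is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DesignDecision:
    decision: str                     # what was decided
    stakeholders_affected: list[str]  # who the trade-off touches
    competing_interests: str          # the tension being balanced
    justification: str                # why the chosen balance is reasonable
    residual_harms: str               # undeserved harms that remain

# Hypothetical example for a credit assessment system
record = DesignDecision(
    decision="Exclude postcode as an input feature",
    stakeholders_affected=["Loan applicants", "Lender"],
    competing_interests="Predictive accuracy vs. risk of proxy discrimination",
    justification=(
        "Postcode correlates with protected attributes, and the accuracy loss "
        "from removing it is small relative to the fairness risk."
    ),
    residual_harms="A slightly higher default rate may raise costs for all borrowers.",
)
```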

Your documentation should describe how the AI system contributes to human values and human rights. These values and rights will include honesty, equality, freedom, human dignity, freedom of speech, privacy, education, employment, equal opportunity, and safety.

Design and build for failure tolerance, with risk management controls that mitigate potential errors and failures in the design, build, and execution of the system. Assign ownership of each risk to an appropriate employee or team, with clearly defined processes for risk mitigation and response to potentially harmful events.
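
As an illustration of that ownership model, a risk register can record each risk, its owner, the mitigating control, and the response process in one place. The entries below are assumptions made up for illustration, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class RiskControl:
    risk: str              # failure mode being managed
    owner: str             # employee or team accountable for the risk
    mitigation: str        # control applied during design and build
    response_process: str  # what happens when the harmful event occurs

# Hypothetical assignments for an AI credit assessment system
risk_register = [
    RiskControl(
        risk="Data drift degrades scoring accuracy",
        owner="ML Operations team",
        mitigation="Monitor production feature distributions against training data",
        response_process="Alert on drift threshold breach; retrain and revalidate",
    ),
    RiskControl(
        risk="Disparate impact on a protected group",
        owner="Model risk officer",
        mitigation="Bias testing on holdout data before each release",
        response_process="Suspend automated decisions and escalate to review board",
    ),
]
```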

Conclusion

AI impact assessments are more than black and white compliance documents. They are a human-centric approach to the risk management of AI systems. Human empathy is essential for understanding the needs of and the harms to different stakeholders. And human judgment, values, and common sense are essential for balancing conflicting stakeholder requirements.

But software tools still have their place. Look for MLDev and MLOps tools with:

  • Guardrails that prevent and flag risky design decisions
  • Model transparency and explainability insights for validating system behaviors
  • Proactive alerts about ongoing system health
  • Humble AI for elegant handling of errors and high-risk scoring data (sketched below)
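
The “humble AI” idea can be sketched as a thin wrapper that defers to a safe fallback, such as human review, whenever a prediction looks unreliable. This is a minimal sketch, assuming a hypothetical score_fn that returns a positive-class probability and illustrative thresholds; it is not DataRobot’s implementation.

```python
def humble_predict(score_fn, features, uncertainty_band=(0.3, 0.7)):
    """Score one case, deferring to human review when the prediction looks unreliable.

    score_fn is a hypothetical callable returning the positive-class probability;
    the uncertainty band and fallback behaviour are illustrative choices only.
    """
    score = score_fn(features)
    low, high = uncertainty_band

    # Guard against invalid scores (e.g. upstream data or model errors)
    if score is None or not (0.0 <= score <= 1.0):
        return {"decision": "defer", "reason": "invalid score"}

    # Predictions inside the uncertainty band are routed to a human reviewer
    if low < score < high:
        return {"decision": "defer", "reason": "low confidence", "score": score}

    return {"decision": "approve" if score >= high else "decline", "score": score}
```

In practice the fallback could be a simpler rules-based decision rather than human review; the point is that the system declines to act automatically when its prediction is least trustworthy.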
About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
