Impact

Impact as a Dimension of Trusted AI

It’s vital to take the context of a use case into account. Find out how to assess impact across stakeholders and build AI that reflects your values.


Assessing the Impact of AI

The challenge with AI is to think ahead: systematically identify the desired behavior of the system that would reflect your values across all dimensions, then plan proactive steps to ensure it. For this analysis, an impact assessment is a powerful tool for your organization.


In What Ways Can an AI System Reflect My Values?

There is no universally agreed-upon abstract ethical standard detailed enough to guide your design and implementation of an AI system, nor is there likely ever to be one. Instead, although adherence to some major principles will be required by federal laws, industry regulations, or known social conventions, your team and your company will have to make other judgments as a reflection of the values you hold to be true.

From one perspective, AI systems are easier than people. When you hire for an open position at your company, you look for someone you think will be a strong fit for your company culture and will model the standards you expect of your coworkers and employees, yet it's not always easy to perceive these traits in someone you're just meeting. With an AI system, by contrast, you can specify the desired behaviors explicitly and test for them before the system is ever deployed.

What Does an Impact Assessment of a Model Require?

An impact assessment is a collaborative process that brings to the table representatives of all stakeholders in an AI system. A system's stakeholders likely include more than just your data science team and the end business users who consume the model. As mentioned earlier, for matters relating to compliance, security, and privacy, you might also want resources from legal and InfoSec involved in the conversation. Stakeholders further include the people impacted by a model, who might be employees, customers, or patients. An impact assessment should recognize, where appropriate, the diversity of that body of individuals and how the model might impact different communities in different ways.

An impact assessment should consider the end-to-end modeling process, from development to implementation and use of a system. In that process, particular concerns are likely to relate to accuracy, bias and fairness, privacy and security, and practices around the disclosure of the use of an AI system and the consumer's ability to question or inquire about a decision. A robust risk and mitigation framework, tailored to recognize the distinct needs and vulnerabilities of different stakeholders, is also pivotal in charting the path to productionalization for a model.
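As a loose illustration, the assessment dimensions named above can be tracked as a simple checklist of risks and mitigations. The sketch below is hypothetical: the class, field, and dimension-key names are invented for this example and do not come from any standard impact assessment schema.

```python
from dataclasses import dataclass, field

# The dimension names mirror the concerns listed in this section;
# the structure itself is illustrative, not a standard schema.
DIMENSIONS = [
    "accuracy",
    "bias and fairness",
    "privacy and security",
    "disclosure and recourse",
]

@dataclass
class ImpactAssessment:
    model_name: str
    stakeholders: list = field(default_factory=list)  # e.g., data science, legal, InfoSec
    findings: dict = field(default_factory=dict)      # dimension -> list of risk/mitigation notes

    def record(self, dimension: str, risk: str, mitigation: str) -> None:
        # Attach a risk and its planned mitigation to one assessment dimension.
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        self.findings.setdefault(dimension, []).append(
            {"risk": risk, "mitigation": mitigation}
        )

    def open_dimensions(self) -> list:
        # Simple completeness check: dimensions not yet reviewed.
        return [d for d in DIMENSIONS if d not in self.findings]

# Example usage with made-up content:
assessment = ImpactAssessment("churn_model", stakeholders=["data science", "legal"])
assessment.record(
    "bias and fairness",
    "features may act as proxies for protected attributes",
    "remove suspect features and re-audit outcomes by group",
)
```

Even a lightweight record like this makes it easy to see which dimensions still lack review before a model moves toward productionalization.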

When Should I Do an Impact Assessment?

The first time you conduct an impact assessment should be before modeling begins, as initial data sources are identified and evaluated. However, it is a valuable tool to revisit at later junctures: for example, during model evaluation, at productionalization, and on an ongoing basis as the deployed model is monitored and real decisions are made.


Start Delivering Trusted and Ethical AI Now