AI Ethics: Building Trust by Following Ethical Practices

July 10, 2019
by Colin Priest
· 3 min read

As machine learning and artificial intelligence (AI) usher in the Fourth Industrial Revolution, it seems like everyone wants to get in on the action. And who can blame them? AI promises improved accuracy, speed, scalability, personalization, consistency, and clarity in every area of business. With all those benefits, why are some businesses hesitating to move forward? 

On the one hand, businesses know that they need to embrace AI innovation to remain competitive. On the other hand, they know that AI can be challenging. Almost everyone has heard news stories of high-profile companies making mistakes with AI, and many worry that the same could happen to them, damaging their reputation. In regulated industries, there's also the question of how to explain AI decisions to regulators and customers. Then there's the challenge of engaging staff so that they can embrace organizational change.

How do you manage AI to ensure that it follows your business rules and core values, while reaping the most benefits? It’s all about building trust in AI. 

Let’s take a look at the four main principles that govern ethics around AI and how these can help build trust.

  1. Ethical Purpose
  2. Fairness
  3. Disclosure
  4. Governance

 

Principle 1: Ethical Purpose

Just like humans, AIs are subject to perverse incentives, perhaps even more so. It stands to reason, then, that you need to carefully choose the tasks and objectives, as well as the historical data, that you assign to an AI.

When assigning a task to an AI, consider asking questions such as: Does the AI free up your staff to take on more fulfilling human tasks? Does your new AI task improve customer experience? Does it allow you to offer a better product or expand your organization’s capabilities? 

There is more to this than considering the impact on your organization's internal business goals. Consider negative externalities, the costs suffered by third parties as a result of the AI's actions. Pay particular attention to situations involving vulnerable groups, such as persons with disabilities, children, or minorities, and to situations with asymmetries of power or information.

 

Principle 2: Fairness 

Most countries around the world have laws protecting against some forms of discrimination, covering everything from race and ethnicity to gender, disability, age, and marital status. It goes without saying that companies need to obey the law with regard to protected attributes. But beyond that, it is also good business practice to safeguard other sensitive attributes, particularly where there is an asymmetry of power or information.

If the historical data contains examples of poor outcomes for disadvantaged groups, then an AI will learn to replicate decisions that lead to those poor outcomes. Data should reflect the diversity of the target population with which the AI will be interacting. Bias can also occur when a group is underrepresented in the historical data. If the AI isn’t given enough examples of each type of person, then it can’t be expected to learn what to do with each group.
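A simple way to catch underrepresentation before training is to audit group counts in the historical data. Below is a minimal sketch of such a check; the `representation_report` helper, the `min_share` threshold, and the toy data are all illustrative, not part of any particular platform's API.

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.05):
    """Flag groups that fall below a minimum share of the training data."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < min_share}
        for group, n in counts.items()
    }

# Toy historical data: group B makes up only 4% of records.
data = [{"group": "A"} for _ in range(96)] + [{"group": "B"} for _ in range(4)]
report = representation_report(data, "group")
```

A report like this won't tell you how to fix the imbalance, but it tells you which groups the AI has too few examples of to learn from reliably.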

The good news is that it is easier to detect and remove bias in AIs than in humans. Because an AI behaves the same way every time it sees the same data, you can run repeatable experiments and diagnostics to discover bias.
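One such diagnostic is to replay the model's decisions for each group and compare approval rates, a demographic-parity check. The sketch below is illustrative (the function names and the toy decision lists are invented for this example); in practice you would feed in your model's actual outputs.

```python
def positive_rate(decisions):
    """Fraction of decisions that were favorable (1 = approve)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Gap between the highest and lowest approval rates across groups."""
    rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Replay the same model's decisions for two groups of applicants.
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 0, 0],  # 2 of 8 approved
})
```

A large gap is a signal to investigate, not proof of unlawful discrimination on its own, but because the model is deterministic, the same experiment can be rerun after every mitigation step to verify the gap is shrinking.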

 

Learn More

For the full list of principles on how to implement ethical AI practices, download our white paper, AI Ethics. This paper also covers how to develop an AI Ethics Statement that will apply to all projects and how DataRobot’s automated machine learning platform can be a valuable tool to implement ethical AIs.

 


 

About the Author:

Colin Priest is the Sr. Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research. 

 
