AI Ethics: Building Trust by Following Ethical Practices (Part 2)

July 25, 2019 · by Colin Priest

In our first blog post on AI Ethics, we covered the promise that artificial intelligence (AI) holds to improve the speed, accuracy, and operations of businesses across a range of industries. Given that potential, it may seem surprising that businesses hesitate to move forward with AI projects, but fear holds them back: fear of making mistakes that could damage the company's reputation, or of doing something illegal or unethical.

Many of these pitfalls can be avoided by following the four main principles that govern ethics around AI. In part one, we covered the first two principles. In this post, we'll look at principles three and four: Disclosure and Governance.

  • Principle 1: Ethical Purpose
  • Principle 2: Fairness
  • Principle 3: Disclosure
  • Principle 4: Governance

Principle 3: Disclosure

One of the four fundamental principles of ethics is respect for autonomy: respecting other people's autonomy and the decisions they make about their own lives. Applied to AI ethics, this means we have a duty to disclose to stakeholders when they are interacting with an AI so that they can make informed decisions.

In other words, AI systems should not represent themselves as humans to users. Where practical, give users the choice to opt out of interacting with an AI.
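To make this concrete, here is a minimal sketch of how a chat assistant might disclose its AI identity up front and honor an opt-out request. The messages and function names are illustrative assumptions, not a reference to any real product API.

```python
# Hypothetical sketch of an AI disclosure and opt-out flow for a chat assistant.
# The message text and function names are illustrative, not a real product API.

AI_DISCLOSURE = (
    "Hi! I'm a virtual assistant (an AI, not a human). "
    "Reply 'agent' at any time to speak with a person instead."
)

def handle_message(message: str, session: dict) -> str:
    """Respond to a user message, disclosing AI identity and honoring the opt-out."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return AI_DISCLOSURE

    if message.strip().lower() == "agent":
        session["opted_out"] = True
        return "No problem - connecting you with a human agent now."

    # Normal AI-driven response logic would go here.
    return "Thanks! Let me look into that for you."

# Example conversation
session = {}
print(handle_message("Hello", session))   # discloses the AI up front
print(handle_message("agent", session))   # honors the opt-out request
```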

Whenever an AI's decision has a significant impact on people's lives, it should be possible for them to request a suitable explanation of the AI's decision-making process, in human-friendly language and at a level tailored to their knowledge and expertise. In some domains this is a legal requirement, for example the "right to explanation" under the EU's General Data Protection Regulation (GDPR) and the "adverse action" disclosure requirements of the Fair Credit Reporting Act (FCRA) in the U.S.
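As a simple illustration of what a human-friendly explanation might look like, the sketch below turns a made-up scoring model's feature contributions into plain-language reasons for a single decision. The weights and values are invented for demonstration, and nothing here should be read as guidance on GDPR or FCRA compliance.

```python
# Illustrative sketch only: turn a simple scoring model's feature contributions
# into plain-language reasons for one decision. The model, weights, and feature
# values are made up for demonstration; this is not legal or compliance guidance.

weights = {"income": 0.4, "debt_ratio": -0.7, "years_at_job": 0.2}
applicant = {"income": 0.3, "debt_ratio": 0.9, "years_at_job": 0.1}

# Each feature's contribution to the score = model weight * feature value
contributions = {f: weights[f] * applicant[f] for f in weights}

# Rank the features that pushed the score down, strongest effect first
negative_factors = sorted(
    (item for item in contributions.items() if item[1] < 0),
    key=lambda kv: kv[1],
)

print("Your application was declined. The main factors were:")
for feature, impact in negative_factors:
    print(f" - Your {feature.replace('_', ' ')} lowered your score by {abs(impact):.2f} points")
```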

Principle 4: Governance

An organization's governance of AI refers to its duty to ensure that its AI systems are secure, reliable, and robust, and that appropriate processes are in place to ensure responsibility and accountability for those systems.

Like any other technology, AI can be used for ethical or unethical purposes, and AI can be secure or dangerous. With the possibility of negative outcomes from AI failures comes the obligation to manage AIs and to apply high standards of governance and risk management.

Humans must be responsible and accountable for the AIs they design and deploy. The comparative advantage of humans over computers in the areas of general knowledge, common sense, context, and ethical values means that the combination of humans plus AIs will deliver better results than AIs on their own.
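One common way to put this into practice is a human-in-the-loop gate, where low-confidence or high-impact AI decisions are routed to a person for final sign-off. The sketch below is a minimal illustration of that idea; the threshold and names are assumptions for demonstration only.

```python
# Illustrative sketch of a human-in-the-loop gate: low-confidence or high-impact
# AI decisions are routed to a human reviewer instead of being actioned automatically.
# The threshold and function names are assumptions for demonstration only.

CONFIDENCE_THRESHOLD = 0.90

def route_decision(prediction: str, confidence: float, high_impact: bool) -> str:
    """Return 'auto' to act on the AI's output, or 'human_review' to escalate."""
    if high_impact or confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # a person remains accountable for the final call
    return "auto"              # the AI acts, but the decision is still logged for audit

# A confident prediction on a high-impact decision still goes to a person
print(route_decision("deny_claim", confidence=0.95, high_impact=True))      # human_review
print(route_decision("approve_claim", confidence=0.97, high_impact=False))  # auto
```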

Learn More

For the full list of principles on how to implement ethical AI practices, download our white paper, AI Ethics. The paper also covers how to develop an AI Ethics Statement that applies to all projects and how DataRobot's automated machine learning platform can be a valuable tool for implementing ethical AI.


About the Author:

Colin Priest is the Sr. Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research. 
