
How Do We Make Machine Learning More Aligned with Human Values?

September 3, 2020
by Sarah Khatry · 3 min read

Cutting-edge algorithms and new research will continue to drive the advancement of machine learning. However, there's a more straightforward way of resolving many challenges in machine learning today, especially when it comes to ethics and better alignment with human values. And, not surprisingly, it is focused not on the technology, but rather on the people using it.

For advanced AI and machine learning systems already in production, the focus is on delivering the system's intended value, which is no longer a question of leveling up the technology or the mathematical techniques behind it. Value-oriented approaches can be supported holistically, both by the human expertise involved in creating AI and by the technology itself.

Traditionally, machine learning does not rely on human intuition or on how humans approach a process. That is by design. Machine learning algorithms are instead designed to pick up patterns in the data, often without many baked-in constraints or assumptions about underlying relationships within a dataset. Pattern recognition is the greatest strength of these algorithms, but it is a potential weakness too. In service of maximizing accuracy on the training data, a machine learning algorithm may blindly exploit corners of the data that are not helpful for the real-life application. This leads to overfitting.

For example, an insurance model predicting risk may incorrectly learn the relationship between the number of DUIs and risk because of data sparsity. Because few historical records contain more than three DUIs, the model may begin to predict that risk goes down for a fourth or fifth DUI. This discrepancy demonstrates that subject matter expertise can lead to a model with a truer understanding of the underlying process, even when the training data offers insufficient or even contradictory evidence.
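As a minimal sketch of how that subject matter knowledge can be encoded directly into a model, the example below uses scikit-learn's HistGradientBoostingRegressor with a monotonic constraint (monotonic_cst=[1]) to force predicted risk to never decrease as the DUI count grows. The data is synthetic and purely illustrative, and this is one general technique, not DataRobot's implementation.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic, illustrative data: true risk rises with DUI count, but
# records with 4+ DUIs are rare, so an unconstrained model can fit
# noise in that sparse region and predict that risk falls again.
duis = rng.choice(6, size=5000, p=[0.70, 0.15, 0.08, 0.05, 0.015, 0.005])
risk = 0.1 + 0.15 * duis + rng.normal(0, 0.2, size=duis.shape)
X = duis.reshape(-1, 1)

# Unconstrained model: free to learn any shape, including a dip at 4-5 DUIs.
free = HistGradientBoostingRegressor(random_state=0).fit(X, risk)

# Domain knowledge encoded as a constraint: risk is monotonically
# non-decreasing in the number of DUIs.
constrained = HistGradientBoostingRegressor(
    monotonic_cst=[1], random_state=0
).fit(X, risk)

grid = np.arange(6).reshape(-1, 1)
print("unconstrained:", free.predict(grid).round(2))
print("constrained:  ", constrained.predict(grid).round(2))
```

With the constraint in place, the model cannot predict lower risk for a fifth DUI than for a third, no matter how sparse or noisy the data is in that region.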

DataRobot MLOps: Humility rules prevent faulty predictions from being deployed into production

In this real-world example, the underlying relationship between these two variables is known. Yet the reason we turn to machine learning in the first place is to capture relationships more complex than our existing knowledge of a process.

Machine learning, however, is entirely shaped by what is present in the data. That is to say, human input into data selection can introduce limitations that casting a wider net would have avoided. Some of a model's inherent fallibility is tied to which data fields are chosen and engineered, how many examples are given, and how they were sampled. Or it can stem from integrity issues in how humans have recorded, converted, aggregated, and engineered the data.

This is where the concept of humility in AI comes into play. It means having a systematic, qualified, and actionable understanding of these potential areas of weakness, even in a model built on the best available data with robust practices. As it stands today, many business processes already augment human decision-making with insights from AI systems.

For example, consider any case in which the AI is not automating a decision but is just one feed of information to a human decision-maker, such as a doctor determining a diagnosis. One avenue for technology that will support more humility in AI is to build tools that enable human intelligence to augment automated AI decision-making in real time, not just in model design and validation. At DataRobot, our recent Humble AI feature does just that.
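To illustrate the underlying idea only (this is not the Humble AI API; every name here, including humble_predict, HumbleResult, TRAINING_RANGES, and the thresholds, is hypothetical), a humility rule can be thought of as a trigger paired with an action: when a prediction looks untrustworthy, route it to a human instead of acting on it automatically.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HumbleResult:
    prediction: float
    flagged: bool = False
    reason: Optional[str] = None

# Illustrative feature ranges observed in training; in a real system
# these would be derived from the training data itself.
TRAINING_RANGES = {"duis": (0, 3)}

def humble_predict(predict: Callable[[dict], float], record: dict,
                   uncertain_band=(0.4, 0.6)) -> HumbleResult:
    """Wrap a model's score with two simple humility rules:
    outlying-input detection and uncertain-prediction detection.
    Flagged results are routed to a human decision-maker."""
    # Trigger 1: the input lies outside the region the model was trained on.
    for feature, (lo, hi) in TRAINING_RANGES.items():
        value = record[feature]
        if not lo <= value <= hi:
            return HumbleResult(predict(record), flagged=True,
                                reason=f"{feature}={value} outside training range [{lo}, {hi}]")

    # Trigger 2: the score is too close to the decision boundary to be
    # trusted without review.
    score = predict(record)
    if uncertain_band[0] <= score <= uncertain_band[1]:
        return HumbleResult(score, flagged=True, reason="uncertain prediction")

    return HumbleResult(score)

# Hypothetical usage: toy_model stands in for any scoring function.
toy_model = lambda r: min(0.1 + 0.15 * r["duis"], 1.0)
print(humble_predict(toy_model, {"duis": 5}))  # flagged: outlying input
print(humble_predict(toy_model, {"duis": 2}))  # flagged: uncertain score
print(humble_predict(toy_model, {"duis": 0}))  # not flagged
```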

Together, AI-augmented human intelligence and human-augmented AI intelligence leverage each other's strengths to create better systems that are more effective at delivering value.

Ebook: Humility in AI: Building Trustworthy and Ethical AI Systems

About the author
Sarah Khatry

Applied Data Scientist, DataRobot

Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.
