How Do We Make Machine Learning More Aligned with Human Values?
Cutting-edge algorithms and new research will continue to drive the advancement of machine learning. However, there’s a more straightforward way of resolving many challenges in machine learning today, especially when it comes to ethics and better alignment with human values. And it is, perhaps not surprisingly, focused not on the technology but on the people using it.
For advanced AI and machine learning systems already in production, the focus shifts to delivering the system’s intended value, which is no longer a question of leveling up the technology or the mathematical techniques behind it. Value-oriented approaches can be supported holistically, both by the human expertise involved in creating AI and by the technology itself.
By design, machine learning does not rely on human intuition or on how humans approach a process. Machine learning algorithms instead pick up patterns in the data, often without many baked-in constraints or assumptions about the underlying relationships in a dataset. Pattern recognition is the greatest strength of these algorithms, but it is also a potential weakness: in service of maximizing accuracy on the training data, a model may blindly exploit corners of the data that are not helpful for the real-life application. This leads to overfitting.
For example, in an insurance model predicting risk, the model may incorrectly learn the relationship between the number of DUIs and risk because of data sparsity. Since few historical records contain more than three DUIs, the model may begin to predict that risk goes down for a fourth or fifth DUI. This discrepancy demonstrates that subject matter expertise can lead to a model with a truer understanding of the underlying process, even when the data alone offer insufficient or even contrary evidence.
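The DUI example can be sketched in code. The snippet below uses hypothetical, made-up records (not real insurance data) to show how raw per-count averages dip for rare high-DUI counts, and how encoding the domain knowledge "more DUIs never mean less risk" as a monotonic (isotonic) constraint corrects the dip:

```python
from collections import defaultdict

# Hypothetical historical records: (num_duis, observed_risk).
# Counts above 3 are sparse, so their averages are noisy and can
# dip below the true trend -- exactly the failure described above.
records = [
    (0, 0.05), (0, 0.06), (0, 0.04), (0, 0.05),
    (1, 0.10), (1, 0.12), (1, 0.11),
    (2, 0.20), (2, 0.22),
    (3, 0.30), (3, 0.28),
    (4, 0.15),  # single sparse record: misleadingly low
    (5, 0.10),  # single sparse record: misleadingly low
]

# Raw per-count averages: what an unconstrained model might learn.
buckets = defaultdict(list)
for duis, risk in records:
    buckets[duis].append(risk)
raw = {d: sum(v) / len(v) for d, v in buckets.items()}

def isotonic(values):
    """Pool Adjacent Violators: force a non-decreasing sequence,
    encoding the expert knowledge that risk never falls as DUIs rise."""
    blocks = [[v, 1] for v in values]  # each block: [sum, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            blocks[i][0] += blocks[i + 1][0]   # merge the violating pair
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            i = max(i - 1, 0)                  # recheck the previous block
        else:
            i += 1
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

counts = sorted(raw)
constrained = dict(zip(counts, isotonic([raw[d] for d in counts])))
```

Here `raw[4] < raw[3]`, the implausible dip, while `constrained` is non-decreasing in DUI count. Gradient boosting libraries offer the same idea as built-in monotonic constraints; this pure-Python version just makes the mechanism visible.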
In this real-world example, the underlying relationship between the two variables is known; yet the reason we turn to machine learning in the first place is to go beyond the limits of our existing knowledge of a process.
Machine learning, however, is entirely shaped by the data it is given. That is to say, human choices in data selection can also introduce limitations that casting a wider net would have avoided. Some of a model’s inherent fallibility is tied to which data fields were chosen and engineered, how many examples were provided, and how they were sampled. Or it could be related to integrity issues in how humans recorded, converted, aggregated, and engineered the data.
This is where the concept of humility in AI comes into play. It means having a systemic, qualified, and actionable understanding of these potential areas for weakness, even in a model that is built on the best available data with robust practices. As it stands today, many business processes already augment human decision-making with insights from AI systems.
For example, consider any case in which the AI is not automating a decision but is just one feed of information to a human decision-maker, such as a doctor determining a diagnosis. One avenue for technology that will support more humility in AI is to build tools that enable human intelligence to augment automated AI decision-making in real time, not just during model design and validation. At DataRobot, our recent Humble AI feature is doing just that.
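The idea of humility rules can be illustrated with a minimal sketch. This is a hypothetical pattern, not DataRobot’s actual Humble AI API: each rule pairs a trigger (a condition on a prediction) with a routing decision, so that uncertain or out-of-range predictions are diverted to a human reviewer instead of being acted on automatically:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HumilityRule:
    """A hypothetical humility rule: when the trigger fires on a
    prediction, route it to a human instead of automating the decision."""
    name: str
    trigger: Callable[[float], bool]

def route(prediction: float, rules: List[HumilityRule]) -> str:
    """Return 'automated' if no rule fires, otherwise flag the
    prediction for human review, tagged with the rule that fired."""
    for rule in rules:
        if rule.trigger(prediction):
            return f"human_review:{rule.name}"
    return "automated"

# Example rules for a binary classifier's predicted probability.
rules = [
    HumilityRule("low_confidence", lambda p: 0.4 < p < 0.6),   # near the decision boundary
    HumilityRule("out_of_range", lambda p: p < 0.0 or p > 1.0),  # invalid score
]
```

With these rules, a confident prediction such as `route(0.95, rules)` proceeds automatically, while an ambiguous one such as `route(0.5, rules)` is handed to a person, which is the real-time human-in-the-loop behavior the paragraph describes.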
Together, both AI-augmented human intelligence and human-augmented AI intelligence leverage the strengths of each other to create better systems that are more effective at delivering value.
Applied Data Scientist, DataRobot
Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.