Senior Product Manager, DataRobot
Natalie Bucklin is the Senior Product Manager of Trusted and Explainable AI. She is passionate about ensuring trust and transparency in AI systems. In addition to her role at DataRobot, Natalie serves on the Board of Directors for a local nonprofit in her home of Washington, DC. Prior to joining DataRobot, she was a manager in IBM’s Advanced Analytics practice. Natalie holds an MS from Carnegie Mellon University.
Posts by Natalie Bucklin
With Bias Mitigation, you can make your models behave more fairly towards a feature of your choosing, which we’ll review in this post.
DataRobot offers end-to-end explainability to make sure models are transparent at all stages of their lifecycle. In this post, we’ll walk you through DataRobot’s Explainable AI features in both our AutoML and MLOps products and use them to evaluate a model both pre- and post-deployment.
This is the final post in a three-part series describing what companies need in order to properly govern and ultimately trust their AI systems. This article discusses the technologies DataRobot uses to help ensure trust in the AI systems built on our platform.
Evaluating bias is an important part of developing a model. Deploying a model that’s biased can lead to unfair outcomes for individuals and repercussions for organizations. DataRobot offers robust tools to test if your models are behaving in a biased manner and diagnose the root cause of biased behavior. However, this is only part of the story. Just because your model was bias-free at the time of training doesn’t mean biased behavior won’t emerge over time.
As AI has become more ubiquitous, we’ve seen increasingly frequent examples of AI behaving badly. Recent high-profile cases include hiring models that are biased against women and facial recognition systems that fail to identify Black individuals. At DataRobot, we want to ensure that users have the right tools at their disposal to investigate bias. To meet that goal, we…
Every day, millions of people interact with AI systems, often without knowing it. Whether it’s used to recommend a product, evaluate a loan application, or filter spam from your inbox, AI is changing the world. At DataRobot, we believe in empowering users to easily create powerful AI tools with the potential to transform their businesses. In…