
AI Ethics Consulting: Asking for More Than Advice

June 3, 2022

It feels like a lot of AI consulting these days is like the technology itself: more promise than payoff. In her book The Business of Consulting, Elaine Biech shares a joke about consulting’s reputation, in which a client asks a consultant for the time. The consultant asks for the client’s watch and says, “Before I give you my opinion, perhaps you could tell me what time you think it is.” Which would be funnier if the stakes of failure weren’t both financially risky and ethically dangerous.

DataRobot conducted a study on AI bias in the fall of 2021, and the results around third-party ethics consulting showed clearly that companies are relying increasingly on outside consultants to help them make ethical decisions about AI: 47% of companies use third-party AI bias experts or consultants to determine whether bias exists in their algorithms or datasets, 38% use third-party AI consultants to help prevent AI bias, and 80% use an external advisory firm to conduct modeling audits.

Why might a business want third-party help with AI ethics? The ethics consult has come to reflect both the increasing complexity of AI’s ethical dilemmas and our discomfort with the prospect of answering them alone. Ethics consulting developed in other industries, like medicine, in the 1980s and is now commonplace there. Yet even though medical ethics relies on standards and credentialing, how medical organizations evaluate its outcomes remains controversial.

Sharing the tough decisions

At its core, consulting satisfies a desire to share the responsibility for harms that result from unexamined technological development.

This is where third-party AI ethics has yet to hit its stride. No one has satisfactorily addressed the concerns of authority and responsibility. Creating ethics committees or internal ethics compliance programs as a proxy doesn’t really answer what an ethics committee would actually do on a day-to-day basis. Worse, it creates a divergence of governance, which can bottleneck decision-making at a time when the technology and its measures and metrics are evolving rapidly.

There are those who eschew shared responsibility for decision-making in favor of tools, platforms, or frameworks aiming to ensure explainability, fairness, and accountability in AI systems. Forrester calls this Responsible AI (RAI). In the fall of 2020 it published an overview of the ethical AI consulting landscape, “New Tech: Responsible AI Solutions,” which claimed the “Responsible AI Market is Showing Promise” while showing that third-party risk evaluation was still low across all functionality segments. Each vendor’s technical solution had varying capabilities, from “explainability engines” to platforms that purport to work on top of any existing model development stack.

Asking what we want from an ethicist

What are we really seeking when we engage an ethics expert? Can they be replaced by a technical solution to an ethics problem that perhaps should not be automated? Tools themselves cannot solve the problem of context. That is the perspective of an ethics consultancy like ORCAA, an algorithm auditing company started by Cathy O’Neil, an author and outspoken independent data science consultant since 2012. For her, the core questions of ethics consulting are straightforward: For whom does the system fail? Is your data even collected toward that end?

Ted Kwartler, Field CTO at DataRobot, sees a middle ground. “It’s a long-standing business question regarding building capabilities internally, or using (usually) expensive consultants to accomplish a task much faster. The truth of the matter is that ML/AI is becoming business critical, making speed to production more important as well.” It would be understandable to believe one straightforward solution is to hire consultants in order to realize ethical AI value faster with their existing frameworks and technology, and as a result avoid disruption by competitors.

Kwartler cautions this is actually short-sighted.

The true issue lies with selecting both the software and the people. Pure management consultants may not standardize on any particular AI technology, instead working in Julia, R, or Python in Jupyter notebooks and deploying models in any number of ways depending on the consultant who wrote the code. Technology variance can increase systemic risk. It’s the same reason airlines, like Southwest, often fly a fleet of aircraft from a single manufacturer, like Boeing: it makes maintenance and monitoring standardized and well understood by personnel.
Ted Kwartler

Field CTO, DataRobot

His advice? When choosing a consulting partner, make sure they know their technology.

If using consultants, why inherit their diffuse technology headaches? Large enterprises can’t support multiple tech stacks without inviting confusion over quality and audit responsibilities, which slows implementation. Likewise, if you’re confident about the tech, the question becomes: who actually becomes responsible for scoping, building, and deploying your models?

Ethics is not a generic catch-all

Selecting a technology partner means vetting their actual expertise. If you truly believe you can’t afford the uphill slog of developing the right resources in-house, aligned with specific mission directives and the organization’s ethical values, then ask a few hard-hitting questions:

  • How many models has your chosen vendor put into production?
  • What’s the simplest model they ever deployed? (since simpler is often better)
  • How did they recover from a project that had unexpected data problems?
  • Where is their specialty within the business landscape?

Probing questions that go beyond technical solutionism will distinguish the notebook-only prototyping data scientist from the people who can share the horror stories and use that experience to overcome the inevitable hiccups that derail production deployments. If organizations don’t own their AI process, which is foundational to a business operation, they don’t actually own the operation itself.

This points to another type of ethics partner: one that doesn’t want to consult endlessly, but instead wants to teach, and who is still around to provide decision-making tools and advice as a backstop when needed. This “ethics as a service” (EaaS) model provides access to a network of interdisciplinary ethics professionals who specialize in your specific situation, say in law and policy, or data use and privacy. Will Griffin, Chief Ethics Officer of Hypergiant, defines EaaS as an aid to the critical thinking and risk mitigation steps necessary to blend “…ethics from the hearts and minds of designers and developers into work flows, and ultimately into the AI products released into society.”

There are other flavors of academically oriented AI ethics consulting, delivered as opinion pieces or critical reviews of the scientific literature, that focus on a single ethical question or even a single tech-sector challenge, like bias in large language models. This kind of analysis lends itself best to change management initiatives or investment decisions that have an ethical impact, and it becomes critical when adopting a new function, service, or product shift.

Ultimately, organizations understand their own needs and budgets best, and should be congratulated for elevating ethics into the mix rather than scrambling for legal advice after a model has been released into the wild. Looking to technology partners that unify their technology needs while also providing practical expertise and a proven track record of deployment will move them toward a solution that works best for both the business and the communities they serve.

About the author
DataRobot

Value-Driven AI

DataRobot is the leader in Value-Driven AI – a unique and collaborative approach to AI that combines our open AI platform, deep AI expertise, and broad use-case implementation to improve how customers run, grow, and optimize their business. The DataRobot AI Platform is the only complete AI lifecycle platform that interoperates with your existing investments in data, applications, and business processes, and can be deployed on-prem or in any cloud environment. DataRobot and our partners have a decade of world-class AI expertise collaborating with AI teams (data science, business, and IT), removing common blockers and developing best practices to successfully navigate projects that result in faster time to value, increased revenue, and reduced costs. DataRobot customers include 40% of the Fortune 50, 8 of the top 10 US banks, 7 of the top 10 pharmaceutical companies, 7 of the top 10 telcos, and 5 of the top 10 global manufacturers.
