Zen and the Art of AI Impact Assessments

January 18, 2022
by Colin Priest · 4 min read

Have you ever walked into a room and forgotten what you were thinking or your purpose in going to that room? Don’t worry, this isn’t a sign of mental deficiency. It is a well-known side-effect of how human memory works. In psychology it is called the doorway effect.


There is a physical limit to the number of concurrent ideas we can hold in conscious thought. This limit on our working memory is low: only about four independent ideas at once. Scientists believe that because it is helpful for survival to prioritize attention to what is happening around us, our brains have evolved to free up cognitive resources as we change locations, resetting our memories and thoughts as we walk through a door.

Compounding the doorway effect is cue-dependent forgetting: the failure to recall information without memory cues. When we are pulled out of our usual work environment to participate in an AI project, there is a risk that strategic business goals and everyday business rules will be forgotten.

It’s safe to say that most data science failures are inadvertent rather than malicious. A deep-dive case study into several high-profile AI failures revealed a common narrative: people meant well, but they didn’t stop to think about what could go wrong. Their lack of AI governance inevitably led to embarrassing failures.

Zen and AI Governance

Have you ever driven a car and realized that you don’t remember how you got to your destination? Sometimes we undertake a task without consciously thinking about it. On the other hand, the practice of zen seeks to enhance conscious observation and deliberate decision-making. The dictionary definition of zen is a state of meditative calm in which one uses direct, intuitive insights as a way of thinking and acting.

Have you ever peered into the cockpit of an aircraft while boarding a flight and seen the pilots holding a checklist? The purpose of these checklists is to avoid complacency and to ensure conscious, deliberate attention to risk.

AI governance should be proportionate to the risk. AI impact assessments are not always necessary. But when there is material risk, it helps to list the risks and then consciously investigate each one.

The Need For Documentation

While AI projects may seem the domain of data scientists and IT professionals, research shows the vital role of business subject matter experts. In “Winning With AI” in the MIT Sloan Management Review, the authors report that AI projects led by IT specialists have half the success rate of projects led by business line specialists. This isn’t a criticism of IT specialists, but rather a reminder that AI projects require a broader frame of reference than conventional IT projects, and that they are more about business transformation than technology.

But when business subject matter experts join an AI project, the experience can be much like stepping through a doorway into a different room. All too often when they leave behind their normal business routine, their cubicle or office, and surround themselves with data specialists talking unintelligible jargon, business subject matter experts are tempted to forget about the business imperative and let it play second fiddle to the technology.

Documenting the business goals, the business processes, and the stakeholders reinstates the memory cues, reminding the entire project team that the AI project is business focused.

Describing an AI System

Start with the why. Before you get into the details of how it will operate, start by documenting the purpose of the AI system. Describe the business goal and the metric used to measure business success. Business goals can include increasing sales, reducing errors, making a process more efficient or fairer, or removing friction from the customer experience. If there is more than one business goal, explain the hierarchy of those goals. Since nothing is perfect, define acceptable tolerances in system accuracy and business value achieved. Explain why the chosen solution will use AI rather than the alternatives, and how the AI system will help to achieve the business goals (i.e., the specific benefits expected from using AI versus the alternatives).
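One way to keep these answers visible to the whole project team is to capture them in a structured record that lives alongside the project. Here is a minimal sketch in Python; the class name, field names, and example values are illustrative assumptions, not a DataRobot template or any industry standard.

```python
from dataclasses import dataclass

@dataclass
class PurposeStatement:
    """Illustrative 'why' section of an AI impact assessment; field names are assumptions."""
    business_goal: str          # the primary business goal
    secondary_goals: list[str]  # further goals, ordered by priority
    success_metric: str         # how business success will be measured
    acceptable_tolerance: str   # acceptable tolerance in accuracy and business value
    why_ai: str                 # why AI was chosen over the alternatives
    expected_benefit: str       # specific benefit expected from AI versus alternatives

# Hypothetical example values for a claims-triage project
example = PurposeStatement(
    business_goal="Reduce manual claim-handling errors",
    secondary_goals=["Shorten claim turnaround time"],
    success_metric="Error rate per 1,000 claims processed",
    acceptable_tolerance="No worse than the current manual error rate",
    why_ai="Rules-based triage cannot keep up with claim volume and variety",
    expected_benefit="Consistent triage decisions at far higher throughput",
)
print(example)
```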

Next, list the system constraints and the expected behaviors of the system. These will include regulatory requirements, business rules, common-sense heuristics, and ethical values. For example, regulatory rules could include not selling alcohol to minors, while a business rule or common-sense heuristic is that the price you charge for your products or services must not be negative. Relevant ethical values could include fairness or disclosure requirements.
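Writing constraints down as simple, testable rules makes them harder to forget once modeling starts. Below is a hedged sketch for a hypothetical retail pricing system; the function names, fields, and the age threshold are assumptions made for illustration.

```python
# Illustrative constraint checks for a hypothetical retail pricing system.
# The fields and thresholds are assumptions for the example; the legal
# purchase age for alcohol depends on the jurisdiction.

def violates_minor_sale(customer_age: int, product_category: str) -> bool:
    """Regulatory constraint: do not sell alcohol to minors."""
    return product_category == "alcohol" and customer_age < 18

def violates_negative_price(recommended_price: float) -> bool:
    """Business rule / common-sense heuristic: prices must never be negative."""
    return recommended_price < 0

# Example usage: a recommendation that breaks either rule should be blocked.
assert violates_minor_sale(customer_age=16, product_category="alcohol")
assert not violates_negative_price(recommended_price=9.99)
```

Encoding each constraint as its own small function makes the rules easy to test independently and easy to review with the business experts who own them.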

AI requires data for training and for operation. Describe the provenance of the data, who owns it, its quality, and its relevance. An AI system is always part of a business process. Describe the revised business process that uses the AI system, the workflow, how the system will be used, and by whom.
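Data provenance and the surrounding business process can be recorded in the same lightweight way. A minimal sketch, again with assumed keys and made-up example values:

```python
# Illustrative data and process sections of an AI impact assessment.
# The keys and values are assumptions, not a prescribed schema.
assessment_data_section = {
    "data_provenance": {
        "source": "Internal claims database, last five years",
        "owner": "Claims operations team",
        "quality": "A small share of records have missing loss dates",
        "relevance": "Covers the same claim types the system will triage",
    },
    "business_process": {
        "workflow": "Claims are scored at intake, then routed to a queue",
        "usage": "The score prioritizes, but does not replace, human review",
        "users": ["Claims handlers", "Team leaders reviewing escalations"],
    },
}

for section, fields in assessment_data_section.items():
    print(section, "->", list(fields))
```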

Finally, and most importantly, list the stakeholders in the AI system’s operation and describe how the AI system will benefit them; a sketch of a simple stakeholder register follows the list. The stakeholders will include, but are not limited to:

  • The organization deploying the AI system
  • Employees
  • Suppliers
  • Consumers
  • Other end-users
  • Society
  • The natural environment
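A simple stakeholder register, in the same spirit, might look like the sketch below; the field names and the single example entry are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    """Illustrative stakeholder register entry; field names are assumptions."""
    name: str                 # e.g. "Consumers", "Employees", "The natural environment"
    how_affected: str         # how the AI system's operation touches this stakeholder
    expected_benefit: str     # how the system is expected to benefit them
    what_could_go_wrong: str  # risk to investigate before deployment

stakeholder_register = [
    Stakeholder(
        name="Consumers",
        how_affected="Receive decisions or recommendations produced by the system",
        expected_benefit="Faster, more consistent service",
        what_could_go_wrong="Unfair or opaque decisions if risks go unexamined",
    ),
]
print(stakeholder_register[0])
```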

If you’re looking for inspiration for which attributes to document, here are two published assessment lists worth reading. Remember that every AI system is unique; use these lists for inspiration rather than as a compliance box-ticking exercise.

ECP Platform for the Information Society – Artificial Intelligence Impact Assessment

ALTAI – The Assessment List on Trustworthy Artificial Intelligence

Conclusion

Risk management of complex systems often requires more structure and conscious consideration than simpler processes. By describing the business goals, rules, business processes, and stakeholders, you not only establish a common understanding among all members of the project team, you also make it less likely that you will forget what’s most important and what can go wrong.

About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
