How to Achieve Consistent Quality in AI

March 29, 2019
by Colin Priest · 5 min read

AI has tremendous potential to benefit humanity in every area of how we live and work. While most people recognize this, their hopes for AI come with a note of caution. A recent survey reported that 77% of Americans expect AI to have a “very positive” or “mostly positive” impact on how people work and live over the next 10 years. Yet another public opinion poll found that an overwhelming majority of Americans (82%) believe AI should be carefully managed. With optimism and caution running side by side, organizations will need to manage the quality of the AIs they build. And while AI technology may be new, history shows that automation and standardization are the most reliable path to consistent quality.

The 15th Century

Before the invention of the printing press, books were handwritten. Monasteries had rooms called scriptoria where monks painstakingly drew, wrote, and copied the pages of existing books by hand. Later, as the first universities emerged, a new class of scribes carried out the same process in scriptoria located within them. Because bookmaking was an artisanal process, quality varied from beautiful works of art to pages riddled with errors. Then the printing press revolutionized the availability of reliable information: by 1500, printing presses operating throughout Western Europe had already produced more than twenty million volumes.

“A printed book, unlike a handwritten manuscript, was a standardized product, the same in its thousands of copies. It was possible for publishers to solicit corrections and contributions from readers who, from their own experience, would send back a report—and this was common practice.” — Eric J. Leed

The 20th Century

In the early 20th Century, every motor vehicle was hand-made by artisans. In 1910, there were an estimated 130,000 automobiles in the USA. At that time, Henry Ford was still building his cars the way every other automaker did: one at a time, each by hand. But after he introduced the moving assembly line in 1913, the company’s productivity soared. Ford revolutionized automobile production, building its millionth car on 10 December 1915.

In the second half of the 20th Century, the productivity gains from standardization and automation were extended from manufacturing to other industries, including administrative and service tasks. In the 1970s and 1980s, companies improved their processes with total quality management. In the 1990s, they attempted to radically advance them through business process reengineering.

“…to improve the quality and efficiency of service, companies must apply the kind of technocratic thinking which in other fields has replaced the high-cost and erratic elegance of the artisan with the low-cost, predictable munificence of the manufacturer.” — Theodore Levitt, professor emeritus at Harvard Business School, Production-Line Approach to Service, Harvard Business Review

Today

Today, just about every organization wants to adopt artificial intelligence (AI) and machine learning to deliver insights and predictions from the massive amounts of business and operations data they have collected. Creating the machine learning models that power most modern AIs involves various activities such as feature (variable) selection and engineering, data preparation, selection of algorithms, and evaluation and comparison of results. Until recently, the construction of machine learning models was an arcane and artisanal task, carried out by a small pool of specialist data scientists.
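To picture how hands-on that artisanal workflow is, here is a minimal sketch of a typical manual pipeline built with the open-source scikit-learn library. The dataset, column names, and choice of algorithm are illustrative assumptions, not anything from DataRobot or the original post.

```python
# A minimal, illustrative sketch of the manual (artisanal) workflow described above.
# The CSV file, column names, and algorithm choice are assumptions for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Data preparation: load the data and pick the target by hand.
df = pd.read_csv("customers.csv")                    # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]   # hypothetical target field

# Feature engineering: the analyst decides how each column is treated.
numeric = ["age", "tenure_months", "monthly_spend"]
categorical = ["plan", "region"]
prep = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

# Algorithm selection and evaluation: chosen, tuned, and validated manually.
model = Pipeline([("prep", prep), ("clf", LogisticRegression(max_iter=1000))])
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.3f}")
```

Every one of these steps is a decision point where an inexperienced practitioner can go wrong, which is exactly the consistency problem the rest of this post is about.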

According to a KDnuggets survey, 60% of data scientists have less than two years of experience, and fewer than 10% have several years of experience or more. With such varying levels of experience come inconsistent practices and uneven quality. It’s no wonder that Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.

As enterprises move to democratize data science, the risk of human error increases further. Modern software tools have made it possible for citizen data scientists to start building predictive models. But despite the unprecedented ease and speed of getting started with machine learning, there are still many best practices that users must apply to get reliable results. Most machine learning solutions remain artisanal, relying on the user’s knowledge and experience to apply those best practices manually; they do not build in safeguards to protect novice talent from themselves.

In the past 12 months, AI has become big news, and not always for the right reasons. We’ve seen stories of unfair bias, the first self-driving car fatality, tech companies providing investor warnings about AI reputational risk, and new regulations introduced to protect consumers and give them the right to an explanation for an algorithmic decision.

EY warns that your AI “can malfunction, be deliberately corrupted, and acquire (and codify) human biases in ways that may or may not be immediately obvious. These failures have profound ramifications for security, decision-making and credibility, and may lead to costly litigation, reputational damage, customer revolt, reduced profitability and regulatory scrutiny.” 

With unprecedented business opportunities available, how do you manage the reputational and financial risk arising from machine learning and AI? The solution is to adopt the same strategy that revolutionized manufacturing: replace the artisanal construction of machine learning models with a model factory, a production line that builds machine learning models using best practices and guardrails to achieve consistent quality and reliably produce AI you can trust.

Model factories are built upon automated machine learning, a technology invented by DataRobot. Automated machine learning automates many of the tasks needed to develop AI and machine learning applications. Just as modern manufacturers run automated factories that build millions of items with consistent quality, model factories manufacture new machine learning models at high volume and with consistent quality.

Incorporating the knowledge and expertise of some of the world’s top data scientists, DataRobot enables more users across an enterprise to succeed with machine learning: they apply their understanding of their data and business, and let DataRobot do the rest. Customers in industries such as healthcare, banking, manufacturing, retail, and information technology confidently build machine learning projects that take advantage of DataRobot’s accuracy. Jason Mintz, VP of Product at DemystData, summed up his experience with DataRobot simply:

“Life before DataRobot was long, slow, and painful.”

Finding the right machine learning model for your data challenge is easy with DataRobot. Simply upload a dataset, pick the data field you are trying to predict, and hit the Start button. DataRobot chooses dozens of appropriate machine learning models and quickly runs a competition, displaying the top models on a leaderboard. DataRobot also provides guardrails to ensure proper data science procedures are followed.
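DataRobot itself is operated through the point-and-click workflow described above, but the underlying idea of a model competition with a leaderboard can be sketched in a few lines of open-source Python. The snippet below is a generic illustration of that idea, not DataRobot’s API; the dataset, target field, and candidate models are assumptions.

```python
# Illustrative sketch (not DataRobot's API): run a small "competition" of candidate
# models with cross-validation and print a leaderboard sorted by score.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

df = pd.read_csv("customers.csv")                     # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]    # "churned" is the field to predict
# Assumes all-numeric features; a real project needs the preprocessing shown earlier.

candidates = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate the same way, then rank them: a minimal leaderboard.
leaderboard = sorted(
    ((name, cross_val_score(est, X, y, cv=5, scoring="roc_auc").mean())
     for name, est in candidates.items()),
    key=lambda item: item[1], reverse=True,
)
for rank, (name, auc) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: AUC = {auc:.3f}")
```

The value of a model factory is that this competition, along with the surrounding guardrails, happens automatically and identically every time, rather than depending on each analyst remembering to do it.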

But a DataRobot model is not a black box: the latest generation of AI provides human-friendly explanations and visualizations of how a model works and why it makes the decisions it does. By democratizing data science and making AI decisions accessible and explainable to business managers, DataRobot helps organizations further reduce risk by ensuring that their AIs are consistent with business rules, customer needs, and regulations, preventing accidental errors.
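As a rough illustration of what model explanations look like in code, the sketch below uses permutation importance, a generic open-source technique that measures how much a model’s held-out score drops when each feature is shuffled. It is an analog for model explanation in general, not DataRobot’s built-in explanation features, and the dataset and model are assumptions.

```python
# Illustrative sketch of model explanation via permutation importance
# (a generic technique; not DataRobot's built-in explanations).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Explanations like these give business managers a concrete basis for checking that a model’s behavior lines up with business rules and regulations before it is put into production.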

Conclusion

Are you currently building AIs artisanally? Are your current data science tools and processes slow, producing results of variable quality? If so, it’s time to upgrade to a model factory that uses automated machine learning to build the latest generation of human-friendly machine learning models at volume and with consistent quality. Click here to arrange a demonstration of DataRobot’s automated machine learning for AI you can trust.


About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
