Applying Visual AI to Legacy Security Systems
Security inspections are part of modern life. According to the Federal Aviation Administration (FAA), approximately 2.9 million passengers fly in and out of U.S. airports every day, and more than 150 million Americans attend professional sporting events each year. At events like these, attendees must be screened for weapons and contraband effectively and efficiently to keep people safe while maintaining a high level of service.
Artificial intelligence (AI) can accelerate inspections by automating some reviews and prioritizing others, and unlike humans at the end of a long shift, an AI’s performance does not degrade over time. This blog post will demonstrate how the DataRobot team applied DataRobot’s Visual AI and AutoML capabilities to rapidly build models capable of detecting firearms in bags using open-source databases of X-ray security scans.
Dataset and Modeling Process
The training dataset contains approximately 5,000 X-ray security images, roughly 30% of which include a firearm. Of note, DataRobot supports both multilabel and multiclass classification (e.g., identifying multiple objects in a single X-ray). For this example, we use only binary classification: does this bag contain a firearm or not?
There is variability in the images used to train the models because they were captured on three different types of security X-ray machines, resulting in differences in resolution and background noise. Although this variability can degrade model performance, DataRobot overcomes the obstacle and still produces high-performing models by automatically applying industry best practices through modeling blueprints.
Another obstacle to creating high-performing computer vision models is that training datasets may not contain enough images of the target object against different backgrounds and from different angles. This data deficiency can cause the model to fail to recognize the target object (e.g., firearms) when scoring new images. DataRobot's Visual AI overcomes this obstacle with automated image augmentation, which flips, rotates, and scales images to increase the number of observations for each object in the training dataset and improve the probability that the model correctly identifies objects when scoring new records.
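The augmentation idea can be sketched outside DataRobot with a few NumPy array operations. This is a minimal illustration, not DataRobot's API: the function name, the variant list, and the tiny stand-in "scan" are all assumptions made for the example.

```python
import numpy as np

def augment_image(image: np.ndarray) -> list[np.ndarray]:
    """Generate flipped and rotated variants of a single scan.

    `image` is a 2-D grayscale array standing in for an X-ray scan.
    Real augmentation pipelines also rescale and crop; this sketch
    keeps only the flip/rotate operations for clarity.
    """
    return [
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # 90-degree rotation (counterclockwise)
        np.rot90(image, k=2),  # 180-degree rotation
    ]

# A tiny 2x3 "scan" makes the effect easy to verify by eye.
scan = np.array([[1, 2, 3],
                 [4, 5, 6]])
augmented = augment_image(scan)
# One original image yields five training observations (original + 4 variants),
# each showing the same "object" from a different orientation.
```

Each variant preserves the pixel content while changing orientation, which is why augmentation helps a model generalize to objects seen from new angles.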
Auto-generated activation maps improve explainability by illustrating which areas of an image are most important for a model’s predictions (similar to feature impact on other models). DataRobot’s AutoML automatically builds and compares hundreds of model blueprints to find the best performing model for identifying firearms. In this example, the winning blueprint was a neural network classifier that was built without a requirement for expensive processors like GPUs.
After only a few hours, DataRobot trained and validated a model that is about 90% accurate at identifying images containing firearms. With additional tuning, performance can be improved further. For example, an organization seeking to minimize false negatives (i.e., failing to identify firearms in X-rays) can adjust the prediction threshold to optimize for that criterion.
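The threshold trade-off can be sketched in a few lines of plain Python. The scores and labels below are hypothetical, invented for illustration; they are not outputs of the model described above.

```python
def classify(probability: float, threshold: float) -> bool:
    """Flag a bag as containing a firearm when the model's score clears the threshold."""
    return probability >= threshold

# Hypothetical model scores for five bags; the last three actually contain firearms.
scores = [0.10, 0.25, 0.48, 0.55, 0.91]
labels = [False, False, True, True, True]

def false_negatives(threshold: float) -> int:
    """Count firearm-containing bags the model would fail to flag at this threshold."""
    return sum(1 for p, y in zip(scores, labels) if y and not classify(p, threshold))

fn_default = false_negatives(0.5)  # the 0.48-score firearm slips through -> 1
fn_lowered = false_negatives(0.4)  # lowering the threshold catches it -> 0
```

Lowering the threshold catches more true firearms at the cost of more false alarms on benign bags, which is exactly the trade-off a security operator tunes for.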
DataRobot’s combination of capabilities allows users to build and deploy a high-performing Visual AI object detection model in only a few hours with no coding. This model can be quickly improved with additional advanced tuning and deployed to cloud-connected or edge environments. Applying DataRobot to this problem requires no new security scanning machines and shows how organizations can apply advanced Visual AI capabilities to existing infrastructure for rapid security improvements. Contact a member of the DataRobot team to learn more and see how your organization can become AI-driven.