AI-Enabling Analysis, Interpretation, and Insight Delivery for Mission Success
As a trusted partner, DataRobot is helping the intelligence community (IC) ensure that its teams can leverage AI effectively and efficiently. We do this by closing the gap between data collection and decisions and by enhancing the delivery of needed data interpretation to decision-makers.
Provide speed advantage to intelligence analysis through AI-enhanced data intake and pattern recognition.
Curating intelligence is one of the biggest challenges facing the intelligence community. With dozens of sources ranging from human intelligence to signals and open-source intelligence, sifting through the noisy data in thousands of reports to determine what warrants further scrutiny or potential action by a decision-maker is labor-intensive. With AI, the intelligence community can make this workload more manageable, including by leveraging advanced surveillance capabilities such as targeting through AI-enabled geospatial analysis. With AI-augmented intelligence, skilled analysts can spend more time understanding the most actionable data, events, and insights that drive decision-making.
Foster AI fluency across the workforce through AI education and enablement.
With an ongoing shortage of data science talent, it is neither practical nor possible to train large numbers of people to hand-code AI as a path to broader AI adoption across the intelligence community. However, by leveraging automated enterprise AI platforms such as DataRobot, along with the associated enablement, intelligence agencies can build a more AI-driven culture and readily upskill the existing workforce so that everyone understands how AI can enhance their mission. DataRobot delivers a broad, relentlessly practical AI program that cultivates “AI in the DNA” across an entire organization, equipping the whole workforce to identify opportunities and deliver mission impact through AI.
Identify and protect against cybersecurity threats across networks and improve availability of threat intelligence.
A 2019 assessment from the Director of National Intelligence (DNI) warned that we face a “perfect storm” of information technology (IT) vulnerabilities. These weaknesses stem from the proliferation of software and network technologies and threaten the entire country: U.S. government agencies, academic and research institutions, and the commercial sector. With DataRobot, agencies can quickly analyze huge volumes of network and open-source data to identify advanced persistent threats, predict the presence of malware on a system, flag network threats, and strengthen the cyberdefenses that protect the security of intelligence systems and networks.
Protect trust in analysis and decision criteria by ensuring transparency and guardrails for AI.
When AI algorithms are locked in a “black box,” preventing humans from understanding how findings were reached, the technology is difficult to trust, because human experts cannot explain the AI’s conclusions. With DataRobot’s Explainable AI, the technology is as transparent as possible, giving model owners in the intelligence community access to the AI’s underlying decision-making processes. This enables them to readily understand how the AI reached its findings and empowers them to make adjustments as needed. In addition, because most AI operates in evolving systems with fuzzy data, deployed models will at some point be asked to score unusual, anomalous, or unexpected data. Intelligence agencies therefore need to monitor for data drift and take action in real time to protect the integrity of model predictions. DataRobot’s Humble AI lets agencies define a set of rules for any deployed model that are applied synchronously at prediction time. Each rule pairs a condition to be checked at prediction time with an action to be taken if the condition is met. With Humble AI, agencies get real-time analysis and protection for the predictions generated by any deployed model, making those models more trustworthy.
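The condition-plus-action pattern described above can be sketched generically. This is a minimal illustration of the idea, not the DataRobot Humble AI API: the rule names, thresholds, and fallback values below are hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

# Generic sketch of a prediction-time guardrail: each rule pairs a
# condition (checked against the raw prediction) with an action
# (applied when the condition is met).
@dataclass
class HumilityRule:
    name: str
    condition: Callable[[float], bool]
    action: Callable[[float], float]

def apply_rules(prediction: float, rules: List[HumilityRule]) -> float:
    """Apply each rule in order; a rule fires only if its condition holds."""
    for rule in rules:
        if rule.condition(prediction):
            prediction = rule.action(prediction)
    return prediction

# Hypothetical rules: override scores in an uncertain band with a
# conservative default, and clip out-of-range values.
rules = [
    HumilityRule(
        name="uncertain_band_override",
        condition=lambda p: 0.45 <= p <= 0.55,
        action=lambda p: 0.5,  # assumed "no-decision" fallback value
    ),
    HumilityRule(
        name="clip_out_of_range",
        condition=lambda p: p < 0.0 or p > 1.0,
        action=lambda p: min(max(p, 0.0), 1.0),
    ),
]

print(apply_rules(0.47, rules))  # falls in the uncertain band → 0.5
print(apply_rules(0.92, rules))  # no rule fires → 0.92
```

Because the rules run synchronously at prediction time, the guarded value is what downstream consumers see, and each fired rule can also be logged for the real-time monitoring described above.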