Admiral James Winnefeld Jr.: The Third Horizon of Innovation
Admiral James A. “Sandy” Winnefeld Jr. joined us on the More Intelligent Tomorrow podcast to discuss the third horizon of innovation in technology and the role played by artificial intelligence.
Admiral Winnefeld began his naval career in 1978, flying F-14 fighter jets. Over the next 37 years, he served in the air, at sea, and on land, leading numerous commands, including in the Persian Gulf, Afghanistan, and Iraq. From 2011 to 2015, he served as Vice Chairman of the Joint Chiefs of Staff, the nation’s second-highest-ranking military officer. Now retired from the Navy, he writes frequently on subjects related to warfare and serves on the Board of Directors of the aerospace and defense company Raytheon Technologies. He is also the co-founder of SAFE Project, an organization dedicated to reversing the epidemic of fatal opioid addiction, which has directly affected his family.
The third horizon of innovation requires looking beyond current technologies and concepts to anticipate what comes next and rethinking how to approach the problem. Admiral Winnefeld points out that America’s adversaries are blending different types of warfare: conventional, political, economic, legal, and informational. “We don’t want to wake up one morning with a big surprise on our hands because someone out-innovated us with something that may have been obvious all along.”
What’s the next high-leverage weapon that terrorists or disruptors will seek? People worry about nuclear weapons, but terrorists shocked the world on 9/11 with a weapon no one had imagined: hijacked airliners flown into buildings. Cyber threats are mounting, such as the recent ransomware attack on the U.S. fuel supply system. If an attack on the United States took down the electrical power grid or the food supply chain for a significant length of time, millions could die.
That’s where third horizon innovators come in. Their job is to develop new concepts that address dilemmas, both lethal and non-lethal, that we can present to our adversaries—as a deterrent, in a war, or in response to disruptive events. After identifying those dilemmas, third horizon innovators need to figure out what technology we don’t have—because we’ve never needed it before—that would bring that vision to life. Admiral Winnefeld points out that these innovations don’t necessarily need to be executed today. But we need to be thinking about them.
The really grand ideas, he says, could come from almost anywhere: “A junior officer on deployment who spends their watch time thinking and putting disparate concepts together—a new idea that nobody’s ever thought of before. Or it could be a university researcher. Almost anyone.”
Increasingly, these innovations involve automation and artificial intelligence. Some in the tech community believe that AI can be employed in war but will never automate a kill decision. However, systems that can automate a response to a threat have existed for decades. In 1988, an American Aegis missile cruiser, its air defense system set to “auto special,” shot down an Iranian airliner, with the tragic loss of all 290 lives aboard.
The goal isn’t to kill civilians with drones, for example; it’s to engage combatants that the drones recognize, with a high degree of confidence, as lethal threats in that environment.
Admiral Winnefeld explains that the best protection against unintended consequences is to build ethical decision-making rules into the machine learning algorithms that drive lethal autonomous systems. This provides the highest possible degree of certainty, though never 100 percent, that such systems will be employed responsibly and within the law of armed conflict. Wherever possible, there should also be a way for a human to step in and stop these systems.
To hear more details about the third horizon of innovation and the role artificial intelligence plays in modern warfare, check out the More Intelligent Tomorrow episode. You can also listen everywhere you enjoy podcasts, including Apple Podcasts, Spotify, Stitcher, and Google Podcasts.