Dr. Hannah Fry on the Value of Mind-Blowing Mathematical Insights: More Intelligent Tomorrow, Episode #15
Dr. Hannah Fry is a mathematician, author, lecturer, BBC presenter, podcaster, and public speaker. Her TED Talk, The Mathematics of Love, has garnered over 5 million views. But before she became one of the most popular faces on BBC Four, Dr. Fry experienced a setback that triggered a prescient insight into one of today's most pressing tech topics: AI ethics.
In 2014, in the early days of big data, Dr. Fry presented her data analysis of the 2011 London riots at a conference in Berlin. The numbers showed how data and algorithms could help the police control an entire city's worth of people. Dr. Fry frankly admits that she had ignored Berlin's long history of police states, and the lecture was not well received:
“This audience—they just totally tore me apart in the Q&A—it was a bloodbath. I was standing on stage to these questions, and there were people heckling. And if people are heckling in maths lectures, you know you’re really in trouble, right?”
She returned to London with a fresh insight from her “wake-up call”:
“I started looking around me at the stuff that my colleagues were doing, and there was just total wanton disregard for these really fundamental ethics problems that people were just ignoring.”
Dr. Fry states that her “own mistake” was the inspiration for her latest book, Hello World: Being Human in the Age of Algorithms.
Our Hatred of Mathematics is Not Fundamental
Dr. Fry, host of Contagion! The BBC Four Pandemic, also notes that she is not on a “noble quest to make people love maths as much as possible,” but simply enjoys documenting “these totally mind-blowing insightful ways to look at the world that no one else is really talking about.”
She also hopes that the U.K. will become more like Singapore and South Korea and overcome its phobia of mathematics:
“Our hatred of mathematics is not fundamental. I don’t think it’s a human trait. I think it’s the way that we culturally think about it. And that is the problem rather than the subject itself.”
She also warned against the extreme views people often hold about mathematical models, especially at the start of events such as the COVID-19 pandemic:
“There is this habit of either thinking that mathematical models are, on the one extreme, supposed to be this absolute perfect crystal ball that allows you to predict the future and have precise understanding of exactly what’s gonna happen—which they’re not. Or, on the other end of the spectrum, people think that they’re just complete junk and basically educated guesswork. I don’t think they’re that either. I think they sit in the middle. And I think that when you are driving blind—as we certainly were in March—these are the only things you have.”
From Formula 1 to Deep Blue: How Math and Technology Influence Competition
Dr. Fry cites her quantitative analysis for Formula 1—and the sport’s reliance on math—as an example of math’s captivating potential:
“What you’re watching is essentially a gigantic mathematical competition. It’s what goes on behind that, behind the scenes, off the track. That’s the stuff that’s so exciting.”
Dr. Fry also details how Formula 1 teams were using an old Soviet invention, the laser microphone, in conjunction with KGB tactics to spy on the competition by observing the vibrations created by human conversation:
“You’d go outside Ferrari’s motorhome and you’d shine a little laser into a plant leaf or something inside, have your camera there, and then you’d go back and you’d work out what tactics they were gonna use, all of this stuff. I mean, it’s just for your analysis, right? It’s brilliant.”
Technology helps Formula 1 cars perform amazing feats, but it can also be used to psych out famed chess champions such as Garry Kasparov, who was “like a tornado.” Dr. Fry points out how Kasparov’s loss to Deep Blue exposed human fallibility:
“They deliberately coded this machine so that when it finished a computation, sometimes it would just play straight away. But then there were other times where it would just pause for a random amount of time before coming up with the answer, which would mean that sometimes there’ll be some very simple positions in which, to Kasparov, it would appear as though the machine was taking a really long time to work it out. So he’s then sitting there thinking, ‘Well, what is the machine thinking?’ and so on and so on.”
Kasparov could not intimidate Deep Blue. Deep Blue won.
The Danger of Overdiagnosis and the Promise of Haptic Feedback
Dr. Fry also uses breast cancer diagnosis as an example of where humans still have the edge over machines. She notes that many people have cancers in their bodies that would never “develop into anything that we needed to be concerned about,” while a machine could be perfectly capable of diagnosing and finding every single cancer cell in the body.
“It’s still not a great thing because ultimately it would mean that you would have millions of women, for instance, who would end up having unnecessary cancer treatments, and I think that there are some things where if you switched over to an automated decision, you can end up in some ways doing more damage than good.”
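The overdiagnosis trade-off Dr. Fry describes can be sketched as simple expected-value arithmetic. The numbers below are purely hypothetical (the episode gives no figures) and only illustrate why a perfectly sensitive automated screener can still do more harm than good:

```python
# Illustrative sketch with hypothetical numbers: a screener that flags
# every cancer also flags indolent cancers that would never have caused harm.
screened = 1_000_000        # women screened (hypothetical)
indolent_rate = 0.01        # fraction with cancers that would never progress (hypothetical)
machine_sensitivity = 1.0   # the machine finds every cancer, indolent or not

# Every flagged indolent cancer becomes an unnecessary course of treatment.
overdiagnosed = screened * indolent_rate * machine_sensitivity
print(f"Unnecessary treatments: {overdiagnosed:,.0f}")  # Unnecessary treatments: 10,000
```

Under these made-up rates, the "perfect" detector alone generates ten thousand unnecessary treatments per million women screened, which is the harm a human-in-the-loop judgment is meant to avoid.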
Finally, Dr. Fry foresees huge potential for haptic feedback. She talks of gloves that could help with everything from physical therapy to playing piano to changing car oil or rolling pastry.
“We have had the internet of things, and I think that the next step really is the internet of skills—that’s something that I’m very much looking forward to.”
To hear more of Dr. Hannah Fry’s invitation to honestly explore the true powers and limitations of the algorithms that already automate important decisions in nearly every aspect of modern life, check out Datarobot.com/podcast or http://datarobot.buzzsprout.com/. You can also listen wherever you already enjoy podcasts, including Apple Podcasts, Spotify, Stitcher, and Google Podcasts.