In the 1970s, Stephen Hawking, who died on Wednesday at age 76, turned the physics world upside down when he announced that black holes aren’t so black after all, and that radiation can in fact escape from just outside a black hole’s boundary, called the event horizon.
That bombshell, which inspired a whole new way of looking at black holes through a quantum lens, would certainly not be the last time Hawking made shocking pronouncements about the nature of the cosmos.
Here, we revisit some of the most famous wagers and provocative statements that Hawking made during his more than 40 years of public life.
Decades of Black Hole Bets
To those familiar with Hawking’s work, which focused on the mysteries of black holes, it may be surprising that Hawking once bet against their existence. But the mischievous cosmologist had a long history of high-profile scientific wagers—many of which he lost.
On December 10, 1974, Hawking made a bet with Caltech theoretical physicist Kip Thorne over whether Cygnus X-1, a massive x-ray source in our galaxy, was a black hole. Both were fairly certain it was. But when push came to shove, Hawking bet against Cygnus X-1.
“This was a form of insurance policy for me. I have done a lot of work on black holes, and it would all be wasted if it turned out that black holes do not exist,” Hawking wrote in his 1988 book A Brief History of Time. “But in that case, I would have the consolation of winning my bet, which would win me four years of the magazine Private Eye.”
Years later, Hawking entered another black hole-related bet with Thorne and Caltech theoretical physicist John Preskill. In 1997, the trio wagered over whether a black hole destroys the information encoded in the objects it gravitationally devours. Thorne and Hawking bet that black holes do in fact destroy information—seemingly breaking a tenet of quantum mechanics. Preskill disagreed.
Black holes weren’t the only targets of Hawking’s scientific gambles. In 2012, scientists at CERN’s Large Hadron Collider made history when they discovered hints of the Higgs boson—the long-sought missing piece of the standard model of particle physics. (Read National Geographic’s coverage of the 2012 announcement.)
Theorized in the 1960s, the Higgs boson is the particle that interacts with most other subatomic particles to endow them with mass. But for decades, the Higgs proved fiendishly difficult to find—so much so that Hawking had a running bet with the University of Michigan’s Gordon Kane over whether the particle would ever be discovered.
“About a decade ago, I was in a conference in Korea, and Stephen was there,” Kane said in a 2012 interview with NPR. “And Stephen said, I’ll bet you that there is no Higgs boson. So, I immediately said, I’ll take that bet. Then when we arranged the details a little bit and settled on $100.”
Does Alien Life Pose a Danger?
In his later years, Hawking repeatedly warned about the dangers of humankind meeting alien civilizations. In his 2010 documentary series Into the Universe with Stephen Hawking, he suggested that alien civilizations sufficiently advanced to visit Earth may be hostile.
“Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they could reach,” he said. “Who knows what the limits would be?” And in the 2016 documentary Stephen Hawking’s Favorite Places, Hawking reiterated his views: “Meeting an advanced civilization could be like Native Americans encountering Columbus. That didn’t turn out so well.”
Other scientists pushed back against such warnings, arguing that any civilization capable of reaching Earth would already know we are here. “Any society with the capability to threaten Earth is overwhelmingly likely to already have the kit required to pick up the leakage we’ve been wafting skyward for seven decades,” Seth Shostak, a senior astronomer at the SETI Institute, wrote in a 2016 opinion piece for the Guardian. “And since we’ve been busy for a lifetime filling the seas of space with bottled messages marking our existence and position, it’s a bit silly to fret about new bottles.”
Artificial Intelligence: Godsend, or Doom?
Hawking also voiced alarm over the potential power—and downsides—of widespread artificial intelligence (AI), which he feared could pose an existential threat to humanity.
“The development of full artificial intelligence could spell the end of the human race,” he said in a 2014 interview with the BBC. “It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Hawking was not alone in airing concerns over AI research. Elon Musk, the celebrity CEO of Tesla and SpaceX, has publicly deemed AI an “existential threat.” But these public comments have hardly gone unchallenged, with some deriding them as “scare tactics.”
In later comments, Hawking emphasized that AI wouldn’t necessarily be all bad: “The potential benefits of creating intelligence are huge,” he said in a 2016 speech reported on by the Guardian. “Every aspect of our lives will be transformed.” But in his first Ask Me Anything on Reddit, Hawking voiced a warning about who would benefit from such technological advances.
“If machines produce everything we need, the outcome will depend on how things are distributed,” he said. “So far, the trend seems to be toward … technology driving ever-increasing inequality.”