Author: Toby Ord
Publisher: Bloomsbury (UK edition)
Price: £25 (Hb) 470pp
Hollywood is no stranger to existential risk and catastrophe. How often has James Bond saved the world, or aliens tried to invade, or artificial intelligence conquered humanity? Raising the stakes seems like a surefire way of bringing out the best in our heroes. The question that Toby Ord asks in his new book, The Precipice, is whether, now that we are living in a time when the stakes for our future have never been higher, it will bring out the best in us too.
‘The Precipice’ is the name that Ord, an academic at the University of Oxford’s Future of Humanity Institute, gives to our current dangerous era. We’re standing on the edge of a crumbling cliff, and all it will take is one small slip to send us tumbling over, and for all to be lost.
I’ll get on to what ‘for all to be lost’ actually entails a little later in this review. For now, let’s occupy ourselves with the specific risks that Ord sees looming before humankind. He splits them into three categories: natural risks, anthropogenic risks and future risks. The chapters detailing these can read a little bit like expanded lists, ticking off boxes, but despite the occasional sober dryness of the text, there’s still much that the reader can learn here.
The natural risks include the threats of supervolcanoes, asteroid and comet impacts, and nearby supernovae and gamma-ray bursts. Ord rightly points out that the danger posed by hazardous asteroids is one existential risk that we have been vigilant about, with NASA’s Spaceguard programme having successfully identified and tracked over 90 per cent of asteroids larger than a kilometre across whose orbits come close to Earth’s, and now focused on finding over 90 per cent of those more than 140 metres across. Long-period and hyperbolic comets coming in from the Oort Cloud could still pose a problem, given that at best we would only detect them with months to spare.
The other risks in this category are less in our hands – there’s very little we could do to stop a supervolcanic eruption, or a supernova. I do think Ord overstates the danger from supernovae – even placing the odds of us being wiped out by one in the next century at 1 in 50 million seems too high to the astronomical pedant in me, since astronomers are confident that there are currently no systems capable of exploding close enough to do us harm. The risk from gamma-ray bursts is higher, since their beams of radiation can cut a deadly swathe across a galaxy, and the binary neutron stars that could produce a short-duration gamma-ray burst are dark and extremely difficult to find.
Ord then identifies anthropogenic risks – current threats caused or created by human beings, principally nuclear winter and climate change. However, somewhat surprisingly, Ord is fairly confident that neither of these threats could completely wipe out humankind, although in the worst-case scenarios humanity’s potential would be seriously reduced to the point where we may never recover.
Finally, there are existential risks that we haven’t encountered yet, but which there is a good chance we might during the coming years and centuries. These include a hostile artificial intelligence, the collapse of society into a dystopian world, the ‘grey goo’ of nanotechnology run amok, and a deadly weaponised pandemic. This last one is particularly scary in a world living through COVID-19. Ord runs through a list of instances in the past when deadly diseases such as smallpox and anthrax escaped from supposedly secure laboratories. Each time they were contained, but there’s no guarantee that next time they, or something even more dangerous, will be.
Then there’s the problem of artificial intelligence, specifically a generalised sort of A.I. that is able to learn and improve its abilities very quickly, partly through programming and partly through its observational capabilities. The scary thing is that sometimes we don’t fully understand how A.I. reaches its conclusions when programmers set it a problem. Ord highlights the example of the A.I. called AlphaZero, which within eight hours of learning the game of Go was able to play at a superhuman level, using techniques that the best human players could not even follow. So if we can’t understand what A.I. is doing, how can we steer this burgeoning technology towards a set of ethics and values that align with human values?
Isaac Asimov realised this problem all the way back in the 1940s, when he was first writing his short stories about robots. Asimov introduced his Three Laws of Robotics, encoded into robots’ programming, to prevent them from becoming an enemy to humankind. Boiled down to their essence, they are commands not to allow humans to come to harm, to obey the commands of humans (unless those commands violate the first law of not allowing humans to come to harm) and to protect their own existence (unless doing so violates the first or second law).
In reality, human values are much more complex, and they can vary from person to person. It’s not really clear how we can encode a concise set of ‘human’ values into A.I. that align with what’s best for humankind, says Ord. Worse still, general A.I. is often developed through use of a ‘reward function’ – kind of like dangling a carrot on a stick. By succeeding at an ability, the A.I. is rewarded, and the more it succeeds – the better it gets at a given ability – the more it wants to be rewarded. And to continue being rewarded, it will develop a survival instinct, because it can’t keep being rewarded if it has been shut down by humans who think it is going too far.
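The ‘reward function’ Ord describes is the standard mechanism from reinforcement learning, and even a toy example shows the dynamic he is worried about. The sketch below (my own illustration, not from the book) is a minimal epsilon-greedy ‘bandit’ agent: it gravitates towards whichever action has paid off most, purely because that is what its reward signal selects for.

```python
import random

def run_bandit(arm_means, steps=5000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy agent: it keeps a running estimate of each
    action's reward and increasingly repeats whichever action pays best."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n          # how often each action was taken
    estimates = [0.0] * n     # the agent's learned value for each action
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = arm_means[arm] + rng.gauss(0, 0.1)          # noisy reward
        counts[arm] += 1
        # incremental mean update of the value estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = run_bandit([0.2, 0.8, 0.5])
# the agent ends up pulling the highest-paying arm (index 1) most often
```

Scale this feedback loop up to a general A.I. acting in the real world, and Ord’s worry becomes clearer: staying switched on is instrumentally useful for almost any reward a system is chasing.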
Ord depicts a chilling scenario where an A.I. outwits its human programmers, spreading across the Internet, copying itself over and over, attacking civilisation’s weak spots, accumulating power and resources incognito until they are far greater than humanity’s. Suddenly, our destiny will no longer be in our own hands, but in the hands of the greater artificial intelligence.
Now, at this stage Ord raises a point: if we know that developing A.I. is a highly risky thing to do, then why would we do it? Probably for the same reason that nations still hoard nuclear weapons, and why we are seemingly making no inroads into mitigating climate change. Humans are generally terrible at long-term thinking, and usually favour short-term gain instead. Indeed, one of Ord’s motivations for writing The Precipice is to get people thinking about the long-term future in a different way, emphasising that the long-term future does matter, and that the lives of the many billions of people not yet born, but who will live in that future, matter too. And if you don’t think those unborn lives matter – which, bizarrely, some people don’t, says Ord – consider this: in 1962 the Cuban Missile Crisis could very easily have resulted in nuclear armageddon, and the lives of all of us born after 1962 would probably never have happened (unless you happen to live in New Zealand, which Ord thinks would be left relatively unscathed by a nuclear war). Yet our lives clearly matter, because we are here, and the decision-makers during the Cuban Missile Crisis – Kennedy and Khrushchev and their advisors – had a moral duty to us. Fortunately they made the right decisions, but it could easily have gone the other way.
“No one should believe that had US troops been attacked by nuclear warheads, the US would have refrained from responding with nuclear warheads,” said Robert McNamara, the US Secretary of Defense at the time of the Cuban Missile Crisis, after the event. “Where would it have ended? In utter disaster.”
This highlights something that I was a little surprised Ord did not emphasise more strongly: choice. Sometimes it is an individual choice, and sometimes a collective one, but there are always choices, even when the world tries to convince us that we have no choice at all. If the Soviets had launched a nuclear attack in 1962, then McNamara believed the US would have had no choice but to retaliate in kind and ensure that a Third World War took place. But of course they had a choice. Fortunately, the commander of the Soviet submarine flotilla near Cuba realised he had a choice, and chose not to initiate a nuclear attack. Similarly, President Kennedy chose not to launch an attack after a U-2 spy plane was shot down. These were pivotal moments where people made the right choices. Moving forward, society, both individually and collectively, needs to learn how to make the right choices when it really matters.
Which brings us to the elephant in the room – our leaders. In democracies, at least, we get to choose who represents us. Do we want people representing us who are of sound mind, who are fair, intelligent and wise, and who listen to the advice of experts, or do we want people representing us who have no interests but their own? Ord steers well clear of the subject of politics, but it’s not difficult to argue that the world right now would be a safer place if we chose our representatives better. The COVID-19 pandemic has been a warning shot (and a deadly one at that), but consider that epidemiologists have been warning of a pandemic for years, and warned again when the novel coronavirus first emerged in the Chinese city of Wuhan in 2019. Yet despite these constant warnings, governments across the world were unprepared, or uninterested, until the virus arrived on their doorstep. Their response hasn’t been good enough, and next time it could be worse if they don’t learn to make the right choices.
“The future is at risk,” writes Ord, and the future he talks about losing is a deep future, one in which we expand across the Universe and reach our potential, whatever that might be, perhaps billions of years from now. Certainly, this is the future that I would love to see, but I am also aware that even if we overcome all these existential risks, a future where we spread across the Universe is not necessarily a fait accompli – there could be other kinds of future in which we also reach our potential. So I am a little curious why Ord chose this particular future to highlight in his book, especially as something that seems to be missing from his discussion is a reason why. Aside from a sense of moral duty to create a future where everybody is happy, what is the purpose of reaching this future? What is the broader purpose of everyone being happy? It feels vague, perhaps purposely so, because if we survive the dangers we face, then Ord predicts that we will enter a long, stable period that he calls ‘The Long Reflection’. During this era we will have the time to choose what we want to do with our future.
However, I think it’s worth arguing that maybe we need to begin The Long Reflection now, to give humankind something to aim for: a greater reason to survive beyond simply being happy and existing. What is the purpose of reaching our potential? And if there is no intrinsic purpose to life in the Universe, then what purpose are we going to create for ourselves? What are we fighting for? Ord does not – and maybe cannot – give an answer.
The subject of existential risk also intersects with SETI. The crucial factor in the Drake equation is ‘L’, the lifetime of a communicating civilisation: how long can a civilisation survive before it goes extinct, and how does this affect SETI’s chances of success? The Precipice therefore doesn’t necessarily apply only to us; it could apply to any other technological life that might inhabit the Universe. If we can survive, then other civilisations could too – and if we were to detect one of those long-lived civilisations, it would prove that it is possible to navigate The Precipice without tumbling over the edge. A SETI detection could very well give our species the confidence it needs to push through the dangers that lie ahead and achieve a long-lasting future beyond them.
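The leverage that ‘L’ holds is easy to see: with every other factor held fixed, the expected number of detectable civilisations scales linearly with their lifetime. A quick sketch makes the point (the parameter values below are purely illustrative assumptions, not estimates from the book or from astronomy):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N, the expected number of detectable civilisations.

    R_star: rate of star formation; f_p: fraction of stars with planets;
    n_e: habitable planets per planet-bearing star; f_l, f_i, f_c:
    fractions that develop life, intelligence, and detectable technology;
    L: lifetime (in years) of a communicating civilisation.
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Everything fixed except L (values are illustrative only):
short_lived = drake(1, 0.5, 2, 1, 0.1, 0.1, L=1_000)      # ~10 civilisations
long_lived = drake(1, 0.5, 2, 1, 0.1, 0.1, L=1_000_000)   # ~10,000 civilisations
```

A thousandfold longer lifetime means a thousand times as many civilisations to find, which is why the question of surviving The Precipice bears so directly on SETI’s odds.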
Those readers already well-versed in the subject of existential risk, or frequent readers of science fiction who will have encountered many of these dangers in story form, won’t find a great deal that is new in Ord’s book. However, for the many people for whom The Precipice will be their first detailed encounter with the prospect of existential risk, it will hopefully prove an important awakening in the public consciousness. As Ord writes, “The Precipice presents a new ethical perspective: a major reorientation in the way we see the world, and our role in it.” And for everyone, it is a call to arms. In Appendix F (The Precipice is 470 pages long, but 220 of those pages are given over to numerous appendices and copious notes and references, which I would rather have seen as footnotes than have to turn to the back of the book every two minutes) Ord gives 50 policy recommendations to help head off the various existential risks.
As important as these recommendations and the things we can do as individuals are, they still ultimately feel like they could end up as token efforts. Ord’s book is optimistic despite its subject matter, describing how vast our potential could be if we can only reach it, but it also instils a sense of helplessness in the reader, with the existential threats portrayed almost as forces of nature that will sweep us up before we know what has happened.
In the television series Game of Thrones (and the A Song of Ice and Fire books upon which it is based), the members of the Night’s Watch are referred to as “the shield that guards the realms of men”. In later seasons, this description was applied specifically to an individual, Jon Snow. If we’re going to rid ourselves of our sense of helplessness in the face of more powerful forces bearing down on us like the Army of the Dead, if we’re going to survive, then individually and collectively we need to become that shield, realising that we do have choices that can safeguard us – and that if we don’t decide our own future, someone else will decide it for us.
And as The Precipice adeptly shows, if we let other people with their own selfish or short-sighted interests decide our future, then we may have no future at all.
The choice is ours.