Dear Futurists,
No-one likes to contemplate the probability of global catastrophe. But that’s what I was asked to do, during a Virtual Enlightenment Salon on Sunday.
The subject of the salon was “Reducing Existential Risk from the Russia-Ukraine Conflict”. It was organised by the U.S. Transhumanist Party. You can see a recording of it here. I was one of a number of panellists.
Around 75 minutes into the discussion, the host of the event, Gennady Stolyarov, asked each of the panellists to give their assessment of the probability of the outbreak of nuclear war, based on their understanding of the situation in Ukraine. To start that part of the discussion, Gennady offered his own estimate: “somewhere in the vicinity of ten percent”.
Before reading further, you might like to pause, to decide your own answer to that question.
* * *
When the virtual microphone reached me, I suggested that the probability of a global nuclear war this year was “less than one percent… but more than one in a thousand”. Vladimir Putin wants people to think the probability is a lot higher, I said, so that, out of fear, they would give him more of what he demands. Putin has an advantage if people think he is mad, whereas in reality he remains rationally self-interested.
I went on: Even if there were limited usage of tactical nuclear weapons in Ukraine itself, the group of people around Putin would likely intervene to stop him before any escalation to a global nuclear war. It wouldn’t be the first time a Russian or Soviet leader had been deposed by underlings.
Of course, there’s no room for complacency. Russia’s steamroller invasion of Ukraine, the ensuing widespread crimes against humanity, and Putin’s sly sabre-rattling of his nuclear arsenal have brought back into the public mind something that most of us had stopped thinking about. Perhaps mushroom clouds will fill the skies in the months ahead, with millions of people vaporised in an instant, and with survivors of the initial blasts around the world mainly dying a slow death in the ensuing nuclear winter. It’s a bitterly unpleasant thought.
But it’s dangerous to focus exclusively on one potential cause of global catastrophe. Yes, the nuclear arsenals of military superpowers are intrinsically unsafe, being subject to clumsy stumbles over neglected tripwires. But what raises the likelihood of actual use are the attitudes and governing mode of the people with the authority to initiate a missile launch. When a country’s governance is deeply dysfunctional, the odds increase. Likewise if the ruling regime is flushed with the apparent success of grabbing more land and resources through nuclear intimidation. That’s the horrible danger of appeasement.
Nor should we ignore the possibility of catastrophes arising from chemical or biological weapons. Indeed, in due course, as more powerful AI systems are deployed more widely within government and military frameworks, these systems will change the risk landscape again.
How can we usefully assess all these risks, and determine the best course of action as a result? Read on.
1.) Building an existential risk observatory – Sat 26th March
Careful estimates of the likelihood of various catastrophic or existential disasters serve several important purposes:
- They clarify which risks deserve more attention (compared to others which, perhaps, are already being closely monitored)
- They highlight factors which could increase these risks – as well as others which could decrease these risks
- They suggest “canary signals” – measurements that should be checked on a regular basis for advance warning of potential imminent eruptions, whether literal or metaphorical.
Perhaps the most careful set of such estimates was included in the 2020 book by Oxford philosopher Toby Ord, The Precipice: Existential Risk and the Future of Humanity. That book is a touch too philosophical for my liking at times, but it marshals lots of analysis and data in a compelling format.
One group of people who were deeply influenced by The Precipice were the co-founders of the organisation “Existential Risk Observatory”. Their website starts as follows:
Human extinction risk has increased from almost zero to an estimated likelihood of one in six in the next hundred years, according to recent research from Oxford’s Future of Humanity Institute. We think this likelihood is unacceptably high.
We also believe that the first step towards decreasing existential risk is awareness. Therefore, the Existential Risk Observatory is committed to reducing human existential risk by informing the public debate.
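As an aside, it’s worth pausing on how probabilities over different timescales relate to each other. On the simplifying assumptions that the risk stays constant from year to year and that each year is independent, “one in six in the next hundred years” corresponds to a chance of roughly 0.18% in any single year. Here’s a minimal Python sketch of that conversion – my own back-of-envelope illustration, not a calculation from the Observatory or from The Precipice:

```python
# Back-of-envelope conversion between a probability over a century and
# an equivalent constant annual probability. Simplifying assumptions:
# the hazard is the same every year, and years are independent.

def annual_from_century(p_century: float, years: int = 100) -> float:
    """Annual probability implied by a total probability over `years` years."""
    return 1 - (1 - p_century) ** (1 / years)

def century_from_annual(p_annual: float, years: int = 100) -> float:
    """Total probability over `years` years implied by a constant annual probability."""
    return 1 - (1 - p_annual) ** years

print(f"{annual_from_century(1/6):.2%}")    # 1-in-6 per century -> ~0.18% per year
print(f"{century_from_annual(0.001):.1%}")  # 0.1% per year -> ~9.5% per century
print(f"{century_from_annual(0.01):.1%}")   # 1% per year  -> ~63.4% per century
```

Seen through that lens, an annual probability in the range I offered for nuclear war this year – between one in a thousand and one percent – would, if it persisted, compound over a century to somewhere between roughly 10% and 63%. Small annual probabilities add up.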
This Saturday, 26th March, Otto Barten, Director of the Existential Risk Observatory, will be joining London Futurists to describe the work of the Observatory, their plans for the future, and options for collaboration.
Click here for more details of this event and to register to attend.
2.) Cambridge Conference on Catastrophic Risk, 19-21 April
A ninety-minute webinar can enable a good exploration of many key points regarding potential catastrophic or existential risks, but it will by no means answer every question that arises.
For a significantly deeper dive, I recommend the three-day conference running from 19th to 21st April, “Cambridge Conference on Catastrophic Risk”.
You can find the agenda here – with each of the three days having its own theme:
- Tue 19th: Future risks, and how we can study them
- Wed 20th: Real catastrophes, and what we can learn from them
- Thu 21st: Effective global responses that manage the risks, and how we can achieve them.
Click here for more information about CCCR2022 and to register to attend.
Note: my own introduction to the subject of catastrophic risks – which I sometimes call “landmines” – is in this video:
That’s a video from the Vital Syllabus series – which brings me to the next topic…
3.) Progress with the Vital Syllabus
A major part of the answer to catastrophic risk is “better education”.
Indeed, every serious discussion about significantly improving the future tends to come round to the need for significantly better education.
I’ve had that conversation literally hundreds of times since the formation of London Futurists in 2008.
Last year, I resolved to do something about it. Hence the Vital Syllabus project. Initial progress was fairly slow. But it is now picking up momentum.
Here’s a recording from the start of an open discussion about the project, held on Tuesday last week:
Since then, the number of videos included in the project has risen to 79. Yesterday, for the first time, there was at least one video in each of the 24 top-level areas of the Syllabus.
The best way to find out more about the ongoing activity in this project, to provide feedback about the videos already included, or to suggest additions or improvements to the project, is to join the #education channel of the London Futurists Slack. Click this link to find out how to do that.
The project will gradually be formalising a core team of reviewers and administrators. If you think you might like to become part of that team, please remember that, in the world of open source: talk is cheap, but contribution is king.
4.) An alternative view of the future – and of the past
If you missed our event last Saturday, you missed a very special exploration, by J. Storrs Hall, of how the future could have turned out very differently – and of how society’s development from around the 1970s onward took some unfortunate turns.
As you would imagine, there are some key implications for the kinds of actions we need to take in the present, if we want to avoid similar missteps in the future.
You can watch a replay of the event via this recording:
5.) Other forthcoming events of interest
This page is where you can find out more about the other events already scheduled in the London Futurists calendar.
I won’t say much about these other events for the time being, except to highlight one. If you’re interested in early access to some of the findings of our recent survey “The Rise and Implications of AGI” – which will be featured in our event on Saturday 2nd April – then click on the link for that event, where you will find in turn links to register for earlier, shorter briefings on 24th March (tomorrow!) and 31st March.
If you’re interested in progress toward the abolition of aging – the radical extension of healthy human longevity – you might like the Perpetual Life event which is taking place this Thursday (tomorrow). I will be the featured speaker, from 11pm UK time (7pm US East Coast time), and I’ll be discussing to what extent my views have changed since I published my book The Abolition of Aging in 2016. (Shockingly, that’s six years ago.)
That Perpetual Life event will be streaming here on YouTube. If you want to be more sociable, you can join a “Zoom Party” one hour earlier, and stay in that party for informal discussion long after the finish of my presentation and the ensuing Q&A.
And as a kind of taster for that discussion, I’ll also be taking part in a Twitter Space discussion from 9pm UK time today. (Note: you can listen to Twitter Space discussions via any web browser, but if you want to speak in one of them, you’ll need to attend using a Twitter app on your smartphone.)
// David W. Wood
Chair, London Futurists