Dear Futurists,
A major complication in preparing for the next few years is that there are so many hard-to-predict factors that could quickly surge in importance – each one of which could overturn our previous assumptions about how human experience is likely to develop.
One response to this large-scale uncertainty is to try to divide and conquer. That is, to separate out different drivers of change and understand them in isolation, and then to consider what happens when two or more of these drivers apply at the same time.
With two different axes of change in play, the result, broadly speaking, is four different scenarios on a 2×2 grid. Taking each of the resulting four scenarios seriously can be a great workout for our imagination and understanding. It’s like going to the foresight gym: we stress muscles we didn’t even know could be stressed, and feel a bit sore for a while. But, all being well, we end up stronger and fitter – better prepared for future shocks.
That’s an exercise that some of us will be undertaking on Monday 15th December, in a pre-Christmas gathering in Fleet Street’s venerable Ye Olde Cock Tavern. For more details of this event and other futurist news, read on.
1.) Beyond foresight-as-usual (15th Dec)
In the years leading up to 2030, the drivers of change may be more pressing and more disruptive than in previous decades. But many of the classic techniques of foresight remain as relevant as ever.
We’re fortunate that on Monday 15th December, we’ll be joined by Rohit Talwar, Global Futurist and CEO of Fast Future, to guide us through a joint interactive exercise to explore scenarios for 2030.
As a starting point, we’ll be considering two primary axes of uncertainty:
- AI Adoption: AI offering tools that benefit all humanity, vs. AI increasingly controlling lives, businesses, governments, and the economy
- Geopolitical landscape: Increasing cohesion with stronger global institutions, vs. Multipolar fragmentation and failing institutions
Earlier today, I asked ChatGPT to produce an image depicting the resulting four scenarios. Here’s what it produced:
(You may wonder what various aspects of this image are meant to imply. There’s room for different answers…)
Beyond these primary axes, there are lots of other variables that could be added into the mix in the run-up to 2030:
- China-ascendant vs. US-dominant
- India changes everything vs. Europe changes everything
- Religions revive vs. secularism rises
- Universal Generous Income vs. unprecedented inequality
- Cryptocurrencies take over vs. Cryptobubble deflates
- Climate chaos vs. transition to green economy
- Human relationships wither vs. Humanity rediscovers human uniqueness
- “1984” (frightened into acquiescence) vs. “Brave New World” (distracted into acquiescence)
- Democracy vs. Oligarchy
- Cyber-hackers uncontrollable vs. Privacy and security preserved
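For anyone who likes to see the mechanics spelled out, here is a minimal illustrative sketch in Python. The axis names and poles below are simply examples lifted from the lists above (not a fixed framework): crossing two binary axes yields the familiar four-box 2×2 grid, and each additional axis doubles the number of scenario combinations to consider.

```python
from itertools import product

# Each axis of uncertainty has two contrasting outcomes (poles).
# These labels are illustrative, taken from the axes discussed above.
axes = {
    "AI adoption": ["AI benefits all humanity", "AI increasingly controls lives"],
    "Geopolitics": ["Stronger global institutions", "Multipolar fragmentation"],
    # Uncomment an extra axis to see the scenario count double:
    # "Climate": ["Climate chaos", "Transition to green economy"],
}

# The cross-product of the poles enumerates every cell of the scenario grid.
scenarios = list(product(*axes.values()))
print(f"{len(scenarios)} scenarios from {len(axes)} axes:")
for combo in scenarios:
    print("  " + " + ".join(combo))
```

With the two primary axes this prints four scenarios; adding all ten of the extra variables listed above would produce over four thousand, which is why scenario exercises deliberately focus on a small number of axes at a time.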
You will likely judge some of these potential disruptions as more profound, and others as less significant (or as "overhyped").
Either way, I anticipate that a lively real-time exchange of views on the possible interplay of these change drivers will enrich our imagination and strengthen our foresight muscles. It may also help us to commit to wiser action to alter the trajectory between today and 2030.
Alongside the “serious” discussion, expect lots of informal networking at the event – even some merry-making.
To register to take part on the 15th:
- Visit this Meetup Page and RSVP Yes
- Important: note the requirement on that page to select your pre-Christmas meal in advance
*** Due to the venue being busy in the run-up to Christmas, anyone who fails to submit their order ahead of 8pm on Monday 8th may miss out on any meal on the 15th (or may have to wait a long time for their order to be served). ***
2.) Global Catastrophic Risks Report 2026
Earlier on the same day – Monday 15th December – between 2pm and 3pm UK time – there will be an opportunity to join the online launch of the Global Catastrophic Risks 2026 Report.
To quote from the announcement of this report by the Global Challenges Foundation:
Global catastrophic risks are jeopardizing life on Earth. Improved global governance is needed to secure our future.
The Global Catastrophic Risks 2026 report looks at five of the biggest risks facing humanity today — climate change, biodiversity collapse, weapons of mass destruction, artificial intelligence (AI) in military decision-making and near Earth asteroids.
Some of the world’s leading experts outline their assessments of these risks in their respective fields throughout the report. All of them call for stronger governance and a more systematic approach across institutions and risk domains.
The launch will feature Johan Rockström, Manjana Milkoreit, David Obura, Denise Garcia, Wilfred Wan, and Romana Kofler, and will be moderated by Heba Aly.
To register to attend this online launch visit https://bit.ly/GCR2026.
3.) Statement on Superintelligence passes 128k signatures
On 22nd October, the Future of Life Institute released a very short Statement on Superintelligence:
We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in
Less than seven weeks later, over 128,000 people have signed – researchers, scientists, policymakers, faith leaders, and people from business, the arts, media, AI companies, and many other walks of life.
This is by no means a negative statement; the preamble sets the context as follows: "Innovative AI tools may bring unprecedented health and prosperity".
Despite the steady flow of new signatures, some critics have raised a number of reasons for not signing the letter:
- Superintelligence lacks a clear definition, they claim – despite this description from the preamble, “superintelligence… can significantly outperform all humans on essentially all cognitive tasks”
- Superintelligence is likely to be intrinsically safe, they claim – in which case, why not obtain “broad scientific consensus” to back up that claim?
- Scientific consensus is often perverse, they claim – in which case, we would all be in a very perilous situation (but I personally have a much higher regard for the consensus of the scientific community)
- Public buy-in is often perverse too, they claim – but are we really saying that accelerationist big tech should be able to ride roughshod over public opposition to what they’re building?
- Humanity, they claim, is incapable of the required degree of global collaboration – despite encouraging evidence that humanity can coordinate when a stark risk is very apparent and provided there are credible pathways forward to manage the risk.
In short, I find all these objections to be far from convincing. I had no hesitation in adding my name to the set of signatories on the first day the letter was published.
I encourage you to add your signature too!
Another person who signed the letter is the guest in the most recent London Futurists Podcast episode. Which takes me to the final item in this newsletter…
4.) What’s your p(Pause)?
Many thanks to Holly Elmore, the Executive Director of Pause AI US, for sharing, in the latest London Futurists Podcast episode, many of her insights about the strengths and weaknesses of the AI safety community.
Instead of the customary question “What’s your p(Doom)?”, the conversation featured the question “What’s your p(Pause)?”:
What’s the probability that humanity will collectively agree to, and follow, without any cheating, a global pause on the development of unsafe AI?
Other points covered in the conversation include:
- Holly’s personal journey from studying wild animal welfare to studying the behaviour of wild AI development companies
- Special complications of possible digital sentience
- Why the Pause campaign makes sense even if your p(Pause) is as low as 20%
- Why many in the overall AI safety community seem to carry personal motivational conflicts
- Why the community was unnecessarily slow to move from researching technical safety to adopting public advocacy
- Lessons from the venerable book “Crossing the Chasm”.
I believe you will find it well worth your time to listen to the conversation.
Two more very interesting episodes are lined up for release soon – keep your eyes open for more news.
// David W. Wood
Chair, London Futurists



