Dear Futurists,
Technology seems to be on a trajectory to change… everything.
It’s already changing the human mind, bending and stretching our consciousness in new ways. It’s also giving rise to artificial minds – extensive software systems running on vast banks of silicon.
But the biggest changes in minds surely still lie ahead. Read on.
1.) Toward a global brain?
There’s good reason to be concerned about unintended consequences of software that has increasing power.
The adage “power tends to corrupt, and absolute power corrupts absolutely” applies to software as well as to humans. More precisely, it applies to the corporations and organisations that deploy software with ever greater power.
That software can gather together all kinds of information about each of us, as potential customers, potential voters, and potential partners in all kinds of activity. Armed with this knowledge, that software can identify the best time, and the best way, to interact with us, to send us down a different mental trajectory from the one we were previously following. Done well, the result is that we hardly notice any change. “Of course I wanted to buy that item”, we say to ourselves. “Of course I never liked such-and-such a group of outsiders”. “Of course I wanted to spend time with this person.” And so on.
We already know the damage that networks of present-day bots can do to social cohesion. Imagine the damage when these bots become more savvy. Imagine also what terrorist groups could accomplish with weaponry operated by advanced AI.
Accordingly, the prospect of the emergence of superintelligence – a step-up in capabilities of artificial intelligence networks – raises deep concerns. Futurist communities, understandably, are trying hard to anticipate the form such a superintelligence might take, and to find ways to avoid any such entity being deployed for dangerous purposes.
But what if the forthcoming global superintelligence that we should take steps to anticipate and steer is no mere collection of advanced AI algorithms, but is instead an emergent combination of out-of-control corporations, fundamentalist organisations, ambitious humans with drugged-up super-clocked cognition, IoT sensors everywhere, and, yes, advanced AI algorithms?
In that case, what we have to consider is a lot more than possible constraints on the AI itself. Just looking at a narrow definition of “AI safety” could leave us seriously blind-sided.
Even if such an emergent combination can be steered for beneficial outcomes, many other questions arise. For example, what kind of consciousness might be ascribed to the entirety of that system – or to individual parts of it?
Given that many aspects of consciousness remain mysterious, we should be careful about ruling out too many possibilities in advance.
We might find that any entity that displays superintelligence will also display superconsciousness. The inner experiences of such a system may lie beyond our present comprehension. However, if – as writers such as Ray Kurzweil predict – the destiny of humanity is to merge with technological superintelligence, we may find ourselves sharing in those same exalted new inner experiences.
What would that mean? Is that a future we should welcome, or shun? Or is it a mirage, unworthy of serious consideration?
These are some of the considerations that may well come to the fore this Saturday, in the London Futurists webinar discussion featuring Professor Susan Schneider, the director of the recently inaugurated Center for the Future Mind.
The title that Professor Schneider has given for this event is “The Global Brain Argument”. If you join the Zoom audience, you’ll be able to help shape that conversation, by the comments you place in the text chat window, and the questions you raise (or vote for) in the Q&A window.
For more details of Saturday’s event, and to register to attend, click here.
2.) The Singularity Principles
I’ve been continuing to record videos on the theme of superintelligence, and the prospects for what has been called “The Technological Singularity” – an overwhelming disruption of the human condition by that superintelligence.
The latest addition to the set of videos is this one, “The Singularity Principles”:
These principles are a set of recommendations that are intended to increase the likelihood that oncoming disruptive technological changes, including artificial superintelligence, will have outcomes that are profoundly positive for humanity, rather than deeply detrimental.
These principles draw together insights from my time inside the technology sector, as well as what I have distilled from numerous London Futurists events and discussions over the years.
In case you’d prefer a gentler introduction to that topic, this video provides some context. You might choose to watch it first:
Or you might want to step back further. Before taking a look at the Singularity Principles, there’s the general question of why the Singularity needs to be taken seriously. In that case, I suggest you look at “The Singularitarian Stance”:
If anyone wants to discuss any aspects of these videos, or the broader Vital Syllabus project, let me point you at the London Futurists Slack application, where you’ll find an “Education” channel.
If you need help to access that Slack, here are the instructions.
3.) Anticipating new kinds of democracy
Technology seems to be on a trajectory to change… everything… except, perhaps, democracy.
Earlier today, my inbox received a note from veteran journalist Gavin Esler:
During my years as a journalist, I have witnessed scandals arising from a politician’s incompetence, immorality or corruption. Predictably, these events generated a good deal of noise in the media and often vicious reactions from the public directed at individual politicians and their failings. But none prompted much serious debate about the constitutional arrangements that underpin our political system.
Why, when our lives are being so transformed by the digital revolution, the green energy revolution, the genetic revolution, and others, are we so reluctant even to consider revolutionising our democracy? Is a political system created in the era of the horse and cart truly fit for purpose for an advanced democracy in the information age?
The note went on to draw my attention to an event that Esler is hosting from 6pm UK time on Wednesday 15th September:
Wednesday 15 September is United Nations International Day of Democracy. To mark the occasion, I will be hosting Open Britain’s Democracy Day Debate 2021 with a panel of experts, all of whom are, one way or another, fighting to protect our democracy…
Open Britain believes we need something akin to a democratic revolution if we are to make our system work for everyone. In this Democracy Day Debate, we will highlight where UK democracy is weak and failing, what we need to do to protect it, and how we might renew it to meet the needs and expectations of a 21st century electorate.
Given my own tentative interest in the London mayoral elections scheduled for 2024, I thought I should pay closer attention.
I have RSVP’ed to attend (virtually) the Democracy Day Debate, and you may wish to consider doing so too. Here’s a link to the event page.
One thing seems clear. If our political system remains in its present, frequently dysfunctional state, society will find it harder to respond constructively to other challenges – challenges such as people with new kinds of minds, or the potential emergence of various kinds of artificial superintelligence.
// David W. Wood
Chair, London Futurists