I’ve received varying feedback about the title of the event we’re holding tomorrow.
Some of you like the title: “How might we control AI, before AI controls us?” You accept the need to apply restrictions to AI.
But others among you disagree. Control is the wrong metaphor, I’ve been told. Worse, the idea fails the basic test of humanity: it implies a willingness to enslave an emerging sentience. Shouldn’t humanity have gone past the phase of slavery by now?
To dip into these controversies, and for some news about projects likely to interest London Futurists, please read on.
1.) Limits to collaboration
Isn’t collaboration preferable to competition?
Doesn’t competition lead to a waste of resources, to unnecessary mental hostility, and to risks of runaway arms races?
I have a lot of sympathy with that viewpoint. Here’s an excerpt from an article I wrote as long ago as 2005, about the need for cooperation and openness within the mobile phone industry:
Foreseeing the risks of a closed approach to smartphone development, the mobile phone industry came together to create Symbian [in 1998]. The name “Symbian” is derived from the biological term “symbiosis”, emphasising the positive aspects of collaboration. Symbian’s motto is “cooperate before competing”.
Therefore, instead of seeking to control and constrain the actions of new AI systems, isn’t it better to cooperate with these intelligences?
That line of thinking takes issue with the framing of tomorrow’s event:
(Click on the image – or this link – to find out more about tomorrow’s event, and to register to take part.)
But consider: to what extent should humanity be seeking to “collaborate” with fire, rather than to control it, confining it to well-managed locations?
To what extent should individual humans be seeking to “collaborate” with a cancerous tumour that is growing in parts of their body?
These questions show that collaboration depends on various preconditions. If these preconditions break down, the possibilities for mutual benefit recede. If military systems fall under the charge of AI systems that might have bugs in their operation, we’ll be in a much more dangerous world. Likewise if a sophisticated AI system gains wide access to our social media networks and spreads ingeniously deceptive messages.
To say that we should “control” AI means, in part, that we should prevent such circumstances from coming into being.
That control is, indeed, likely to require lots of collaboration:
- Collaboration between designers of different AI systems
- Collaboration between humans and well-defined “narrow” AI systems whose operation is well understood, and whose calculations can assist humans in the design and control of general AIs.
But we cannot avoid talking about control. If we fail to plan for control, it’s pretty likely that we’ll end up being controlled against our own will.
2.) What about enslavement?
At this point in the discussion, let me share one of the answers given by a respondent to our recent survey on the rise and implications of AGI:
Good heavens! There’s no choice listed that supports the idea of AGI as an independent sentient being that should be treated with dignity. The discourse around this issue is simply awful. Imagine if instead of AGI, you were talking about someone’s teenager who is perhaps up to no good. Would you want them to be restricted by draconian laws, or governed strictly? Do you think that would be a good way to further the teenager’s character development? AGI is the same. If we are to expect positive character development, the foundation needs to be one of dignity and respect.
This answer is indignant. I wonder what thoughts it brings to your mind.
One response is to distinguish sentience and intelligence. An AGI will have a greater general intelligence than any human, but is very likely to lack any inner sentience.
A second response is to say that, even if an AGI possesses sentience, that doesn’t mean we should remove all constraints and controls. A human teenager who somehow gained superhuman intelligence shouldn’t be left in charge of the world’s nuclear missile systems, or our social media networks. Just because the teenager is superintelligent doesn’t mean all their actions will advance general human wellbeing.
There’s plenty more that can be said about these issues. I’m therefore looking forward to a spirited conversation tomorrow. The speaker, Tony Czarnecki of the Sustensis think tank, has given a great deal of thought to the entire subject of controlling superintelligent AI. Equally, many audience members have pondered long and hard over these issues too.
3.) Podcast: The Future of You
A few weeks back, I was one of the guests on an episode of a new podcast series, “The Future of You”. Here’s how the series is described:
Here we’re going to investigate and analyse all the ways emerging technologies are going to affect our identity. We used to argue about whether personal identity was in the mind or in the body. But now the psychology of the self and the biology of the self have been joined by a third dimension – the technology of the self.
Host of the series, futurist Tracey Follows, continues her introduction:
In a digital, data-driven world, Facebook gets a say in verifying who we are, medical science can alter our genetic make-up, and AI will help us create our digital twin – just how many identities do we need in the multiverse? How is 21st Century technology changing who we are?
I set out to research exactly that, when I wrote my book “The Future of You”. That turned out to be only the start of my journey into our changing identity in a digital world. And I thought rather than continue to research this on my own, I could invite you along too…
I believe identity is the defining issue of our generation. So every fortnight I’ll be releasing new episodes, featuring panels of experts as we delve into topics such as how we use media to express our selves, the emergence of the extreme self, transhumanism and the augmented self; anonymity, genetics, biometrics, mind clones, and the digital afterlife – these are just some of the innovations, and controversies, we’ll cover.
When I appeared on the show, it was part of a fascinating three-way discussion with Zoltan Istvan, original founder of the U.S. Transhumanist Party, and show host Tracey Follows. Our subject on that occasion was “The Transhumanist Future of Radical Identity”.
Dr Natasha Vita-More’s CV reads like a transhumanist historical timeline, stretching back to writing The Transhumanist Manifesto in 1983.
A prominent transhumanist voice since then, she’s still leading the pursuit of a future in which transhumanism affords us a longer, healthier life free from human limits and human suffering.
Natasha is an extraordinary guest for me to have hosted on The Future of You and we discuss perceptions of time, AI augmentation for our senses and emotional responses, practical developments in transhumanism, and what it’s like to be a pioneering woman in the transhumanist space.
There’s plenty more on Tracey’s website – material that will stimulate new thoughts about some truly important subjects. Explore it here.
4.) The Human Podcast
Another podcast that explores a wide range of human issues is “The Human Podcast”:
The Human Podcast is a new show that explores the lives and stories of a wide range of individuals. New episodes are released every week – subscribe to stay notified.
The host of the show, Joe Murray, interviewed me recently. Joe asked me several questions that I can’t remember featuring in any previous interview I’ve done. In doing so, he guided me outside my usual set of answers, taking me along what could be called “the road less travelled”.
Transcending our usual set of answers, and venturing down the road less travelled, are skills we all need to develop. We’re not going to reach a better future merely by travelling on cruise control!
// David W. Wood
Chair, London Futurists