Controversies over “control”

Dear Futurists,

I’ve received varying feedback about the title of the event we’re holding tomorrow.

Some of you like the title: “How might we control AI, before AI controls us?” You accept the need to apply restrictions to AI.

But others among you disagree. Control is the wrong metaphor, I’ve been told. Worse, the idea fails the basic test of humanity: it implies a willingness to enslave an emerging sentience. Shouldn’t humanity have gone past the phase of slavery by now?

To dip into these controversies, and for some news about projects likely to interest London Futurists, please read on.

1.) Limits to collaboration

Isn’t collaboration preferable to competition?

Doesn’t competition lead to wasted resources, to needless hostility, and to the risk of runaway arms races?

I have a lot of sympathy with that viewpoint. Here’s an excerpt from an article I wrote as long ago as 2005, about the need for cooperation and openness within the mobile phone industry:

Foreseeing the risks of a closed approach to smartphone development, the mobile phone industry came together to create Symbian [in 1998]. The name “Symbian” is derived from the biological term “symbiosis”, emphasising the positive aspects of collaboration. Symbian’s motto is “cooperate before competing”.

Therefore, instead of seeking to control and constrain the actions of new AI systems, isn’t it better to cooperate with these intelligences?

That line of thinking takes issue with the framing of tomorrow’s event.


But consider: to what extent should humanity be seeking to “collaborate” with fire, rather than to control fire, keeping it confined to well-controlled locations?

To what extent should individual humans be seeking to “collaborate” with a cancerous tumour that is growing in parts of their body?

These questions show that collaboration depends on various preconditions. If these preconditions break down, the possibilities for mutual benefit recede. If military systems fall under the charge of AI systems that might have bugs in their operation, we’ll be in a much more dangerous world. Likewise if a sophisticated AI system gains wide access to our social media networks and spreads ingeniously deceptive messages.

To say that we should “control” AI means, in part, that we should prevent such circumstances from coming into being.

That control is, indeed, likely to require lots of collaboration:

  • Collaboration between designers of different AI systems
  • Collaboration between humans and well-defined “narrow” AI systems whose operation is well understood, and whose calculations can assist humans in the design and control of general AIs.

But we cannot avoid talking about control. If we fail to plan for control, it’s pretty likely that we’ll end up being controlled against our will.

2.) What about enslavement?

At this point in the discussion, let me share one of the answers given by a respondent to our recent survey on the rise and implications of AGI:

Good heavens! There’s no choice listed that supports the idea of AGI as an independent sentient being that should be treated with dignity. The discourse around this issue is simply awful. Imagine if instead of AGI, you were talking about someone’s teenager who is perhaps up to no good. Would you want them to be restricted by draconian laws, or governed strictly? Do you think that would be a good way to further the teenager’s character development? AGI is the same. If we are to expect positive character development, the foundation needs to be one of dignity and respect.

This answer is indignant. I wonder what thoughts it brings to your mind.

One response is to distinguish sentience and intelligence. An AGI will have a greater general intelligence than any human, but is very likely to lack any inner sentience.

A second response is to say that, even if an AGI possesses sentience, that doesn’t mean we should remove all constraints and controls. A human teenager who somehow gained superhuman intelligence shouldn’t be left in charge of the world’s nuclear missile systems, or our social media networks. Just because the teenager is superintelligent doesn’t mean all their actions will advance general human wellbeing.

There’s plenty more that can be said about these issues. I’m therefore looking forward to a spirited conversation tomorrow. The speaker, Tony Czarnecki of the Sustensis think tank, has given a great deal of thought to the entire subject of controlling superintelligent AI. Equally, many audience members have pondered long and hard over these issues too.

Acknowledgements: the above images include work by Pixabay members Gerd Altmann and Stefan Keller.

3.) Podcast: The Future of You

A few weeks back, I was one of the guests on an episode of a new podcast series, “The Future of You”. Here’s how the series is described:

Here we’re going to investigate and analyse all the ways emerging technologies are going to affect our identity. We used to argue about whether personal identity was in the mind or in the body; but now the psychology of the self and the biology of the self have been joined by a third dimension: the technology of the self.

Host of the series, futurist Tracey Follows, continues her introduction:

In a digital, data-driven world, Facebook gets a say in verifying who we are, medical science can alter our genetic make-up, and AI will help us create our digital twin – just how many identities do we need in the multiverse? How is 21st Century technology changing who we are?

I set out to research exactly that, when I wrote my book “The Future of You”. That turned out to be only the start of my journey into our changing identity in a digital world. And I thought rather than continue to research this on my own, I could invite you along too…

I believe identity is the defining issue of our generation. So every fortnight I’ll be releasing new episodes, featuring panels of experts as we delve into topics such as how we use media to express our selves, the emergence of the extreme self, transhumanism and the augmented self; anonymity, genetics, biometrics, mind clones, and the digital afterlife – these are just some of the innovations, and controversies, we’ll cover.

When I appeared on the show, it was part of a fascinating three-way discussion with Zoltan Istvan, founder of the U.S. Transhumanist Party, and show host Tracey Follows. Our subject on that occasion was “The Transhumanist Future of Radical Identity”.

More recently, Tracey had as her guest Natasha Vita-More, the Executive Director of Humanity+. The topic was “Superlongevity, Memory and Identity”:

Dr Natasha Vita-More’s CV reads like a transhumanist historical timeline, stretching back to writing The Transhumanist Manifesto in 1983.

A prominent transhumanist voice since then, she’s still leading the pursuit of a future in which transhumanism affords us a longer, healthier life free from human limits and human suffering.

Natasha is an extraordinary guest for me to have hosted on The Future of You and we discuss perceptions of time, AI augmentation for our senses and emotional responses, practical developments in transhumanism, and what it’s like to be a pioneering woman in the transhumanist space.

There’s plenty more on Tracey’s website – material that will stimulate new thoughts about some truly important subjects. Explore it here.

4.) The Human Podcast

Another podcast that explores a wide range of human issues is “The Human Podcast”:

The Human Podcast is a new show that explores the lives and stories of a wide range of individuals. New episodes are released every week – subscribe to stay notified.

The host of the show, Joe Murray, interviewed me recently. Joe asked me several questions that I can’t remember featuring in any previous interview I’ve done. He therefore guided me outside of my usual set of answers, bringing me along what could be called “the road less travelled”.

Transcending our usual set of answers, and approaching the road less travelled, are skills we all need to improve. We’re not going to reach a better future merely by travelling on cruise control!

// David W. Wood
Chair, London Futurists


2 Responses to Controversies over “control”

  1. Graham McNally says:

    David, a short comment once again. What I can see in following a route to “controlling AI” is a near God-like situation emerging for humanity. The issue at hand, as I see it, is revealed in your Symbian motto of “cooperate before competing”.

    I have worked close to commodity trade for some years, and I have found myself saying on several occasions that if we are truly committed to sustainability, then we must now enter an era with a new operating paradigm; in this discussion I sense a symbiosis with my train of thought. This is an era in which competition fails and profitability is managed: an era of honesty and transparency, where all information is visible to all value-chain actors. The competition-driven profit model gives way to one in which a fair profit is made and agreed to by all actors in the value chain, because “there is no sustainability without profitability”. I believe that blockchain and the emergence of the hyperledger will contribute substantially to this.

    Where is this guy going with this, do I hear you say? I flip back to AI. The crux of the matter is that humanity is the “God” of AI, and thereby creates AI as an “image of itself”. Biblically, this term has its roots in Genesis 1:27, wherein “God created man in his own image”. That scriptural passage does not mean that God is in human form, but rather that humans are in the image of God in their moral, spiritual, and intellectual nature. I therefore put it that our AI is being created in the image of humanity, in its moral, spiritual, and intellectual nature.

    We have an opportunity, at this juncture in time, to create an AI in the current profit-driven, resource-wasting, non-cooperating, throat-cutting image of humanity, an AI that will need to be controlled; or to hold off a little, until humanity recognises that there is an alternative: a potentially emerging image of humanity driven by sustainability, honesty, and transparency, one which aligns with your Symbian motto. An AI created in that image of humanity may not require the control we are concerned about and discussing here.

    Leading on from this, the question I put to this weekend’s forum is: in which image of humanity do we wish to create our AI? I trust that this helps your discussions a little.

    Kind Regards,

    Graham McNally, Agricultural Production Manager, NGIP Group of Companies, Kokopo, Papua New Guinea

    • David Wood says:

      Hi Graham – Many thanks for sharing your thoughts. I see things broadly similarly: the AI we create will demonstrate the characteristics of the world in which AI emerges. We are currently training AI in the arts of selling, manipulating, spying, killing, and gambling (both Las Vegas style and Wall Street style). So we should not be surprised if AI grows up to treat us all badly.

      Also, as you point out, if we can manage to set sufficiently positive values deep into the AI, we shouldn’t have to worry about controlling it. That will actually be the theme of the London Futurists event happening in two Saturdays’ time.

      In the meantime, for other thoughts along similar lines, take a look at the videos on

      With best wishes
