Our make-or-break century

Dear Futurists,

“Staying as we are” isn’t a viable option for us humans in the remainder of this century. Nor is there an option to somehow revert to an imagined earlier, simpler form of human civilisation. Fast-changing technology and global industrialisation are raising challenges that are almost too difficult for us to handle.

The solutions to these challenges will require us to become better humans – to become clearer thinkers, less governed by neolithic emotions, more open to counterintuitive changes in our social structures, and more capable of absorbing and managing the full benefits of emerging technologies. These transformations are no mere cosmetic tweaks. They will reconfigure many aspects of what it means to be human. Indeed, to survive, we’ll need to become transhuman.

That’s an argument that comes through loud and clear in the forthcoming book Future Superhuman by Elise Bohan, Senior Research Fellow at Oxford University’s Future of Humanity Institute. The book goes on sale on 1st May, but I was fortunate enough to receive an advance copy.

Elise has kindly agreed to present some of the ideas from her book at the London Futurists webinar this Saturday, 23rd April. To find out more about this webinar, as well as other current London Futurists projects, please read on.

1.) Future Superhuman

Elise readily acknowledges that many readers may find aspects of her “future superhuman” message to be shocking. But she takes care to guide readers to see things from her point of view. She gives plenty of personal examples. She writes in a way that I found to be engaging, provocative, well-informed, and refreshingly different from many of the other books that address transhumanist and/or technoprogressive themes. I enjoyed her book all the way through – both “Part 1: How to think like a transhumanist” and “Part 2: A species in transition”.

For the most part, I found myself nodding in agreement with the points made in the book, and marvelling at the turn of phrase and the way the arguments were put together.

A couple of chapters toward the end of the book made me pause. They raised points I wasn’t particularly expecting, and challenged my own thinking. It’s likely that our event on Saturday will spend at least some time on each of these two issues: “A generation of kidults” and “The end of having babies”. That said, the topics that feature in the discussion will (as always for our events) depend on the questions and opinions raised in real time by audience members participating via Zoom.

Click here for more details about this event and to register to attend.

2.) Toward a minimum viable product

Never mind our “make-or-break” century. In my view, it’s the next couple of decades that will probably determine whether we make it, or whether we end up broken.

And key to the outcome will be whether enough people

  • Gain a sufficient understanding of the enormous issues facing us
  • Acquire and manifest a rich set of skills.

It is the purpose of the London Futurists Vital Syllabus project to comprehensively support both of these tasks.

Over the last few weeks, the number of videos selected for inclusion in the Vital Syllabus webpages has increased steadily. The following picture gives the latest breakdown (click it to see an enlarged version).

An open online meeting on Tuesday 12th April featured an initial short (9-minute) project update presentation from me.

That was followed by a very useful set of comments from attendees. The discussion shaped my intention to launch what can be called a “minimum viable product” version of the Syllabus, probably some time in June or July:

  • Hosted on its own dedicated website
  • Featuring just one of the 24 top-level areas, namely “Singularity”
  • Supported by a user registration system, with users encouraged to give ratings for the different videos they view
  • Also supported by improved navigational aids and short descriptions of each video.

There is more information about the progress of the project on the #education channel of the London Futurists Slack. That’s also the place where you can give feedback about any of the videos that have already been selected into the project, and/or nominate other material for inclusion. All feedback will be very welcome!

3.) Two angles on the AI control problem

How might we control AI, before AI controls us?

That’s the title of our webinar on Saturday 30th April, featuring Tony Czarnecki, Managing Partner at the Sustensis think tank.

Tony believes that the issues arising from the rise of artificial superintelligence are more urgent than is generally assumed, and that we need to be truly open-minded to perceive the best solutions. He says that “There is no perfect method of controlling AI. However, there may be one approach which gives us sufficient control… but at a price”.

To find out more about Tony’s analysis, and to have the opportunity to offer your ideas in response as part of the group discussion, please register to attend this webinar.

A different line of thinking about the AI control problem is that the need for control can be avoided, provided AI is created with goals that are sufficiently aligned with human wellbeing.

These ideas will feature in a different London Futurists webinar, happening on Saturday 14th May. This time, the speaker is Stuart Armstrong, co-founder and Chief Research Officer of the recently launched Aligned AI initiative.

The Aligned AI website states the following:

Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.

Aligned AI is a benefit corporation dedicated to solving the alignment problem – for all types of algorithms and AIs, from simple recommender systems to hypothetical superintelligences. The fruits of this research will then be available to companies building AI, to ensure that their algorithms serve the best interests of their users and themselves, and do not cause them legal, reputational, or ethical problems.

In other words, building aligned AI could be the difference between “making” and “breaking” the future of humanity.

To decide whether you agree – and to find out how to support the Aligned AI project – click here to register for this webinar.

Note: both these events – 30th April and 14th May – are being co-hosted by Fast Future and the UK node of the Millennium Project, in addition to London Futurists.

// David W. Wood
Chair, London Futurists
