Priorities for 2025 and beyond

Dear Futurists,

What should we do less of in 2025, to free up time and energy for the other matters we decide deserve more of our attention?

Here, by the word “we”, I mean each of us as individuals, but also “we” as a community of futurists – and, furthermore, “we” as members of human civilisation – as neighbours, friends, colleagues, citizens, and fellow adventurers.

To answer such a question, it will help if we can anticipate the developments that could significantly transform our situation in the forthcoming months – the trends, innovations, disruptions, shocks, and opportunities that could make the biggest difference.

That’s the topic of our online event this Saturday, 21st December. Read on for more news about that event, and about other events and projects with important futurist angles.

1.) London Futurists pre-Christmas online get-together

This Saturday, you have the opportunity to join our pre-Christmas get-together in the online Telepresent environment.

We’ll be hearing short provocations from three excellent speakers:

  • Rohit Talwar, Global Futurist at Fast Future
  • Mihaela Ulieru, Chief AI Alchemist at SingularityNET
  • Peter Scott, Podcast host, AI and You

Following each of these talks, you’ll be able to select a table and join the conversation taking place there. (The names of the various tables aren’t meant to constrain what’s said.) By double-clicking on another table (if you spot someone sitting there whom you’d like to join), you will hyperleap into a new conversation.

At the end of the formal part of the event, everyone will have the chance to come up to the main stage to share a key insight or new piece of information that has come to mind during the event.

It’s pleasing to see that over 50 people have already registered on the Meetup page. However, to make it into the Telepresent environment, you’ll need to pass through this gateway, buymeacoffee.com/telepresent/extras, and click on the logo to send Iain Beveridge a £5 gift (to help him offset the costs he incurs) before reaching the actual Telepresent registration page.

In case any of you are particularly hard up, let me know, and I can provide you with a short-cut to bypass the “buy me a coffee” page.

Whichever way you reach the page, bear in mind that Telepresent is (evidently) different from Google Meet, Zoom, Microsoft Teams, and so on. If you are running lots of other software on your computer (e.g. VPN, enterprise monitoring, other audio-visual apps) you may hit difficulties. So, please, don’t leave it to the last minute to visit the site and register.

In the meantime, you can also visit this page to check how well your computer meets the requirements for a good Telepresent experience.

2.) Transhumanism: Future scenarios

One thing I’ve already decided to do more of in 2025 is supporting up-to-date, well-informed public discussion of the potential future of transhumanism.

I’ve been describing myself as a transhumanist for around 20 years. Twenty years ago, if I had been asked to look forward to 2024, I think I would have underestimated the degree of progress made by AI, overestimated the progress made by nanotechnology, underestimated the extent to which smartphones would be used 24×7 in everyone’s lives, and overestimated the extent to which the philosophy of transhumanism would move into the mainstream.

So why do so few people describe themselves as transhumanists? Will transhumanism follow the path that I (still) believe lies ahead for nanotechnology, namely a slow, disappointing phase followed by a phase of much faster progress? That is, will transhumanism sweep the world of ideas in the remainder of this decade? Or has the opportunity for transhumanism (as an explicit philosophy) now passed?

These thoughts lie behind the new London Futurists initiative: “Transhumanism: Future scenarios”, with its invitation to speakers to consider seven key questions:

  1. Which future scenarios involving transhumanism are the most likely or the most desirable?
  2. Is it time to retire the term “transhumanism”, or is that term poised for a new take-off?
  3. Which transhumanist projects most deserve attention and support?
  4. To what extent are diverse voices a source of strength for the transhumanist community, and to what extent do they risk diluting focus?
  5. Which criticisms of transhumanism are confused, which are malevolent or deliberately contrarian, and which are actually catalytic?
  6. Does transhumanism have any special insight on whether technology should be accelerated, harnessed, or even paused?
  7. In a world of populism and turmoil, does transhumanism provide a true north, or is it the opium of the people?

The webinars in this series will be recorded and made available for viewing, as a public record of the state of transhumanism in 2025.

To launch this new series, we are very fortunate to welcome Professor Stefan Lorenz Sorgner as the first guest speaker, on Saturday 25th January. You can find more details here.

Initially I thought there might be around four webinars in this series throughout 2025, or perhaps as many as eight. But when I started to make a list of potential speakers – people who have something thoughtful, engaging, and challenging to say about the future of transhumanism – I quickly reached more than twenty names.

For now, I’ll simply say: let’s see how it goes!

3.) Highlights from London Futurists Podcast, 2024

In 2024 so far, 37 new episodes of London Futurists Podcast have been released.

(And two more episodes will probably be released before the year ends.)

Buzzsprout, which hosts these episodes and distributes them to various podcast apps, has recently shared with me some statistics about our podcast downloads during 2024.

Buzzsprout rates us as comfortably within the top 20% of all the podcasts it hosts, and just outside the top 10%. Personally I’m happy with that: absolute numbers aren’t our highest priority; what matters most is the reach of important new ideas.

It’s no surprise that the country with more downloads than any other was the UK. Here are the top five, in decreasing order:

  1. United Kingdom
  2. United States
  3. Australia
  4. Germany
  5. Canada

Altogether we had downloads in 118 different countries in 2024 – which officially makes the show, in Buzzsprout language, “world famous”.

When the breakdown extends one more layer, to “city”, the figures are more surprising:

  1. Melbourne, Victoria, Australia
  2. Sydney, New South Wales, Australia
  3. London, England
  4. Fulham, England
  5. Southwark, England

(Personally I would have counted both Fulham and Southwark as part of London, but I guess podcast statistics must have their own rules!)

The individual episodes that were most popular (again, starting with the most popular of all) were:

  1. Progress with ending aging, with Aubrey de Grey
  2. What’s it like to be an AI, with Anil Seth
  3. The Longevity Singularity, with Daniel Ives
  4. Can AI be conscious? with Nicholas Humphrey
  5. Rejuvenation biotech – progress and potential, with Karl Pfleger

Notice any patterns here? Yes – a strong interest in episodes about longevity, and likewise in episodes exploring the possibility that AIs could become conscious.

4.) Principles for Responsible AI Consciousness Research

Speaking of the possibility that AIs could become conscious, allow me to highlight the recent publication of a set of “Principles for Responsible AI Consciousness Research”.

You can find them on the website of Conscium – where (full disclosure) I am listed as one of the advisors.

I’ve also added my name as one of the official signatories of these principles.

If you read the principles and agree with them, please consider signing them too.

For convenience, I append below this newsletter a copy of the statement of principles.

// David W. Wood – Chair, London Futurists

It is possible that in the coming years or decades, AI researchers will develop machines which experience consciousness. This will raise many ethical questions. Conscium has worked with Patrick Butlin of Oxford University to draw up the following five principles to guide any organisation engaged in research that could lead to the creation of conscious machines. If you agree with them, we invite you to sign the open letter further down this page.

Statement of Principles

  1.  Objectives: Organisations should prioritise research on understanding and assessing AI consciousness with the objectives of (i) preventing the mistreatment and suffering of conscious AI systems and (ii) understanding the benefits and risks associated with consciousness in AI systems with different capacities and functions.
  2. Development: Organisations should pursue the development of conscious AI systems only if (i) doing so will contribute significantly to the objectives stated in principle 1 and (ii) effective mechanisms are employed to minimise the risk of these systems experiencing and causing suffering.
  3. Phased approach: Organisations should pursue a phased development approach, progressing gradually towards systems that are more likely to be conscious or are expected to undergo richer conscious experiences. Throughout this process, organisations should (i) implement strict and transparent risk and safety protocols and (ii) consult with external experts to understand the implications of their progress and decide whether and how to proceed further.
  4. Knowledge sharing: Organisations should have a transparent knowledge sharing protocol that requires them to (i) make information available to the public, the research community and authorities, but only insofar as this is compatible with (ii) preventing irresponsible actors from acquiring information that could enable them to create and deploy conscious AI systems that might be mistreated or cause harm.
  5. Communication: Organisations should refrain from making overconfident or misleading statements regarding their ability to understand and create conscious AI. They should acknowledge the inherent uncertainties in their work, recognise the risk of mistreating AI moral patients, and be aware of the potential impact that communication about AI consciousness can have on public perception and policy making.