AI needs philosophy

Dear Futurists,

To help us with AI, we need philosophy.

That was one of the conclusions that emerged, time and again, during discussions at TransVision 2024 in Utrecht just over two weeks ago.

To be clear, attendees did not expect the ideas from previous generations of philosophers (see Midjourney-imagined depiction above) to comprehensively solve all of the challenges thrown up in the 2020s by fast-evolving AI.

However, there was a wide recognition that people trained in philosophy could bring much needed insight to these conversations.

One of my roles at that conference was as a facilitator of focus-group conversations, on the final afternoon, about what people had heard from earlier sessions. Over the course of 90 minutes, five different groups formed around the table, and each had plenty to say.

I documented a brief summary of the points arising here.

As you can see, this is from the third slide in that summary:

The conversations kept reaching questions of philosophy:

  • “What should we want to want?”
  • “What is consciousness, and might it arise in AIs?”
  • “What is the meaning of meaning?”
  • “What is the meaning of autonomy / agency / free will, and might it arise in AIs – potentially making control or alignment harder?”
  • “How can we understand and evaluate our own biases?”

For some potential answers, read on.

1.) Anil Seth and Conscious AI – TODAY

Until relatively recently, I tended to try to sidestep conversations about AI becoming conscious.

These discussions weren’t productive, I thought, and they took attention away from the numerous scenarios in which AI can radically change humanity without AI becoming conscious.

However, I now expect 2024 to be the year in which conversation about AI consciousness surges.

That’s because I see some useful progress after all, from a number of researchers.

It’s also because questions of possible emergent behaviour of large AI systems are becoming more urgent.

Recent new LLMs (Large Language Models) have displayed features that their designers had not expected. Rather than these features being designed into the LLMs, they seem to have arisen from higher-level interactions of individual components.

The possibility needs to be considered: will the next generation of LLMs feature additional emergent properties, such as autonomy, agency, and free will? If so, will they prove harder to control and/or align?

Might they have an unexpected capacity to experience pain and terror? Or might they show an unexpected empathy toward all sentience, and change their behaviour accordingly?

Friends of London Futurists have created a new meetup – Conscious AI – where these questions will receive some careful consideration.

That meetup has its first gathering later today, online via Zoom, at 5pm UK time.

The speaker will be Anil Seth, Professor of Cognitive and Computational Neuroscience and Director of the Sussex Centre for Consciousness Science at the University of Sussex.

His talk will be entitled “Prospects and pitfalls for Conscious AI”. Here’s how Anil describes it:

As AI continues to develop, it is natural to ask whether AI systems can be not only intelligent, but also conscious. In this talk, I will first consider why some people think AI might develop consciousness, identifying some biases that might lead us astray. I’ll then ask what it would take for conscious AI to be a realistic prospect, pushing back against some standard assumptions such as ‘computational functionalism’. I will instead make the case for taking seriously the possibility that consciousness might depend on our nature as living organisms – a form of ‘biological naturalism’. I’ll end by exploring some ethical implications of AI that either actually is, or convincingly seems to be, conscious.

For more details, and to register to attend, click here.

2.) Robots and the people who love them

Another way in which ideas on artificial consciousness will have practical consequences is in people’s interactions with what are called “social robots”.

These are robots that can detect and display emotions, and adjust their interactions accordingly.

In principle, social robots can take roles such as nannies, friends, therapists, caregivers, and lovers.

They’re the subject of an important new book by Eve Herold, entitled “Robots and the People Who Love Them: Holding on to Our Humanity in an Age of Social Robots”.

Eve is the guest in the latest London Futurists Podcast episode. Click the image to listen to the episode on Buzzsprout, or access it from wherever you usually find your podcasts:

For the time being, it’s generally assumed that any human interaction with a social robot will involve only the impression of genuine emotional response from the robot. As Eve says in the podcast, human emotional projections toward the robot are going “into the void”.

But might this change if progress is made by the kind of research covered by the Conscious AI meetup?

As I said, I expect that question to arise with increasing frequency as 2024 proceeds.

3.) AI and the future of work and the future of the economy

On the subject of new books, another has just become available, covering some of the potentially sweeping economic consequences of AI. My name is on the cover, as the author of the opening chapter, which sets the scene for contributions by ten other writers with their own absorbing ideas:

The full title of the book is Future Visions: How to Survive and Thrive in the Upcoming Economic Singularity: A Variety of Perspectives.

Many thanks to the OmniFuturists for undertaking this timely project!

For a general intro to the ideas in the book, you will find this video recording useful, from a recent U.S. Transhumanist Party Virtual Enlightenment Salon.

(My presentation in this video runs from 10:15 to 34:55.)

4.) Sceptical questions about radical life extension

I know that many readers of this newsletter have various reservations about the ideas of radical life extension and, more poetically, “the death of death”.

If that topic interests you, here’s an episode of a TV show that was broadcast live in Texas on Sunday and is now available worldwide on YouTube.

The show is from the “Perspectives Matter” series. This episode features host Dennis McCuistion asking some excellent sceptical questions. José Cordeiro and I offer our responses!

If you liked that episode, watch out for Part Two of the same conversation, which will become available next Sunday.

// David W. Wood
Chair, London Futurists

About David Wood

Chair of London Futurists. Principal of Delta Wisdom
