It’s becoming clear to many members of the public that various AI models released in the last few months have striking new capabilities.
I’ve seen comments on social media from people finding that, at times, at least half of their feed consists of reactions to ChatGPT, Stable Diffusion, Galactica, Cicero, Lensa, DALL-E 2, and other new AI releases. These reactions are frequently stunned.
So it’s no surprise that there’s a surge of interest in the implications of this apparent acceleration in the capabilities of AI. People who were formerly sceptical, or simply uninterested, are suddenly eager to understand the potential upsides and downsides of what they now appreciate is likely to be an ongoing flow of new AI applications.
It’s time, therefore, for us futurists to step up to the demand – offering our own best insights, but also being honest about the shortcomings of our present understanding.
Just as the most impressive of the new AI tools involve combinations of different architectures and algorithms, the most useful of the conversations about the future will involve combinations of different perspectives, skills, and experiences.
That’s what London Futurists seeks to enable. For some concrete examples – of meetings arranged in some haste in response to some rapidly changing ideas – please read on.
1.) AGI and the future of ethics – TODAY, Saturday 17th Dec
The latest AI chatbots can create some pretty impressive answers to philosophical questions.
One academic recently evaluated some essays produced by ChatGPT as being B+ grade for undergraduate students. In other words, as being competent, well structured, and mainly accurate, if lacking in the kind of creative originality that would merit an A grade.
Over time (perhaps with GPT-4…) the essays are likely to become even more striking. It’s possible that the answers given by some chatbots will lead human experts in turn to consider aspects of their own academic field differently. In this sense, chatbots will be pushing back academic frontiers.
But let’s look further ahead. Imagine that AIs are not only generating their own answers, but are increasingly controlling actions in the world. We humans are relying on them to direct traffic flow, to operate farming machinery, to determine the best use of factory equipment on a day-to-day basis, to deal with dangerous violations of fire and safety regulations, to keep an eye on military equipment, and so on.
Imagine that such AIs reach their own conclusions about which qualities are most admirable, which are most distasteful, and what should be done to increase admirable qualities and take the sting out of distasteful ones.
Imagine, in other words, AIs with intelligence exceeding that of humans, acting on their own conclusions to remake aspects of the world.
If asked what it was doing, such an AI might reply: making the universe better.
But will the actions taken by such an AI resonate with existing human moral sentiments? Will we be inspired by what we see these AIs doing? Or will we be horrified?
And most important: to what extent can we humans influence the actions of such AIs by the design and configuration choices we make over the next few months and years?
Someone who has thought a great deal about these questions is the lead speaker at the London Futurists webinar happening at 4pm UK time today (Saturday 17th December), namely Dan Faggella. Dan is the founder and CEO of Emerj Artificial Intelligence Research. You can find many of his online essays here. His views are distinctive and thought-provoking.
Dan and I will be joined on the panel at this webinar by two other people with key insights on the possible future trajectories of AI.
Note that as a “Christmas season special”, on this occasion there will be no charge to attend the Zoom webinar.
2.) AIs, deep fakes, and possible defences – Tuesday 20th Dec
London Futurists will be trying something different on Tuesday, between 7pm and 8pm UK time.
With the help of Matt O’Neill, we’ll be hosting an audio discussion on LinkedIn.
The starting point for this discussion is the recognition that modern AIs can produce some very impressive fakes. Unwary viewers of videos may believe they are watching something a specific politician actually said, when in fact the politician said no such thing.
This is part of a bigger problem: various forces are all too ready to mislead us, using media specially crafted for that purpose. Sometimes their intent is commercial or financial. Sometimes it is political. Sometimes it is to sow distrust, chaos, confusion, and partisan enmity.
We might be encouraged by the fact that some tools have been released that can study videos, images, audio tracks, text messages, and so on, to determine whether they are trustworthy or instead deceptive.
However, we’re in the midst of a volatile arms race. An authenticity tool that spots November’s fake news may itself be deceived by cleverer fake news in December.
In this situation, what are our options?
To hear various suggestions, and to offer some of your own, take part in this audio event. The details are here. You’ll find that you’ll need a LinkedIn account.
Once you join the event, you’ll initially be in listening mode. In that mode you can type text into the comments page of the LinkedIn event, express a number of reactions graphically, or raise an electronic hand to express a desire to speak.
As I said, this is something of an experiment. Let’s work out together how to make the most of it!
Once again, there’s no charge to take part.
3.) Seventeen not out
Calum Chace and I continue to release one new episode of the London Futurists Podcast each week – generally first thing on Wednesday morning.
At the time of writing, there are now 17 episodes available for download. Here are direct links to the most recent ones:
- Ep 17: Introducing Decision Intelligence, with Steven Coates (14 Dec 2022)
- Ep 16: Developing responsible AI, with Ray Eitel-Porter (7 Dec 2022)
- Ep 15: Anticipating Longevity Escape Velocity, with Aubrey de Grey (30 Nov 2022)
- Ep 14: Expanding humanity’s moral circle, with Jacy Reese Anthis (23 Nov 2022)
- Ep 13: Hacking the simulation, with Roman Yampolskiy (16 Nov 2022)
- Ep 12: Pioneering AI drug development, with Alex Zhavoronkov (9 Nov 2022)
- Ep 11: The Singularity Principles (2 Nov 2022)
- Ep 10: Collapsing AGI timelines, with Ross Nordby (26 Oct 2022)
When Calum and I first discussed, in summer, the idea of creating a series of podcast episodes, I wasn’t quite sure what to expect. I can now say that the experience has significantly exceeded my expectations. Each new episode highlights important insights and information from the frontlines of “exponential impact”.
We have a number of other episodes already recorded and many more in our pipeline. When I look at that pipeline, I sometimes feel daunted, but I also feel privileged to be able to interact with, and learn from, so many people who are actively improving the future.
// David W. Wood
Chair, London Futurists