Dear Futurists,
In this newsletter, you’ll find some thoughts about dangerous beliefs: how they form, how they spread, how they may be opposed, and why they sometimes deserve our support, despite the opposition they face.
(The image, by DALL·E 2, is an AI artist’s impression of the spread of a dangerous belief.)
1.) Tomorrow: The Misinformation Age
Which beliefs, now recognised as false, have caused the greatest harm in recent decades?
Which beliefs, equally false, might cause even greater harm in the near future?
How do such beliefs spread, despite the harm that they cause?
And most important: what can be done to counter the impact of such beliefs?
That’s a discussion we’ll be picking up in our webinar tomorrow (Saturday): “How misinformation spreads”.
Our speaker, Cailin O’Connor, is a philosopher of biology and the behavioural sciences and an evolutionary game theorist. She is a Professor in the Department of Logic and Philosophy of Science, and a member of the Institute for Mathematical Behavioral Science, at UC Irvine.
In addition to co-authoring the book The Misinformation Age: How False Beliefs Spread, Cailin is the author of the 2019 monograph The Origins of Unfairness and the 2020 book Games in the Philosophy of Biology.
The discussion will look at some “unexpected social dynamics” that “are what’s essential to understanding the spread and persistence of false beliefs”.
Click here for more details about the event, and to register to attend.
2.) The world’s most dangerous idea?
(The illustration is “The world’s most dangerous idea” as imagined by AI artist Midjourney.)
The phrase “the world’s most dangerous idea” was used in an essay in 2004 by Francis Fukuyama, professor of political science. (That essay doesn’t exist online, as far as I can tell, but Fukuyama returned to the same theme in this 2009 article.)
Fukuyama applied that phrase to the philosophy of transhumanism, describing it as “a strange liberation movement” whose advocates “want nothing less than to liberate the human race from its biological constraints”.
The terrible danger of transhumanism, according to Fukuyama, is that “Modifying any one of our key [human] characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome…. We may unwittingly … deface humanity with … genetic bulldozers and psychotropic shopping malls”.
Similar objections could have been raised against, for example, painkillers, anaesthetics, meditation, and changes in diet. All these practices, likewise, seek to modify aspects of “a complex, interlinked package” of human characteristics. All of them could, indeed, have side-effects. But that would be no reason to entirely abandon these practices.
Speaking up against poor criticisms of transhumanism has been a recurring theme for the guest in the most recent episode of the London Futurists Podcast, Natasha Vita-More.
In what was a very personal conversation, Natasha spoke in general about “overcoming limitations”, and specifically about the responses she helped to coordinate against critics of transhumanism such as Fukuyama.
Click here to find the interview with Natasha, along with the (so far) 23 other episodes of the podcast. There’s usually a new episode every Wednesday morning.
3.) Real-life gathering: Our Superhuman Future
If you’re still unsure whether the ideas of transhumanism are dangerous and should be vigorously opposed, or whether they are instead emancipatory, you’ll have a chance on Thursday 16 February to watch three prominent UK-based transhumanist commentators in live discussion.
This event is taking place in the St Pancras Room of Kings Place, 90 York Way, London N1 9AG. It is part of the FUTURES Podcast series and is hosted by Luke Robert Mason – who is himself no stranger to the world of transhumanism.
Here’s how the event is described on the Kings Place website:
Transhumanists believe that the only way for humanity to survive in the future is to merge with advanced technology. Today’s rapid developments in gene-editing and artificial intelligence point to the potential for a ‘humanity 2.0’ to upload their minds, enhance their bodies, create robot companions and have babies outside the womb. Ideas that were previously considered science fiction are fast becoming a reality. When might we expect these developments, and what ethical issues will they create?
Join Elise Bohan (University of Oxford), Prof. Steve Fuller (University of Warwick), and Anders Sandberg (Future of Humanity Institute) to discover what these possibilities mean for the future of humankind.
Elise Bohan is a Senior Research Scholar at the University of Oxford’s Future of Humanity Institute (FHI). She holds a PhD in evolutionary macrohistory, wrote the world’s first book-length history of transhumanism as a doctoral student, and recently launched her debut book Future Superhuman: Our transhuman lives in a make-or-break century (NewSouth, 2022).
Prof. Steve Fuller is Auguste Comte Professor of Social Epistemology at the University of Warwick, UK. Originally trained in history and philosophy of science, he is the author of more than twenty books. From 2011 to 2014 he published three books with Palgrave on ‘Humanity 2.0’. His most recent book is Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Era (Schwabe Verlag, 2020).
Anders Sandberg is a Senior Research Fellow at the Future of Humanity Institute (FHI) at Oxford University, where his research focuses on the societal and ethical issues surrounding human enhancement and new technologies. He is also a research associate at the Oxford Uehiro Centre for Practical Ethics and the Oxford Centre for Neuroethics.
Not only is this event a good chance to learn about the practical issues raised by transhumanism. It’s also a chance for members and friends of London Futurists to gather socially.
I look forward to seeing many of you there!
4.) The Vital Syllabus: A request
I’ll close by returning to the question: What can be done to respond better to potentially dangerous ideas?
For example:
- How can we all become more immune to being led astray by faulty logic, deceptive praise, media distortions, social pressures, and other forms of artful manipulation?
- How can we gain the objectivity to distinguish profound insights from superficially similar statements that are, however, misleading?
That’s where the Vital Syllabus comes in. It aims to provide a wide range of tools and principles to combat disinformation.
I was recently encouraged to see this assessment of the Vital Syllabus by Michigan-based technology forecaster Richard Brandes:
This is an amazing collection of information, I highly recommend bookmarking and taking the time to study the principles within.
That leads me to make a request of anyone who has read this far into this newsletter:
- Please briefly look through one or more of the Vital Syllabus pages – you’ll find links in this Contents listing – to find at least one video that strikes you as potentially interesting;
- Decide whether you like the video, or instead find it weak or misleading;
- Provide that feedback, in a simple message, on the Vital Syllabus presence on (your choice!) Facebook, LinkedIn, or Discord.
As a general rule, if two or more people give the same video a thumbs down in such feedback, and no-one gives it a thumbs up, I’ll remove it from the Vital Syllabus pages.
Similarly, if two or more people suggest including a new video on any of the Vital Syllabus pages, and no-one opposes that suggestion, I’ll act on it.
With sufficient collaborative engagement, the Vital Syllabus project can move on to gain many more endorsements!
// David W. Wood
Chair, London Futurists