David Wood
@dw-2.bsky.social

Chair, London Futurists. Executive Director of LEV Foundation. Author or Lead Editor of 12 books about the future. PDA/smartphone pioneer. Symbian co-founder

Pinned
Hello Bluesky!
Anyone looking for mind-expanding future-focused content should find plenty to savour in londonfuturists.buzzsprout.com

My Christmas Eve message, 2025:
Use it or lose it!
Use our freedom to rise above distractions, whilst there's still a chance to alter outcomes.
So, let's make wise choices in 2026 - lest our freedoms shrink, and meaningful choices are no longer open to us from 2027 onward.

And here's the coda at the end of the coda - with the image originally included in dw2blog.com/2025/04/06/c...

The article finishes with a list of "leverage that can be harnessed now" (with links in each case).

The conclusion: "Actively harness acceleration, rather than being its slave";
Solutions to the AI Control Problem and the Oligarch Control Problem will be far easier to find well before the associated singularity;
Once the singularity arrives, that leverage is gone

Section headings include:
The Economic Singularity – Loss of Human Economic Power;
The Technological Singularity – Loss of Human Decision Power;
Let’s trust the oligarchs?!
Let’s trust the ASIs?!
Avoid ASIs having biological motives?!
Preach morality at the oligarchs?!

Although AI can enable a world of exceptional abundance, humanity nevertheless faces catastrophic risks – not only from misaligned superintelligence, but from the small number of humans who will control near-AGI systems. That's the subject of my latest blogpost dw2blog.com/2025/12/23/t...
The Oligarch Control Problem
Not yet an essay, but a set of bullet points, highlighting an ominous comparison. Summary: Although AI can enable a world of exceptional abundance, humanity nevertheless faces catastrophic risks – …
dw2blog.com

The “Oligarch Control Problem” arguably deserves as much attention as the traditional AI Control Problem

Also covered: differences in the development environment between China and the US; China's recent initiative to establish WAICO (the World Artificial Intelligence Cooperation Organization); the growing threats of AI-powered cyber attacks; and the global role that the UK can play

In this episode, Kayla reflects on her background as a diplomat, her decision to pivot back to academia, her vision for founding OCPL (the Oxford China Policy Lab), and the successes of the Lab so far. Listen to the episode at londonfuturists.buzzsprout.com/2028982/epis...
The puzzle pieces that can defuse the US-China AI race dynamic, with Kayla Blomquist - London Futurists
Almost every serious discussion about options to constrain the development of advanced AI results in someone raising the question: “But what about China?” The worry behind this question is that slowin...
londonfuturists.buzzsprout.com

The AI race dynamic between the US, China, and the rest of the world defies any simple characterisation. Defusing the risks of this race requires assembling a variety of different puzzle pieces. Hear from Kayla Blomquist of the Oxford China Policy Lab in the latest London Futurists Podcast.

AI safety and ethics demand global governance.
As 2025 draws to an end, check out
@GAIGAnow.bsky.social
Join us at the Global AI Governance Alliance (GAIGANow) in building an Alliance advancing effective, accountable, and inclusive global #AIgovernance. Let's steer #AI progress toward benefiting humanity & supporting a more just world 💻🌐

Learn more: gaiganow.org

Persuasive - "Is almost everyone wrong about America’s AI power problem? Why power is less of a bottleneck than you think" open.substack.com/pub/epochai/...
Is almost everyone wrong about America’s AI power problem?
Why power is less of a bottleneck than you think.
open.substack.com

For more details - covering the critical importance of RMR (Robust Mouse Rejuvenation), how RMR2 will differ from RMR1, six different LEVF headlines from 2025, and how donations and support can be magnified - see www.levf.org/december-202...
December 2025 Update — LEV Foundation
www.levf.org

From my LEVF 2025 end-of-year summary: Progress, but not happening quickly enough. Our current and past supporters have provided very generous contributions, but on our current trajectory, the comprehensive cure and prevention of age-related disease remains too far in the future

Find the episode here - and discover what's meant by the counterintuitive but transformative idea of "the zero billion dollar market" londonfuturists.buzzsprout.com/2028982/epis...
Jensen Huang and the zero billion dollar market, with Stephen Witt - London Futurists
Our guest in this episode is Stephen Witt, an American journalist and author who writes about the people driving the technological revolutions. He is a regular contributor to The New Yorker, and is fa...
londonfuturists.buzzsprout.com

To me, it's no surprise that the book won the 2025 Financial Times and Schroders Business Book of the Year Award. Ahead of recording this podcast episode, I thought I should listen to a few of its chapters. 36 hours later, I had finished the entire book - it was so interesting!

The new book by journalist Stephen Witt, "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip", is full of absorbing insights. For a brief taste, check out the latest London Futurists Podcast episode, where the author shares some of his thinking

PS: For some corrections to the fictionalised aspects of the film, see this thoughtful review by Michael Shermer www.skeptic.com/article/nure...
Nuremberg: The Film, The Trial, The Verdict
Michael Shermer reviews “Nuremberg” (2025), dir. James Vanderbilt. Starring Russell Crowe, Rami Malek, Michael Shannon, and Leo Woodall.
www.skeptic.com

Because they deal with huge themes, some films deserve to be seen on a huge screen. Nuremberg covers events in 1946 but addresses topics that will be highly relevant whenever the world attempts to repair the damage done by the current disastrous leadership in both Russia and the USA

Reposted by David Wood

One of the most important research directions for humanity's future is being pursued by LEVF – the Longevity Escape Velocity Foundation – whose Executive Director, David Wood @dw-2.bsky.social, has written an End-of-Year Update, republished on our website: transhumanist-party.org/2025/12/10/l....
LEV Foundation 2025 End-of-Year Update – U.S. Transhumanist Party – Official Website
transhumanist-party.org

Live streaming from UK's Westminster Hall: A parliamentary debate on AI Safety. Expected to include attention to questions of existential safety. From 2:30pm today UK time www.parliamentlive.tv/Event/Index/...
Parliamentlive.tv
Westminster Hall
www.parliamentlive.tv

The momentum is growing: "Scores of UK parliamentarians join call to regulate most powerful AI systems; Campaign urges PM to show independence from US and push to rein in development of superintelligence" www.theguardian.com/technology/2...
Scores of UK parliamentarians join call to regulate most powerful AI systems
Exclusive: Campaign urges PM to show independence from US and push to rein in development of superintelligence
www.theguardian.com

Scenarios for 2030 - the latest newsletter from London Futurists:
1) Beyond foresight-as-usual (15th Dec);
2) Global Catastrophic Risks Report 2026;
3) Statement on Superintelligence passes 128k signatures;
4) What’s your p(Pause)?
londonfuturists.com/2025/12/06/s...

Tristan Harris is one of the world's best communicators about the huge societal risks of hurried adoption of AI. This 2-hour-22-minute interview features zinger after zinger of "OMG" phrases. His humanitarian impulses shine strongly. I thoroughly recommend it www.youtube.com/watch?v=BFU1...
AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris
YouTube video by The Diary Of A CEO
www.youtube.com

Given the winner-take-all stakes, we cannot expect AI companies to become more safety-minded and more collaborative as their products come closer to AGI. (That expectation would be another example of the dangerous claim that greater intelligence automatically leads to greater collaboration.)

*) Why many in the overall AI safety community seem to carry personal motivational conflicts;
*) Why the community was unnecessarily slow to move from researching technical safety to adopting public advocacy;
*) Lessons from the venerable book "Crossing the Chasm".

Points covered in the conversation include:
*) Holly's personal journey from studying wild animal welfare to studying the behaviour of wild AI development companies;
*) Special complications of possible digital sentience;
*) Why the Pause campaign makes sense even if your p(Pause) is as low as 20%;