David Manheim
davidmanheim.alter.org.il
Humanity's future can be amazing - let's make sure it is.

Visiting lecturer at the Technion, founder https://alter.org.il, Superforecaster, Pardee RAND graduate.
It's nice to see that #AAAI2026 is avoiding any religious discrimination; it's on Friday, Saturday, and Sunday, so it's equally hostile to Muslims, Jews, and Christians.
November 6, 2025 at 9:37 AM
Reposted by David Manheim
If you're among the 2000+ authors citing research evaluated by Unjournal.org, go to unjournal.pubpub.org to learn how commissioned experts rated & assessed that research.

Top 'citers of unjournal-evaluated research': https://bit.ly/3WRHc8W include Esther Duflo, Julian Jamison, & Berk Özler
October 22, 2025 at 4:16 PM
I'm here at #AIES2025, and still worry quite a lot about this.

The deep skepticism about AI systems ever being generally capable, or even human-level in specific domains, doesn't seem to have changed over the past few years.
October 22, 2025 at 11:32 AM
Excited to be here today at #AAAI #AIES2025. Looking forward to meeting more people and discussing governance and societal impacts of AI.
October 20, 2025 at 6:51 AM
New RAND report on an important (and messy) question: When should we actually worry about AI being used to design a pathogen? What’s plausible now vs. near-term vs. later?
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.

Full report:
www.rand.org/pubs/researc...
October 5, 2025 at 3:45 PM
From the other site - not a complete explanation, but correct as far as it goes:

"the current configuration of economics/ wealth distribution is pretty solidly optimized to drive the wealthiest people in society batshit insane, which - to some extent - explains a lot of things you see around you"
September 7, 2025 at 9:29 AM
Reposted by David Manheim
🤖📰 Effective YESTERDAY: China has mandated a digital watermark for all AI-generated content.
www.cac.gov.cn/2025-03/14/c...
Translating in 🧵.
September 2, 2025 at 6:48 PM
Reposted by David Manheim
When a measure becomes a target, it ceases to be a good measure—Goodhart’s law seems clear enough.

But look closer and you see at least 4 variants, argue Manheim & Garrabrant: Regressional, Extremal, Causal and Adversarial, which offer deeper insight:

buff.ly/h9ORc76
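The regressional variant lends itself to a quick demonstration. A minimal sketch (the setup is illustrative, not from the paper): the true goal V is Gaussian, the proxy is M = V + noise, and selecting the top items by M systematically overstates their true value, because extreme proxy scores are partly luck.

```python
import random

random.seed(0)

# True value V and a noisy proxy M = V + noise.
population = []
for _ in range(100_000):
    v = random.gauss(0, 1)
    m = v + random.gauss(0, 1)
    population.append((m, v))

# Select the top 1% by the proxy score M.
population.sort(reverse=True)
top = population[:1000]

mean_proxy = sum(m for m, _ in top) / len(top)
mean_true = sum(v for _, v in top) / len(top)

print(f"mean proxy score of selected: {mean_proxy:.2f}")
print(f"mean true value of selected:  {mean_true:.2f}")
# The selected items are genuinely above average on V, but their true
# value regresses toward the mean: with equal signal and noise
# variance, roughly half of their apparent quality is noise.
```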
September 2, 2025 at 4:35 PM
We're clearly hitting a wall - because if AI progress is really exponential, why aren't the numbers going up faster?

Checkmate, AI industry!
September 2, 2025 at 9:39 AM
Short post on why chip production location, and policy to influence it, matters less for AI than people believe it will.
forum.effectivealtruism.org/posts/7GHbwi...
Chip Production Policy Won’t Matter as Much as You'd Think — EA Forum
If timelines are short, it’s too late, and if they are long (and if we don't all die), the way to win the "AI race" is to generate more benefit from AI, not control of chip production.
forum.effectivealtruism.org
August 31, 2025 at 7:02 PM
Yet another place where @garymarcus.bsky.social is right that LLMs don't have a correct world model.

(The filter exerts pressure on the water level. It doesn't understand that. But then, I'd bet most humans wouldn't realize this either.)
August 31, 2025 at 7:59 AM
Critics say the same thing about most current news reporting...
npr.org NPR @npr.org · Aug 28
Critics say that "slop" videos made with generative AI are often repetitive or useless. But they get millions of views — and platforms are grappling with what to do about them.
'AI slop' videos may be annoying, but they're racking up views — and ad money
n.pr
August 31, 2025 at 7:25 AM
Reposted by David Manheim
Are They Starting To Take Our Jobs?
thezvi.substack.com
August 27, 2025 at 6:51 PM
Not all technology is safe.

I wouldn't try to stop people from producing knives because they can be used for stabbings, but I would ask for mandated safety tests before cars can be sold, and rules about who can drive those cars, and law enforcement to enforce the laws.
August 28, 2025 at 6:40 AM
Reposted by David Manheim
LLM slop on arXiv is indeed a huge challenge for us. The trouble is that we don't think reviewers in conferences and journals are handling it well either, so we can't rely on them. We are trying many things to detect and remove slop.
August 28, 2025 at 4:28 AM
Reposted by David Manheim
Last year, the crypto industry set up a network of super PACs.

Its strategy? Throw huge amounts of money at politicians to discourage crypto regulation.

Now many of those same people have launched a $100m campaign to do the same thing for AI.

Read my full piece on Transformer:
AI embraces crypto’s dirty politics
A new super PAC network looks set to spend millions to influence AI regulation
www.transformernews.ai
August 27, 2025 at 8:58 AM
Reposted by David Manheim
text microblogging sites tend to punish behaviors people say they value (nuance, humility, vulnerability, accountability) and reward behaviors people say they dislike (nastiness, irony-poisoning, scatological humor, etc)
August 24, 2025 at 9:53 AM
New blog post, with Ram Rahum, "A Conservative Vision For AI Alignment," presenting our ideas for what an (intellectually, politically, and socially) conservative view of AI alignment could look like.
A Conservative Vision For AI Alignment — LessWrong
Current plans for AI alignment (examples) come from a narrow, implicitly filtered, and often (intellectually, politically, and socially) liberal stan…
www.lesswrong.com
August 21, 2025 at 6:26 PM
The damage to the rule of law caused by public figures not being called to account for obvious corruption is not easy to measure, but it is huge.

Poisoning important social norms isn't a reversible action.
August 21, 2025 at 5:39 PM
Reposted by David Manheim
How can you know? It's like saying there must be some smallest item in a set - but sometimes there isn't!
It might be that the earliest use of the term is infinitely divisible because there are infinite mentions in finite time, or the term goes back in time forever, I guess if there was no big bang?
I think it was actually, you just don't know.
August 20, 2025 at 11:08 AM
As long as academia allows and encourages narrow disciplines to be self-reinforcing and obscurantist, thus largely immune to outside scholarly input, I expect that the primary pushback against these narrow and often very ideologically-driven faculties will be funding cuts, not any academic reform.
I think part of the problem is that people are unhappy with narrow grievance-studies departments, which is the current state of much of the humanities, but the incentives in higher education all push toward the fragmented, bureaucracy-heavy structure that enables or even requires that kind of split.
A massive attack on several humanities units (Arabic Studies, Judaic Studies, Holocaust Studies, Classics, Religion, German & Scandinavian, Russian/East European/Eurasian Studies) *and* tenure now unfolding at the University of Oregon (a blue state!). Closure of units and faculty layoffs threatened.
August 19, 2025 at 7:34 AM
Reposted by David Manheim
Wow look at all these anti-Semitic Israelis wow
Thousands of Israelis stayed home from work, flooded city streets and blocked roads and highways across the country on Sunday, staging some of the largest anti-war protests in months as the military prepares for a major assault on Gaza City. wapo.st/4mvMVfK
August 17, 2025 at 4:50 PM
The success of superforecasters, compared to both the general public and domain experts, is mostly not about overperformance by a special group. The skills and practices are unusual, but neither especially hard nor intrinsically rare - and getting those basics right is a huge advantage.
August 14, 2025 at 5:27 PM
If done well, regulation and liability for current medical AI could reduce mistakes and mitigate ethical concerns.

But if your concern reliably leads to more people being dead because doctors aren't using new technology, you're doing ethics wrong!
August 14, 2025 at 10:00 AM
Reposted by David Manheim
The collapse of liberal Zionism needs to be studied.

As one of the main bulwarks against Israel's most horrific tendencies, the inability and refusal of so many in both America and Israel to stop Netanyahu's fantasies has helped this genocide to go unchecked.

Few will say it, but it's the reality.
August 13, 2025 at 6:54 PM