Daniel Eth (yes, Eth is my actual last name)
@daniel-eth.bsky.social
AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford
If you jog in these sneakers it’s called a training run
November 15, 2025 at 7:15 PM
POV: You’re Marc Andreessen
November 9, 2025 at 9:33 PM
Andreessen really doubling down on mocking Catholics
November 9, 2025 at 4:11 AM
Andreessen is so dogmatically against working on decreasing risks from AI that he’s now mocking the pope for saying tech innovation “carries an ethical and spiritual weight” and that AI builders should “cultivate moral discernment as a fundamental part of their work”
November 9, 2025 at 2:38 AM
GPT-5 didn’t live up to OpenAI’s hype, but it is *exactly* in line with extrapolations from prior AI advancements. Go ahead and discount future statements from OpenAI/Altman, but you should still expect the fast AI progress that we’ve been seeing to continue
August 9, 2025 at 4:16 AM
Incremental improvement along the same exponential trend (compare GPT-5 to o3):
August 8, 2025 at 8:53 PM
New SOTA results from Opus
August 6, 2025 at 9:01 PM
Looks like from a mental health perspective, you want to make sure to do at least ~2hr/wk of light exercise (eg jogging) or ~1hr/wk of vigorous exercise, and there’s not much benefit to going beyond that.
August 3, 2025 at 5:41 AM
Woah Zuckerberg has a $300M estate in Hawaii?! That’s like 1/3 the cost of an AI researcher!
July 25, 2025 at 2:29 AM
It’s been an entire 5 weeks - I think we need an update to this chart
May 26, 2025 at 8:28 PM
Surprised and pleased to see this - Dario (Anthropic CEO) hints that he’s against the proposed 10 year state-level AI regulation ban:
May 26, 2025 at 3:41 AM
On the first point - Epoch finds that in language models, pretraining algorithmic progress has been around half as impactful as compute scale up. Naively, if compute scale up stopped, progress would slow down by 3x. This is a decent amount, but not enough to say “2030 or bust”
May 25, 2025 at 9:36 PM
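The "3x" figure follows from simple arithmetic. A minimal back-of-envelope check, assuming Epoch's finding means algorithmic progress contributes about half as much to overall progress as compute scale-up does:

```python
# Normalize compute scale-up's contribution to 1 unit of progress.
compute_contribution = 1.0
# Epoch: pretraining algorithmic progress ~half as impactful as compute.
algo_contribution = 0.5 * compute_contribution

total_progress = compute_contribution + algo_contribution  # 1.5 units
progress_without_compute = algo_contribution               # 0.5 units

slowdown = total_progress / progress_without_compute
print(slowdown)  # → 3.0
```

So halting compute scale-up cuts progress to a third of its prior rate, under this (naive, additive) decomposition.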
Guy on the right is sad he can no longer use a computer, wishes he had spent more time working on his b2b SaaS startup when he was younger
May 24, 2025 at 11:41 PM
Every now and then I think of this quote from AI risk skeptic Yann LeCun
May 23, 2025 at 7:27 AM
I feel like this is an unreasonable expectation for the product
May 21, 2025 at 7:34 PM
Incredible prediction market
April 29, 2025 at 8:56 PM
One thing a lot of people missed from AI 2027: Yes, the pause was necessary for surviving. But it only worked because it was ~perfectly timed - after ~AGI but before ASI. If it had instead been earlier, America would have ceded its lead for nothing and we would have all died.
April 19, 2025 at 3:05 AM
Having looked into evidence about relevant dynamics, as well as the main arguments against a software intelligence explosion, we think this one could go either way.
March 27, 2025 at 11:24 PM
COUNTERARGUMENT 2: Progress will be slowed by the time required to train the next generation of AI systems. If each major advance requires a multi-month training process, then you can’t fit arbitrarily many advances within several months.
March 27, 2025 at 11:23 PM
COUNTERARGUMENT: Maybe fast software progress depends on rapidly growing hardware for running AI experiments? In this case, hardware could still be a bottleneck, preventing a software intelligence explosion.
March 27, 2025 at 11:23 PM
And if there is a software intelligence explosion, we could wind up with radically superhuman AI systems within months of ~fully automating AI R&D.
March 27, 2025 at 11:21 PM
OTOH, if the AI-improving-AI feedback loop overpowers diminishing returns, then AI progress continually accelerates, even if hardware is held constant. We dub this a “software intelligence explosion.”
March 27, 2025 at 11:21 PM
Diminishing returns to AI advancements may be steep – once the low-hanging fruit is picked, further improvements may be much harder. If diminishing returns are sufficiently steep, there won’t be an intelligence explosion.
March 27, 2025 at 11:20 PM
Once AI R&D is automated, will there be an intelligence explosion?

This is determined by two opposing forces:
1) Positive feedback loop of AI improving AI
2) Diminishing returns to AI R&D

(To be conservative, our analysis assumes that computing hardware is held fixed.)
March 27, 2025 at 11:19 PM
If AI R&D is fully automated, there will be a positive feedback loop: AI performs AI R&D -> AI progress -> better AI does AI R&D -> etc.

Empirical evidence suggests this feedback loop could cause an intelligence explosion despite diminishing returns.
March 27, 2025 at 11:15 PM
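The tug-of-war the thread describes — the AI-improving-AI feedback loop versus diminishing returns — can be caricatured with a toy growth model. This is a sketch only: the returns exponent `r`, the step rate, and the step count are illustrative assumptions, not the thread's actual analysis.

```python
# Toy model: each step, capability grows by an amount scaling as
# capability**r. If r > 1, the feedback loop overpowers diminishing
# returns and growth keeps accelerating ("software intelligence
# explosion"); if r < 1, diminishing returns win and growth tapers off.
def simulate(r, steps=20, capability=1.0, rate=0.1):
    history = [capability]
    for _ in range(steps):
        capability += rate * capability ** r
        history.append(capability)
    return history

explosive = simulate(r=1.2)  # feedback loop wins: growth rate rises
fizzling = simulate(r=0.5)   # diminishing returns win: growth rate falls
```

The per-step growth ratio is 1 + rate * capability**(r-1), so it rises over time when r > 1 and falls when r < 1 — a crisp way to see why the sign of "feedback loop minus diminishing returns" decides the outcome.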