Daniel Eth (yes, Eth is my actual last name)
@daniel-eth.bsky.social
AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford
Good collaborative piece by the authors of AI 2027 and the authors of AI as Normal Technology on areas of shared agreement

https://asteriskmag.substack.com/p/common-ground-between-ai-2027-and
November 15, 2025 at 11:12 PM
If you jog in these sneakers it’s called a training run
November 15, 2025 at 7:15 PM
This whole Andreessen thing is a good reminder that you shouldn’t confuse vice with competence. Just because the guy is rude & subversive does not mean that he has intelligent things to say
November 12, 2025 at 7:17 PM
Feels like the dam has broken on people in the tech community airing grievances with Andreessen. Honestly makes me feel better about the direction of the tech community writ large
November 12, 2025 at 4:09 AM
You might think that while Marc Andreessen is a jerk, at least he's been far ahead of the curve on AI. If you think that, though, you'd be wrong – Andreessen was pretty slow to realize what was happening with AI, and many of his predictions on the topic since have been way off
November 10, 2025 at 11:39 AM
POV: You’re Marc Andreessen
November 9, 2025 at 9:33 PM
Hot take but it was only a matter of time before Andreessen would go from hating on EAs to hating on Catholics. You can’t mock the idea of *trying to be a good person* without getting into fights with most value systems
November 9, 2025 at 7:59 PM
The pope: “you should probably be a good person”
Marc Andreessen: “this is an attack on me and everything I stand for”
November 9, 2025 at 6:11 AM
Andreessen really doubling down on mocking Catholics
November 9, 2025 at 4:11 AM
Andreessen is so dogmatically against working on decreasing risks from AI that he’s now mocking the pope for saying tech innovation “carries an ethical and spiritual weight” and that AI builders should “cultivate moral discernment as a fundamental part of their work”
November 9, 2025 at 2:38 AM
Recently, major AI industry players (incl. a16z, Meta, & OpenAI’s Greg Brockman) announced >$100M in spending on pro-AI super PACs. This is an attempt to copy a wildly successful strategy from the crypto industry, to intimidate politicians away from pursuing AI regulations.🧵
September 16, 2025 at 11:10 PM
Kinda feel like there were pretty similar steps in improvement for each of: GPT2 -> GPT3, GPT3 -> GPT4, and GPT4 -> GPT5. It’s just that most of the GPT4 -> GPT5 improvement was already realized by o3, and the step from there to GPT5 wasn’t that big
August 10, 2025 at 6:59 PM
GPT-5 didn’t live up to OpenAI’s hype, but it is *exactly* in line with extrapolations from prior AI advancements. Go ahead and discount future statements from OpenAI/Altman, but you should still expect the fast AI progress that we’ve been seeing to continue
August 9, 2025 at 4:16 AM
If you’ve been predicting AI was about to hit a wall for years… and then progress continues to be exponential… you can’t really claim a win here. Sure, people predicting it was super-exponential also can’t claim a win, but, uhhhh
August 8, 2025 at 8:18 PM
New SOTA results from Opus
August 6, 2025 at 9:01 PM
Kinda weird that the same people who argue the most mild of AI reporting or transparency requirements will “enable China to outcompete the US in AI” also argue restrictive GPU export controls will just drive China to invent highly efficient algorithms to compensate
August 5, 2025 at 8:59 PM
Agree with Garrison here. It’s probably the case that, if humanity sticks around for millennia, we’re ~destined to build AGI eventually. But there’s no reason humanity is sure to build it ~as soon as we can, and those saying so are trying to create a self-fulfilling prophecy
The biggest irony of this argument is that, if it's true, why bother making it? I think many of the inevitabilists are trying to create a self-fulfilling prophecy. E/accs say acceleration is an inevitable consequence of thermodynamics, but they also are terrified of AI regs 🤔
August 5, 2025 at 1:02 AM
How much of industry opposition to the Chip Security Act is b/c reqs are “burdensome” to implement… and how much is that semiconductor companies want to be able to maintain plausible deniability about where their chips end up, so they can keep indirectly selling to China?
August 3, 2025 at 11:29 PM
Looks like from a mental health perspective, you want to make sure to do at least ~2hr/wk of light exercise (eg jogging) or ~1hr/wk of vigorous exercise, and there’s not much benefit to going beyond that.
August 3, 2025 at 5:41 AM
Coal baron arguing that instead of giving everyone UBI like he had previously promised, “what if we just gave them each a free steam engine?”
July 27, 2025 at 9:57 PM
Three-sentence horror story:
Your car swerves off the road and crashes. You wake up, floating on a cloud, and think, “Is this heaven?” You hear, “it’s not just heaven—it’s a personalized utopia!”
July 25, 2025 at 7:45 PM
Woah Zuckerberg has a $300M estate in Hawaii?! That’s like 1/3 the cost of an AI researcher!
July 25, 2025 at 2:29 AM
It’s been an entire 5 weeks - I think we need an update to this chart
May 26, 2025 at 8:28 PM
Surprised and pleased to see this - Dario (Anthropic CEO) hints that he’s against the proposed 10 year state-level AI regulation ban:
May 26, 2025 at 3:41 AM
Many claim AGI timelines are 2030 or bust, b/c compute scaling will hit production limits in ~2030. These limits exist, but I think the point is overstated. First, algorithmic efficiency improvements have been large & can allow for effective scale up. Second, “unhobblings” could take multiple years.
May 25, 2025 at 9:35 PM
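
A minimal illustrative sketch of the "effective compute" point in the post above. The growth rates and plateau year here are made-up placeholder assumptions (not figures from the post); the sketch just shows how ongoing algorithmic-efficiency gains can keep effective training compute growing even after physical compute scaling flattens around ~2030.

```python
# Hypothetical illustration: effective compute = physical compute x cumulative
# algorithmic-efficiency multiplier. All numbers below are assumptions chosen
# for illustration, not claims from the original post.

PHYSICAL_GROWTH = 4.0        # assumed yearly growth in physical training compute (pre-plateau)
ALGO_EFFICIENCY_GROWTH = 3.0 # assumed yearly algorithmic-efficiency improvement
PLATEAU_YEAR = 2030          # assumed year physical scaling hits production limits

physical = 1.0  # physical compute, relative to 2024
algo = 1.0      # cumulative algorithmic-efficiency multiplier, relative to 2024

for year in range(2025, 2034):
    if year < PLATEAU_YEAR:
        physical *= PHYSICAL_GROWTH  # physical compute stops growing at the plateau
    algo *= ALGO_EFFICIENCY_GROWTH   # efficiency gains continue past the plateau
    effective = physical * algo
    print(f"{year}: physical={physical:,.0f}x, algo={algo:,.0f}x, effective={effective:,.0f}x")
```

Under these assumed rates, effective compute keeps climbing by ~3x per year after 2030 even though physical compute is flat, which is the sense in which the "2030 or bust" framing may be overstated.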