DJ
eschaton-avoidance.bsky.social
I was coding backprop from scratch to learn OCaml before any of you knew what an NN was, @ me directly
December 5, 2025 at 1:36 AM
This got me on multiple "anti-AI" lists, which, lmao
December 5, 2025 at 1:35 AM
Do you only defend trans people in the abstract? bc you're a dick to everyone who's talked to you in the last 24 hours
December 2, 2025 at 1:36 AM
It me!
December 1, 2025 at 4:26 PM
Hebbian learning never produced good results relative to gradient descent, yet it's a better approximation of how organic neurons learn. We know we're missing something big in neuroscience, and nobody cares as much now that deep learning basically works
November 28, 2025 at 7:32 PM
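The Hebbian-vs-gradient-descent gap in this post can be made concrete with a toy single linear neuron. This is a minimal sketch of my own (the setup, the LMS rule standing in for gradient descent, and all the numbers are illustrative assumptions, not anything from the thread): the Hebbian update uses only locally available pre/post-synaptic activity, while the gradient step needs an error signal the neuron has no local access to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a single linear neuron should learn y = w_true . x.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true

def hebbian_step(w, x, lr):
    # Plain Hebb: delta_w is proportional to (pre activity) * (post activity).
    # Purely local -- the neuron never sees an error signal.
    return w + lr * (w @ x) * x

def lms_step(w, x, target, lr):
    # Gradient descent on squared error (the LMS rule): the update needs
    # (target - prediction), information that isn't locally available.
    return w + lr * (target - w @ x) * x

w_hebb = 0.1 * rng.normal(size=2)
w_grad = np.zeros(2)
for _ in range(5):  # a few passes over the data
    for x, t in zip(X, y):
        w_hebb = hebbian_step(w_hebb, x, lr=0.001)
        w_grad = lms_step(w_grad, x, t, lr=0.05)

print("LMS recovers w_true:", np.round(w_grad, 3))  # converges to ~[2, -1]
print("plain Hebb just amplifies activity:", np.round(w_hebb, 3))
```

The gradient learner nails the target weights; the Hebbian one only grows in whatever direction its input correlations push it, which is the locality problem the next post is talking about.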
That gradient descent is nonlocal and can't be implemented in organic neurons was a huge topic in NNs in the pre-deep-learning era; this is probably him remembering all of those conversations
November 28, 2025 at 7:31 PM
Given the path of EA over the last ten years, I've become much more skeptical that mass charitable action can be done in a religion-free way
November 27, 2025 at 7:02 PM
This works great for grow light schedules in my experience! Never tried watering lines tho
November 27, 2025 at 6:48 PM
Reposted by DJ
I am generally (inaccurately) considered a hypist on this site because I do applied AI/ML stuff as a scientist. But I 100% agree. I _wish_ all of the “ocean-boiling lie machine” energy was focused on morally obliterating image and video gen, which have few legit (and zero compelling) use cases.
November 26, 2025 at 5:43 PM
"If you reframe the comic it's dumb"

*Someone reframes the reframing*

"You can't do that, you're doing psychology fails!"
November 21, 2025 at 3:34 PM
The original comic is making the argument that bombings happen a lot even when Ds are in charge; you're also making a counterargument that doesn't engage directly with the original.

And then being like "I bet she isn't even conscious she's not engaging with the original point"? Tu Quoque!
November 21, 2025 at 3:31 PM
That's fair, I might be overly pessimistic about the slop feedback loop since a lot of people in theory are working to fight it. Just doesn't feel from the outside like there's been much success on that front ig.
November 19, 2025 at 4:38 PM
I'm pretty sure the near-term dynamic is google getting worse (slop + answers never appearing due to increased LLM share) while LLMs fail to pick up all of the slack. Does that ring false to you?
November 19, 2025 at 4:22 PM
You're right that academia could not train at GPT-3 scale, but that was just 5 years ago. 15 years ago (pre-AlexNet), it was not clear that neural networks were the future. 10 years ago, compute was not the bottleneck, architecture was (until transformers). Broadly you're right, but the timeline is a bit off
November 19, 2025 at 3:26 PM
Even just 15 years ago, prior to AlexNet, the major players in neural nets were mostly academic, not industry!
November 19, 2025 at 3:17 PM
I get what you're saying but I don't think it's true. The reason LLMs didn't arise sooner is that nobody in either academia or industry could get RNNs to work, despite what was at the time large public investment, including in public datasets
November 19, 2025 at 3:11 PM
the transformer was developed less than ten years ago and published publicly (with public pre-print iirc), in what world did who have a monopoly on the tech back then?
November 19, 2025 at 1:43 PM
Fodor coming in from behind as the least creepy syntax guy
November 19, 2025 at 6:00 AM
"say it ain't so" doesn't hold up to this art
November 19, 2025 at 5:58 AM
In their defense, the original timelines were 2027-2029, but they updated in response to critiques
November 16, 2025 at 9:13 PM
This was only in response to my (and others') critiques, they were originally 2027-2029 (www.lesswrong.com/posts/TpSFoq...)
AI 2027: What Superintelligence Looks Like — LessWrong
Comment by Peter Johnson - I took a look at the timeline model, and I unfortunately have to report some bad news... The model is almost entirely non-sensitive to what the current length of task an AI...
www.lesswrong.com
November 16, 2025 at 9:12 PM
Rouge the Bat as Biblical Job
Tags: Violence, Sexual Content, Trauma, Hurt/Comfort, Poly, Harm to Children, Praise Kink
November 15, 2025 at 10:35 PM