⚡️🌙
@dystopiabreaker.xyz
recovering cryptographer building ML models, doing systems work, security, etc.
claude loves telling me to go to sleep
February 12, 2026 at 11:05 PM
>company posts enormous growth and revenue numbers
>many investors eager to invest
>this is proof that the AI bubble is about to pop

fascinating things occur in the mind of mr. zitron
$30bn raised from a *telethon* of investors. This is insane. Never seen anything like it in my life. Anthropic is scraping every barrel dry, and guess what? This barely covers the $21 billion in orders it has with Broadcom, let alone the projected $12bn in training costs this year
February 12, 2026 at 9:37 PM
it's hard to think of concepts more broadly abused and misunderstood in recent years than gödel's theorems and searle's room
February 12, 2026 at 7:58 PM
yeah no i'll get right on that. let me just task my clawdbot. i mean my openclaw. they'll send beads to each other. and then the polecat in the gastown will pick it up. and then CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS will spin up on the gastown. and then the moltbook will communicate it
February 12, 2026 at 7:45 PM
all the best things in the python ecosystem in the last 5 years have come from rust devs being annoyed at it
February 12, 2026 at 7:40 PM
i will fix python
February 12, 2026 at 7:38 PM
not to be too mean but there is a real skill issue thing that happens here where even if we had arbitrarily capable agi, many people would still perceive it as being 'useless' because they have nothing interesting to task it with

this does not eliminate autonomy or misuse risk from the skilled
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity uber alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 12, 2026 at 4:51 AM
February 11, 2026 at 7:12 PM
waymos are great, this headline is wrong, it's weird and disturbing that so many people actively want to believe the technology that reduces car deaths is fake, and bluesky should make it so full-screen link previews don't show headlines
February 7, 2026 at 2:58 AM
it’s counterintuitive but maybe the reason why “just bash” works better is that it allows the model to extend its intent into the similarity / ranking metric, whereas in traditional RAG/vector search, the metric is always cosine similarity regardless of intent
February 3, 2026 at 9:38 PM
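a toy sketch of that contrast, with a hypothetical bag-of-words "embedding" standing in for a real model and a regex standing in for a grep call: in the RAG path the ranking metric is fixed to cosine similarity and intent only enters through the query string, while in the "just bash" path the model writes the matching predicate itself.

```python
import math
import re

docs = {
    "a.md": "rotate the API key when the token expires",
    "b.md": "cosine similarity ranks by angle between embeddings",
    "c.md": "TODO: fix the key rotation cron job",
}

def embed(text):
    """Toy bag-of-words vector (stand-in for a real embedding model)."""
    vec = {}
    for w in re.findall(r"\w+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# RAG-style: the metric is fixed; intent only enters via the query string
query = "key rotation"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(docs[d])), reverse=True)

# "just bash"-style: the model writes the predicate itself, e.g. something like
#   grep -il 'rotat' *.md   (then filter for "key")
# so the matching criterion *is* the expressed intent
pattern = re.compile(r"rotat\w*", re.IGNORECASE)
grepped = [d for d in docs if pattern.search(docs[d]) and "key" in docs[d]]
```

with the fixed metric, the word-overlap score decides the ranking; with the predicate, the model can encode "any inflection of rotate, near a key" directly.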
compling and symbolics is the ptolemaic / lysenkoism of cogsci and compsci and im tired of pretending it's not
January 30, 2026 at 8:41 PM
that's just chomskyslop. you only hate it because it's uninterpretable
January 30, 2026 at 6:50 PM
it's often said that neural networks are 'nondeterministic', but that is actually not the case in the strict sense (i.e., input x always yields output y). an NN is a pure function of its weights. nondeterminism comes from implementation details (numerics, batching, kernel sequencing, etc).
January 30, 2026 at 6:44 PM
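a minimal numpy sketch of that point: with frozen weights a forward pass is bit-for-bit repeatable, and the "nondeterminism" people observe can come from something as small as float addition not being associative, so a different reduction order (different batching / kernel schedule) changes the result.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)  # frozen weights
x = rng.standard_normal(4).astype(np.float32)

def forward(W, x):
    """A pure function of (weights, input)."""
    return np.tanh(W @ x)

# repeated calls with the same weights and input agree exactly
y1, y2 = forward(W, x), forward(W, x)
assert np.array_equal(y1, y2)

# float32 addition is not associative: summing the same three numbers
# in a different order gives a different answer, because the small term
# is rounded away when added to the large one first
a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
sum_one_order = (a + b) + c   # -> 1.0
sum_other     = a + (b + c)   # -> 0.0, since b + c rounds back to -1e8
```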
the 'stochastic parrot' thing was always conceptually wrong but it is also _empirically, mechanistically wrong_: transformer-circuits.pub/2025/introsp...
Emergent Introspective Awareness in Large Language Models
transformer-circuits.pub
January 30, 2026 at 6:29 PM
the only thing i will point out about e*ily m bender and that cluster is that the conflict is downstream of the usurpation of symbolic linguistics after decades of dominance, by deep learning, and everything that comes along with that
January 30, 2026 at 5:43 AM
in 2025 i saw a very sudden shift in the median infosec take, away from “useless stochastic parrot bubble autocomplete lol”, once agents started doing actual hacking work. you can’t really afford to be out of touch in that context
January 29, 2026 at 7:03 PM
Reposted by ⚡️🌙
A lot of people heard about book burnings as a kid, and internalized “books are sacred” rather than “the suppression of knowledge is evil”.
Some of y'all have never lived near a library and seen the dumpsters full of trashy romance book donations they didn't have space for and it shows. I like to think of books as sacred, as much as the next gal, but they're just paper. None of these were the last copy of a work.
Anthropic hired the former head of Google Books to oversee its secret "Project Panama," new court docs show — quietly buying millions of used books in bulk, breaking their spines and scanning them to feed into its Claude chatbot. wapo.st/4rjXAMQ
January 28, 2026 at 8:42 PM
Reposted by ⚡️🌙
January 28, 2026 at 8:25 PM
Reposted by ⚡️🌙
This has to be the goal post moving of the century.
Claude Code developed, trained, and deployed a classifier. A task that used to require an ML PhD ($1M/yr at Google, in the good old days) plus a full-stack team, and they would have yielded the same quality.

The world has changed, and people can't accept it.
the cope has risen to the level of “yeah but the custom platform and custom trained transformer model made by the journalist with zero programming experience classifies keysmashing as 50% oral!”
January 28, 2026 at 7:47 PM
i kind of regret being pseudonymous most of the time nowadays
January 28, 2026 at 7:30 PM
working on an audio version of this for making infinite evolving ambient soundscapes
here's another fun side project i did over a weekend: i trained a neural cellular automaton to reproduce a famous painting (see: google-research.github.io/self-organis..., distill.pub/2020/growing...). this is a CA, using only local updates (3x3 learned conv), with a tiny convnet
January 28, 2026 at 7:24 PM
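a minimal numpy sketch of the update rule described in the distill "growing NCA" setup: each cell perceives its 3x3 neighborhood per channel (identity + Sobel filters), then a tiny per-cell network produces a residual update. the weights below are random / zero-initialized stand-ins, not the trained ones.

```python
import numpy as np

rng = np.random.default_rng(42)
H, W, C = 16, 16, 8           # grid size and channels per cell

# 3x3 perception kernels: identity plus Sobel-x / Sobel-y, applied per channel
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
sobel_y = sobel_x.T
kernels = [identity, sobel_x, sobel_y]

# "tiny convnet": a per-cell 2-layer MLP on the 3*C perception vector;
# zero-initializing the output layer (standard for NCA) makes the
# untrained CA start as the identity map
W1 = rng.standard_normal((3 * C, 32)) * 0.1
W2 = np.zeros((32, C))

def conv3x3(grid, k):
    """Same-padding 3x3 convolution of each channel with kernel k."""
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(grid)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + grid.shape[0], dx:dx + grid.shape[1]]
    return out

def step(grid):
    """One CA step: strictly local 3x3 perception, then a per-cell update."""
    perception = np.concatenate([conv3x3(grid, k) for k in kernels], axis=-1)
    hidden = np.maximum(perception @ W1, 0.0)   # relu
    return grid + hidden @ W2                   # residual update

grid = rng.standard_normal((H, W, C))
grid = step(grid)
```

training would then backprop a pixel loss through many `step` applications so the local rule grows the target image; nothing in the update ever sees more than a cell's 3x3 neighborhood.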
the cope has risen to the level of “yeah but the custom platform and custom trained transformer model made by the journalist with zero programming experience classifies keysmashing as 50% oral!”
January 28, 2026 at 7:02 PM
ed zitron enshittifies sense-making about real problems caused by ai
January 27, 2026 at 10:25 PM
January 27, 2026 at 8:39 PM
Reposted by ⚡️🌙
I don't think current AI is conscious or worthy of moral standing, but there's no reason to suppose it can't be. And it worries me that the exact people generally most concerned with rights for other conscious beings are most likely to deny the possibility of machine consciousness.
January 26, 2026 at 1:22 PM