⚡️🌙
dystopiabreaker.xyz
@dystopiabreaker.xyz
recovering cryptographer building ML models, doing systems work, security, etc.
November 12, 2025 at 2:47 AM
this really captures the specific set of cached/unconfronted assumptions that underlie a lot of discourse, especially here (from @andymasley.bsky.social)
November 7, 2025 at 1:16 PM
October 14, 2025 at 10:57 PM
gustav iii of sweden ass post
October 14, 2025 at 4:29 PM
Don’t Worry — It Can’t Happen

(also, the scientists who claim fission exists are just in the pocket of Big Science, and they’re literal nazis anyway, and also it’s just a stochastic reaction that peters out, and only physbros care about it, and it hasn’t ever happened before so it won’t)
October 14, 2025 at 1:03 AM
October 10, 2025 at 7:44 PM
come on man
October 8, 2025 at 10:56 PM
October 8, 2025 at 12:36 AM
if you're curious about the architecture and mechanics of LLMs, this site has a really excellent explorable interactive visualization. it helps build intuition for how massive these models are, what 'interpretability' means, and the complexity involved here

bbycroft.net/llm
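to get a feel for the "how massive" part without the visualization, here's a toy back-of-the-envelope parameter count for a GPT-2-style decoder-only transformer (illustrative config and formula, not pulled from that site):

```python
# rough parameter count for a GPT-2-style decoder-only transformer
# (illustrative sketch; ignores positional embeddings and assumes
# tied input/output embeddings)

def transformer_params(n_layers, d_model, vocab_size, d_ff=None):
    d_ff = d_ff or 4 * d_model          # common convention: MLP is 4x wider
    embed = vocab_size * d_model        # token embedding matrix
    attn = 4 * d_model * d_model        # q, k, v, and output projections
    mlp = 2 * d_model * d_ff            # up- and down-projection
    norms = 4 * d_model                 # two layer norms, scale + bias each
    return embed + n_layers * (attn + mlp + norms)

# a GPT-2-small-like shape: 12 layers, 768 hidden, 50257 vocab
print(transformer_params(12, 768, 50257))  # ~124 million
```

scale that shape up a few orders of magnitude and you're in frontier-model territory, which is why interpretability is hard.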
October 6, 2025 at 5:51 PM
did you know that last year researchers finally completed a full digital connectome of the fruit fly?
October 6, 2025 at 5:48 PM
a while back some researchers made an interpretability framework that shows this “growth” (during unsupervised pretraining) quite beautifully in my opinion

arxiv.org/abs/2504.18274
October 5, 2025 at 5:35 PM
me posting on this website
October 5, 2025 at 3:37 PM
October 5, 2025 at 8:11 AM
October 5, 2025 at 7:36 AM
anyway, here is 2024 Nobel Prize in Physics winner Geoffrey Hinton discussing what we know about large AI models on 60 Minutes.
October 5, 2025 at 6:51 AM
the way it actually works is that we initialize 1 trillion random floating point numbers as the weights and biases of an enormous multilayer artificial neural network, give it tasks, and use vector calculus to update those 1 trillion parameters based on empirical task performance
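a toy version of that loop, shrunk from 1 trillion parameters to 3 (the setup is mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# start from random floating point numbers...
w = rng.normal(size=3)

# ...define a task (here: recover a hidden linear map)...
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5])

# ...and use vector calculus to update the parameters
# based on measured performance (squared-error loss)
lr = 0.1
for _ in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of the loss w.r.t. w
    w -= lr * grad                         # gradient descent step

print(np.round(w, 2))
```

nobody wrote the answer `[1.0, -2.0, 0.5]` into the program; the update rule found it. same principle, twelve orders of magnitude smaller.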
October 5, 2025 at 12:43 AM
the average person probably only knows about naive stochastic gradient descent and not newer optimizers like Adam and so on. they surely understand that deep learning systems are not explicitly programmed using human engineers writing human code.
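for anyone who genuinely hasn't seen the difference, here are the two update rules side by side on a toy quadratic (minimal sketch, constants per the Adam paper's defaults except the learning rate):

```python
import math

def sgd_step(w, g, lr=0.1):
    # plain stochastic gradient descent: step against the gradient
    return w - lr * g

def adam_step(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-parameter step sizes from running gradient statistics
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g          # running mean of gradients
    v = b2 * v + (1 - b2) * g * g      # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)          # bias correction for the
    v_hat = v / (1 - b2 ** t)          # zero-initialized moments
    return w - lr * m_hat / (math.sqrt(v_hat) + eps), (m, v, t)

# minimize f(w) = w^2 from w = 5 with each optimizer
w_sgd, w_adam, state = 5.0, 5.0, (0.0, 0.0, 0)
for _ in range(1000):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam, state = adam_step(w_adam, 2 * w_adam, state)
print(w_sgd, w_adam)
```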
October 4, 2025 at 3:15 PM
October 4, 2025 at 12:28 PM
artificial neuron jumpscare
October 4, 2025 at 6:51 AM
a stoichiometric model of language
October 4, 2025 at 6:14 AM
October 4, 2025 at 5:57 AM
from Russell & Norvig’s canonical text, if you’re actually curious about how and why the neuron metaphor is used, instead of moving the discussion to social relations to prove how you’re better than everyone else
October 3, 2025 at 9:33 PM
October 3, 2025 at 5:46 PM
does this make my sentence more obvious to you
October 3, 2025 at 1:43 PM
maybe this makes the latter more obvious
October 3, 2025 at 6:34 AM