Maxine 💃🏼
@maxine.science
🔬 Looking at the brain’s “dark matter”
🤯 Studying how minds change
👩🏼‍💻 Building science tools

🦋 ♾️ 👓
🌐 maxine.science
eh.
January 12, 2026 at 6:05 AM
@anisota.net some sweet moth stuff
January 12, 2026 at 5:52 AM
AI at present is a *huge* accelerant of this homogenizing force. And without care, it can squeeze genuine craft out of the market entirely.
January 11, 2026 at 5:29 PM
Is Ass Creed 14 more beautiful than System Shock? Most certainly—yes.

Is it as creative, innovative, and genuinely artistic? F*cking obviously no.

Is that because to make a game now you need a 9-figure budget and 15 different studios making 400 standard-issue side quests? Absolutely.
January 11, 2026 at 5:25 PM
Really in direct parallel to the fears about AI: while it's true that indie teams can still make genuine and innovative art, the mainstream is near-entirely stripped of individuality and creativity, converging on a homogeneous formula.

Cf. film—everything now looks beautiful and the same.
January 11, 2026 at 5:17 PM
And yet at the same time, as an observation from someone who has been a gamer since the mid-90s: the essence of most mainstream games has been completely wrung dry, as financial pressures and tooling have shifted median user expectations toward a standard, cookie-cutter set of experiences. +
January 11, 2026 at 5:17 PM
“point noted.”

—9-year-old me
January 11, 2026 at 6:03 AM
apropos of nothing: when I was a vegetarian, my dad served me Iberico ham without telling me, and it was the most delicious thing I had ever tasted, and that made me stop being vegetarian.
January 11, 2026 at 6:02 AM
(The distinction between “solution to an unsolved problem with old math” and “new math” is a subtle and intuitive one haha.)
January 11, 2026 at 5:56 AM
I would not at all be freaked if these have gotten to the point of really sophisticated attempts at putting existing pieces together and quickly verifying them. I would be very freaked if there’s new math in the proofs; I haven’t read Terry’s analysis of the text yet to see which these were haha.
January 11, 2026 at 5:55 AM
The question is whether there was any “new math” (term of art) in the proofs.

As far as “computers try a whole bunch of verifications” goes, mathematicians have used this to great effect for decades; the classification of finite simple groups and sphere packing both required big computer checks iirc. +
January 11, 2026 at 5:55 AM
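To make “quickly verifying them” concrete (my example, not from the thread): in a proof assistant like Lean 4, the kernel certifies a decidable claim by computation rather than by a human reading the argument.

```lean
-- The kernel checks this by evaluation: `decide` runs the decision
-- procedure for equality on Nat and certifies the result mechanically.
example : 2 ^ 10 = 1024 := by decide
```

The big computer-assisted proofs work the same way in spirit: reduce the open question to an enormous but mechanical case check.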
3. LLMs work because they do this, badly.
January 11, 2026 at 4:41 AM
2. The right generalization of learning algorithms lies in the choice of flows on internal activation geometry that converge to a topos equivalent to a target category (“the semantics of natural language”), up to a suitable class of weak equivalences adapted to the problem. +
January 11, 2026 at 4:41 AM
1. The internal logic of an LLM is dictated by the topos corresponding to the (highly nontrivial) topology of internal activations. +
January 11, 2026 at 4:41 AM
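The thread above doesn't pin down a construction, so purely as a hypothetical illustration of the coarsest version of “the topology of internal activations” (Betti-0, i.e. connected components of the activation point cloud), here is a sketch; the random clusters stand in for real activations:

```python
import numpy as np

def betti_0(points: np.ndarray, eps: float) -> int:
    """Number of connected components of the eps-neighborhood graph."""
    n = len(points)
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Union every pair of points closer than eps.
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    for i, j in zip(*np.where(dists < eps)):
        parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Stand-in for hidden activations: two well-separated clusters in R^64.
rng = np.random.default_rng(0)
acts = np.vstack([rng.normal(0.0, 0.1, (50, 64)),
                  rng.normal(3.0, 0.1, (50, 64))])
print(betti_0(acts, eps=2.0))  # -> 2: the point cloud has two components
```

Richer invariants (loops, persistence across scales) would need persistent homology; the point is only that activation geometry has measurable topology at all.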
shhhhh the AI people aren’t ready for the theory they’ve been looking for. in due time.
January 11, 2026 at 4:22 AM
2. Also, the continued ascending infinite generalization doesn’t automatically follow—it really is the case that each step has to be independently evaluated, and in fact each of the expressed stages is extremely spiky even still. It’s equally lazy to be a scale bro.
January 9, 2026 at 7:41 PM
1. The 4.5 models really have been a big change for me, after goofing around with CC on the 4 models and finding it ass. +
January 9, 2026 at 7:41 PM
So all of this is again just: given N samples, you have a fixed budget of Fisher information, and this only lets you know a set of parameters so well in total, per the Cramér-Rao bound, which you observe by proxy through a functional of the parameters (eval loss).
January 9, 2026 at 8:39 AM
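For reference, the bound being invoked above, in its multivariate form (standard regularity assumptions; notation mine):

```latex
% N i.i.d. samples, per-sample Fisher information matrix I(\theta):
% any unbiased estimator \hat{\theta} satisfies
\operatorname{Cov}(\hat{\theta}) \succeq \tfrac{1}{N}\, I(\theta)^{-1}
% and a smooth functional g(\theta) (e.g. eval loss) inherits the bound
% via the delta method:
\operatorname{Var}\!\bigl(g(\hat{\theta})\bigr) \gtrsim
  \tfrac{1}{N}\, \nabla g(\theta)^{\top} I(\theta)^{-1}\, \nabla g(\theta)
```

The information budget really is linear in N; more samples is the only way to tighten it.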
I anticipate the FLOPs are just a proxy for the effective number of samples, given the inefficiencies of batched SGD, and you end up at an optimal point on the bias-variance tradeoff at a fixed number of points once you fix a crossval scheme. +
January 9, 2026 at 8:39 AM
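A toy instance of the claim above, as a sketch (the polynomial setup and every name here are illustrative assumptions, not from the post): fix N points and a k-fold scheme, and cross-validation lands on the model complexity at the bias-variance optimum for that N.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_mse(x: np.ndarray, y: np.ndarray, degree: int, k: int = 5) -> float:
    """Mean k-fold cross-validation error of a degree-`degree` polynomial fit."""
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

# N noisy samples of a smooth target; N is the whole information budget.
N = 200
x = rng.uniform(-1.0, 1.0, N)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.3, N)

# Sweep model complexity; CV picks the bias-variance optimum for this N.
scores = {d: cv_mse(x, y, d) for d in range(1, 15)}
best = min(scores, key=scores.get)
print(f"N={N}: CV-optimal polynomial degree = {best}")
```

Double N and the selected degree creeps up: the optimum moves because the variance half of the tradeoff relaxes.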
the multivariate case adds the wrinkles that make this quantitative in the number of parameters too, given some nice assumptions
January 9, 2026 at 8:33 AM
yeah my contention is this is trivial from classical statistics if you jiggle the problem around a little
January 9, 2026 at 8:31 AM
Medical student here!

Hypothesis: <10%.

Reasoning: How many times have you seen a doctor in your life? Are you dead? Scale.
January 8, 2026 at 11:51 PM