Teemu Sarapisto
@tsarpf.bsky.social
CS/ML PhD research in high-dimensional time-series @helsinki.fi

Before: 7y of C++/JS/VR/AR/ML at Varjo, Yle, Reaktor, Automattic ...

After dark: synthesizers & 3D gfx
If we train LLMs to use browsers, I doubt it makes them 10x more useful, because they are dumb af 😄

My guess is that most progress in "agents" will be driven more by human-developed, LLM-friendly APIs than by improvements in the generalization capabilities of LLMs. No exponential speed-ups there.
April 8, 2025 at 9:50 AM
Good point, a sloppy reply from me.

I just meant that language is not the end-all tool for everything. Sure, LLMs can be trained to use tools like calculators and browsers, just as we do. But so far we need to develop those tools, and train the LLMs to use them.
April 8, 2025 at 9:45 AM
What I expect is that scenarios that are particularly economically valuable will get neat automated solutions.

Either via 1000 people annotating data for a year, or a bunch of scientists coming up with neat self-supervised losses for it 😆
April 4, 2025 at 12:52 PM
It assumes continued algorithmic development to maintain the exponential progress.

IMO the (multimodal) LLM paradigm of handling everything in a single model will not scale. Language is a bad abstraction for 1) math (LLMs can't multiply) 2) physical things (where is my cleaning robot?)

End of the sigmoid for data/compute.
April 4, 2025 at 12:50 PM
Nice to find you here then!

That'll be a difficult read given my limited background in dynamics/control/RL, but it's on the TODO list.

Coming from ML, Neural ODEs got me hooked on dynamics and state spaces. Also variational math x optimal control is 🔥

Now learning basics from the book by Brunton/Kutz.
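
To make "Neural ODEs" concrete for anyone reading along, here's a minimal, purely illustrative sketch (tiny MLP vector field, fixed-step Euler integrator, made-up toy target; none of this is from any particular paper or library). The point is just that you can differentiate straight through the integrator:

```python
import jax
import jax.numpy as jnp

def vector_field(params, z):
    """Tiny MLP parameterizing the learned dynamics dz/dt = f_theta(z)."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(z @ W1 + b1)
    return h @ W2 + b2

def integrate(params, z0, t1=1.0, n_steps=50):
    """Fixed-step Euler integration of the learned dynamics; differentiable end to end."""
    dt = t1 / n_steps
    def step(z, _):
        return z + dt * vector_field(params, z), None
    zT, _ = jax.lax.scan(step, z0, None, length=n_steps)
    return zT

def loss(params, z0, target):
    """Squared error between the ODE's endpoint and a (made-up) target state."""
    return jnp.sum((integrate(params, z0) - target) ** 2)

dim, hidden = 2, 16
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = (0.1 * jax.random.normal(k1, (dim, hidden)), jnp.zeros(hidden),
          0.1 * jax.random.normal(k2, (hidden, dim)), jnp.zeros(dim))

z0, target = jnp.array([1.0, 0.0]), jnp.array([0.0, 1.0])
grads = jax.grad(loss)(params, z0, target)  # gradients flow through the integrator
print(loss(params, z0, target))
```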
March 30, 2025 at 4:55 PM
www.foundationalpapersincomplexityscience.org/tables-of-co...

has a nice overview of the papers.

For example
- The 1943 McCulloch-Pitts paper (neural nets)
- Landauer's principle (reversible computing)
- Info theory (Shannon's og paper)
- State space models (Kalman)
...Turing's AI, Nash equilibrium...
March 28, 2025 at 10:22 PM
I guess one could call this moving the goalpost so far that nothing will ever suffice 😁
March 27, 2025 at 10:14 AM
"If intelligence lies in the process of acquiring new skills, there is no task X that skill at X demonstrates intelligence"
March 27, 2025 at 10:12 AM
Ok, heh, well, part of the reason for the \infty in there is that the simulator got stuck in a non-chaotic loop due to integration errors. Grabbing the samples from before the looping just gives a more uniform distribution.
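
As a purely illustrative sketch of what I mean (the Lorenz system and a coarse fixed-step Euler integrator stand in for the actual simulator here; both are assumptions, not the real setup), one can just stop collecting samples once the integration error makes the trajectory blow up or settle into a loop:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz right-hand side (a stand-in for the actual simulator's dynamics)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def simulate_and_truncate(n_steps=50_000, dt=0.02, tol=1e-6):
    """Coarse fixed-step Euler integration.

    Stop collecting as soon as the state blows up (non-finite) or barely moves
    between steps (collapsed onto a loop/fixed point), and keep only the samples
    from before that point.
    """
    s = np.array([1.0, 1.0, 1.0])
    samples = [s]
    for _ in range(n_steps):
        s_next = s + dt * lorenz(s)
        if not np.all(np.isfinite(s_next)) or np.linalg.norm(s_next - s) < tol:
            break  # integration error has taken over; discard everything from here on
        samples.append(s_next)
        s = s_next
    return np.stack(samples)

samples = simulate_and_truncate()
print(f"kept {len(samples)} samples")
```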
March 20, 2025 at 6:46 PM
1) In something like the 37th layer, the model is (weighted-)summing vectors that have already been combined with every other vector in the input sequence 36 times, plus the effect of residual connections and multiple heads.
2) The tokens are (usually) not even full words to begin with. 2/2
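
A toy sketch of that mixing effect (plain single-head attention with residuals, no learned weights, layer norms or MLPs, so purely illustrative): perturb one input position, and after a few layers every output position has changed, i.e. each vector already carries information from the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_layer(X):
    """One toy self-attention layer with a residual connection.
    Here Q = K = V = X; a real transformer adds learned projections,
    MLPs, layer norm and multiple heads on top of this."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
    return X + weights @ X                          # residual + mix of every position

def run(X, n_layers=4):
    for _ in range(n_layers):
        X = attention_layer(X)
    return X

seq_len, d_model = 8, 16
X0 = rng.normal(size=(seq_len, d_model))

Y = run(X0)
X0_perturbed = X0.copy()
X0_perturbed[0] += 0.1                              # nudge only the first token

# Every output position moves: after a few layers each vector already depends
# on all positions, not just "its own word".
print(np.linalg.norm(Y - run(X0_perturbed), axis=1))
```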
March 10, 2025 at 10:24 PM
Great visualizations, and excellent explanation of KV cache. But their intuitive reasoning about attention adding the meaning(s) of words to others is quite misleading. 1/2
March 10, 2025 at 10:23 PM
Yeah, it's a bit too silent here, and the recommendation algorithm on bsky is not working great. The amount of clicking "show less like this" I'm doing is stupid.

Meanwhile, every time I check X I find a ton of interesting stuff, unfortunately mixed with a lot of toxic bullshit as well.
February 5, 2025 at 1:08 PM
What are you referring to? I've missed this.
February 5, 2025 at 1:04 PM
Request and personal opinion: I would prefer it if you focused less on the latest hype the AI swindlers are pushing out.

You have had unique angles on the physics stuff, while anyone with a brain can see that, even though OpenAI does very cool research, they are over-hyping every single release.
January 29, 2025 at 9:38 AM
And oh yeah, nice visualization! I really liked being able to compare the ELBO and log(z).
December 5, 2024 at 1:56 PM
I've taken one course in Bayesian ML, so I barely know the basics 😄

But somehow the fact that there are no consistency/identifiability guarantees even with infinite data makes me afraid of VI 😅

2/2
December 5, 2024 at 1:55 PM
IRL we don't know the shape of the true posterior (or log Z). When can you trust the approximation enough to "dare" estimate uncertainty?

In practice, would you, e.g., try adding GMM components to boost the ELBO? You'd need to keep everything else fixed for comparability, right?

1/2
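
To spell out what "adding GMM components to boost the ELBO" would look like, here's a rough Monte Carlo ELBO sketch (toy 1D bimodal target, hand-picked rather than optimized variational parameters; all of this is assumed purely for illustration). Same target, same sample budget, only the number of mixture components in q changes:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def log_p_tilde(z):
    """Unnormalized log target: a two-mode 1D density standing in for the true posterior."""
    return np.logaddexp(norm.logpdf(z, -2.0, 0.7), norm.logpdf(z, 2.0, 0.7))

def mc_elbo(means, stds, weights, n_samples=50_000):
    """Monte Carlo estimate of ELBO = E_q[log p_tilde(z) - log q(z)]
    for a mixture-of-Gaussians q with fixed (not optimized) parameters."""
    weights = np.asarray(weights) / np.sum(weights)
    comps = rng.choice(len(weights), size=n_samples, p=weights)
    z = rng.normal(np.asarray(means)[comps], np.asarray(stds)[comps])
    log_q = np.logaddexp.reduce(
        [np.log(w) + norm.logpdf(z, m, s) for w, m, s in zip(weights, means, stds)], axis=0)
    return np.mean(log_p_tilde(z) - log_q)

# Everything else held fixed; only the variational family grows.
print("1 component :", mc_elbo([0.0], [2.0], [1.0]))
print("2 components:", mc_elbo([-2.0, 2.0], [0.7, 0.7], [0.5, 0.5]))
```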
December 5, 2024 at 1:54 PM
For the past 2 years, every time I've tried to use jax-metal it has either refused to work at all due to features being unimplemented, or provided wrong results in a very simple test scenario. So I just use the CPU version on my M2...
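
The kind of "very simple test scenario" I have in mind is roughly this (a hypothetical sanity check, not the exact one I ran): compare a basic op on whatever backend jax picked against a plain numpy CPU reference.

```python
import numpy as np
import jax
import jax.numpy as jnp

print("jax backend:", jax.default_backend(), jax.devices())

rng = np.random.default_rng(0)
a = rng.normal(size=(256, 256)).astype(np.float32)
b = rng.normal(size=(256, 256)).astype(np.float32)

expected = a @ b                                    # numpy CPU reference
got = np.asarray(jnp.asarray(a) @ jnp.asarray(b))   # jax on the active backend (e.g. metal)

print("max abs diff:", np.abs(expected - got).max())
assert np.allclose(expected, got, atol=1e-3), "backend produced wrong results"
```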
December 5, 2024 at 1:28 PM