Tom Ringstrom
@noreward4u.bsky.social
Reward-Free Model-based Maximalist. High-dimensional Empowerment. Self-Preserving Autonomous Agents. Theories of intelligence grounded in compositional control.
What's the quote?
August 19, 2025 at 5:52 PM
You should always give in to these impulses, IMO.
August 19, 2025 at 5:42 PM
Looks cool! Heads up, my collaborators and I derived the state-action Linearly Solvable MDP a while back. You might be interested arxiv.org/pdf/2007.02527
arxiv.org
May 25, 2025 at 3:51 PM
Ringstrom_Thesis_v4.pdf
drive.google.com
March 24, 2025 at 9:17 PM
Stoffel lives at an animal rehabilitation center near Kruger National Park and is an expert escape artist. But he is 26 now (they only live an average of 8 years in the wild) so he spends most days snuggling with his girlfriend Hammie. BBC show: m.youtube.com/watch?v=c36U...
Stoffel, the honey badger that can escape from anywhere! - BBC
YouTube video by BBC
m.youtube.com
March 24, 2025 at 9:17 PM
And I've always wondered how this works with his Constructor theory.
March 14, 2025 at 7:26 PM
By the way, while Deutsch doesn't have a deeply rigorous decision theory to match his views, I did once hear him say (on a podcast I can't seem to find) that he regards value as equivalent to the space of possible transformations one can make, which is to a close approximation what empowerment is.
March 14, 2025 at 7:23 PM
Deutsch's emphasis on universal explainers is a better (though incomplete) alternative, and it has nothing to do with emulation (he never talks about the normative part of why one should want to explain something).
March 13, 2025 at 12:13 PM
Yeah, their emphasis on emulation is frustrating (for somewhat similar reasons to the recent Jaeger/Vervaeke paper). AI-by-learning being intractable is not interesting. It doesn't imply anything about the intractability of creating generally intelligent systems.
March 13, 2025 at 12:13 PM
IMO, a problem with RL is that, in sparse-reward problems, value functions don't have a general decomposition over high-dimensional transition kernels, so people are trying to learn neural-net approximations to difficult-to-generalize functions from a lot of experience.

Fun ep.
E61: Neurips 2024 RL meetup Hot takes: "What sucks about RL?"
What do RL researchers complain about after hours at the bar? In this "Hot takes" episode, we find out!
Recorded at The Pearl in downtown Vancouver, during the RL meetup after a day of NeurIPS 2024.
January 6, 2025 at 5:54 PM
Exactly :)
December 17, 2024 at 4:36 PM
@denizrudin.bsky.social Deniz, did your grandma and grandpa call you baby Rudin?
December 8, 2024 at 2:47 PM
Love that Fog Lake song.
December 7, 2024 at 3:47 PM
Reposted by Tom Ringstrom
I wish we could turn some of the starter-packs into a custom feed rather than following everyone.
November 25, 2024 at 5:37 PM
@bsky.app Please consider this!
November 25, 2024 at 5:45 PM