Hadi Vafaii
@hadivafaii.bsky.social
Postdoc at UC Berkeley, Redwood Center | 🧠🤖 | 🎹 | 🎾 | https://mysterioustune.com/
I'm excited to announce that we've added two more contenders to the RL Debate Series!

- Anne Collins (Professor @ucberkeleyofficial.bsky.social)
- Niels Leadholm (Research Manager @thousandbrains.org)

Things are heating up!!

See the flyer for details👇

🌎 sensorimotorai.github.io/debates/

🧠🤖🧠📈
October 3, 2025 at 6:19 AM
Not all 'active' learning is 'reinforcement' learning.

➡️ This suggests we need to upgrade—or even replace—RL with a more general theory of active learning that's not solely reward-based

✅Which is why we are launching the RL Debates

More info: sensorimotorai.github.io/debates/

🧵[4/5]

🧠🤖🧠📈
September 17, 2025 at 4:32 PM
For decades, reinforcement learning (RL) has been synonymous with "active learning".

This figure from Sutton & Barto's classic textbook summarizes the standard view👇

🤔 But what if the premise of an 'external, scalar reward' is a simplification we need to move past?

🧵[3/5]
September 17, 2025 at 4:32 PM
🌍 More info on our website: sensorimotorai.github.io/debates/
💬 Join us on Slack: join.slack.com/t/sensorimot...

💡 The main motivation: recognizing how central "action" is in all things intelligence

Perception, cognition, and knowledge are fundamentally intertwined with "action"

🧵[2/5]
September 17, 2025 at 4:32 PM
What drives behavior in living organisms? And how can we design artificial agents that learn interactively?

📢 To address these, the Sensorimotor AI Journal Club is launching the "RL Debate Series"👇

w/ @elisennesh.bsky.social, @noreward4u.bsky.social, @tommasosalvatori.bsky.social

🧵[1/5]

🧠🤖🧠📈
September 17, 2025 at 4:32 PM
"2000s Dad Rock" is a thing 🙂
August 31, 2025 at 12:02 AM
Our first 'Sensorimotor AI Journal Club' meeting was a blast!

(...which, we might as well just call the "AI Club for Non-conformists"👇😎)

📽️ full presentation: youtube.com/watch?v=efc7...

🧵[1/4]

🧠🤖🧠📈
August 10, 2025 at 12:50 AM
Announcing the new "Sensorimotor AI" Journal Club — please share/repost!

w/ Kaylene Stocking, Tommaso Salvatori, and @elisennesh.bsky.social

Sign up link: forms.gle/o5DXD4WMdhTg...

More details below 🧵[1/5]

🧠🤖🧠📈
July 9, 2025 at 10:31 PM
truly ahead of his time
July 1, 2025 at 5:30 AM
Stay tuned for Part 2, where I plan to:

1⃣ introduce Bayes' theorem, and provide a visual proof
2⃣ apply it to explain both optical illusions and social delusions

But first, you need to read Part 1 as a prerequisite 🙂 Here's the link again: mysterioustune.com/2025/06/29/p...

[5/6]🧵

🧠🤖🧠📈
June 30, 2025 at 5:30 AM
💡 Main insight: Perception ≠ Reality

🧠 Plato wrote about this back in 380 BC

✅ But now, we have the mathematics of Bayesian posterior inference to reason about this concept

If you’ve also wondered about this, you’re in good company. At least a 2,400-year-old company 😉

[4/6]🧵
June 30, 2025 at 5:30 AM
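The Bayesian-posterior idea in the post above can be sketched in a few lines of Python (toy numbers, purely illustrative; the scenario and probabilities are made up, not from the blog post):

```python
# Bayes' rule: posterior ∝ likelihood × prior
# Toy example: is a dimly lit shape a cat or a shadow?
priors = {"cat": 0.2, "shadow": 0.8}        # prior beliefs about the world
likelihoods = {"cat": 0.9, "shadow": 0.3}   # P(blurry image | hypothesis)

unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())        # P(blurry image)
posterior = {h: v / evidence for h, v in unnormalized.items()}

# The strong "shadow" prior keeps it the likelier percept
# even though the likelihood favors "cat" — Perception ≠ Reality.
print(posterior)
```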
3D visual perception is a standard way of motivating why we need inference to interpret the world.

💡But did you know John Locke first thought of this example way back in 1690?

Read my blog to learn about the rich history behind this idea.

[3/6]🧵
June 30, 2025 at 5:30 AM
“Perception as Inference” is a century-old idea that has inspired all major theories in neuroscience 🧠, including:

✅ Sparse Coding
✅ Predictive Coding
✅ Free Energy Principle
& more!

In my new blog post, I build the intuition behind this idea from the ground up 👉[1/6]🧵

🧠🤖🧠📈
June 30, 2025 at 5:30 AM
✅When unrolled in time, the iP-VAE inference algorithm looks like a deep, stochastic, spiking ResNet with parameter sharing.

✅The exp nonlinearity and emergent divisive normalization likely underlie iP-VAE's effectiveness — similar to xLSTM (openreview.net/forum?id=ARA...)

🧵[11/n]
May 19, 2025 at 6:36 AM
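A minimal sketch of what "unrolled in time" means here, assuming only the general shape described in the post (one shared update, exp nonlinearity, residual correction); the actual iP-VAE update rule differs:

```python
import numpy as np

def unrolled_inference(x, T=16, step=0.5):
    """Schematic only: the same update applied T times is equivalent to a
    deep residual network with parameter sharing (not the iP-VAE math)."""
    u = np.zeros_like(x)              # membrane-potential-like state
    for _ in range(T):                # each iteration ~ one residual block
        rate = np.exp(u)              # exp nonlinearity -> nonnegative rates
        u = u + step * (x - rate)     # skip connection: correct u by the error
    return np.exp(u)

x = np.array([0.5, 1.0, 1.5])
print(unrolled_inference(x, T=100))   # rates converge toward x
```

Running more iterations at test time than at training time (the T_train < T_test setup mentioned later in the thread) is natural here, since every iteration reuses the same parameters.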
iP-VAE is a spiking inference model, and it reproduces well-studied cortical response properties.

For example, contrast-dependent response latency of V1 neurons (Carandini et al., 1997):

(compare to Fig. 3A here: doi.org/10.1523/jneu...)

🧵[9/n]
May 19, 2025 at 6:36 AM
We did a massive hyperparameter sweep and found that:

✅iP-VAE and LCA (a classic sparse coding algorithm) find the best overall reconstruction-sparsity trade-off
✅All iterative VAEs outperform their standard amortized counterparts (despite using 25x fewer parameters)

🧵[8/n]
May 19, 2025 at 6:36 AM
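The reconstruction-sparsity trade-off from the sweep can be illustrated with ISTA, a standard sparse coding solver related in spirit to LCA (this is not the paper's code; the dictionary and data are random toys):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(8, 32))
D /= np.linalg.norm(D, axis=0)        # random dictionary, unit-norm atoms
x = rng.normal(size=8)                # a toy "image patch"

def ista(x, D, lam, steps=500):
    """ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2     # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)         # gradient of the reconstruction term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)  # soft threshold
    return a

# Sweeping lam traces out the reconstruction-sparsity trade-off curve:
for lam in (0.01, 0.1, 0.5):
    a = ista(x, D, lam)
    err = np.linalg.norm(x - D @ a)
    print(f"lam={lam}: recon error={err:.3f}, active units={np.count_nonzero(a)}")
```

Larger sparsity penalties give fewer active units at the cost of reconstruction fidelity; the sweep in the paper searches for the best point on that curve.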
We also compare to other models, such as a Gaussian version (closely related to predictive coding networks).

✅All models converge beyond their training regime (T_train = 16, T_test = 1000)
✅iP-VAE finds the best compromise between reconstruction fidelity and sparsity

🧵[7/n]
May 19, 2025 at 6:36 AM
If you follow those prescriptions, you end up with this canonical circuit model that computational neuroscientists have been studying since the 1970s.

Let me emphasize:

✅ We "derived" this. Just from F minimization. Without putting any of those terms in there by hand.

🧵[6/n]
May 19, 2025 at 6:36 AM
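For reference, the F being minimized is the standard variational free energy from textbook variational inference (its general form only; no iP-VAE specifics):

```latex
\mathcal{F}(q) = \mathbb{E}_{q(z)}\!\left[\log q(z) - \log p(x, z)\right]
= \underbrace{D_{\mathrm{KL}}\!\big(q(z)\,\|\,p(z)\big)}_{\text{complexity}}
- \underbrace{\mathbb{E}_{q(z)}\!\big[\log p(x \mid z)\big]}_{\text{accuracy}}
```

Minimizing F over q trades accuracy against complexity; the thread's claim is that gradient steps on this objective, under the stated prescriptions, yield the canonical circuit.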
But free energy (F) minimization is not enough.

We need specific "prescriptions" to guide top-down algorithm development. Otherwise, we risk falling into this trap:

P.S. I recently learned this is called #Bayesplaining 🙂

🧵[4/n]
May 19, 2025 at 6:36 AM
This paper is half synthesis/review, half new algorithm design.

We start by recognizing the free energy (F) principle as our best candidate for a unified theory of biological and artificial intelligence.

Here's why👇

🧵[3/n]
May 19, 2025 at 6:36 AM
Elegant theoretical derivations are exclusive to physics. Right?? Wrong!

In a new preprint, we:
✅ "Derive" a spiking recurrent network from variational principles
✅ Show it does amazing things like out-of-distribution generalization
👉[1/n]🧵

w/ co-lead Dekel Galor & PI @jcbyts.bsky.social

🧠🤖🧠📈
May 19, 2025 at 6:36 AM
Thanks for sharing. I am also doing my best to spread the word 🙂 (see top left)

From: openreview.net/forum?id=ekt...
February 3, 2025 at 10:13 PM
I end with a call to action:

We are each gifted a finite amount of time on this earth.

How will you spend yours? Chasing fleeting distractions? Or contributing to humanity's deepest, longest ongoing quest—minimizing our collective KL divergence?

Your choice.

🧵[15/n]
🧠🤖🧠📈 #AI
January 14, 2025 at 7:53 AM
In the season finale (Part X), I will go all-in on speculation: p_brain is the all-encompassing object, swallowing everything. Even physics.

In that sense, all of science and philosophy merge into a single pursuit:

➡️ Humanity's collective KL minimization.

🧵[14/n]
January 14, 2025 at 7:53 AM
But hold on a second. We’re not done yet.

It turns out, I need 10 more parts to finish the full story arc. Here’s what I have in mind next:

🧵[13/n]
January 14, 2025 at 7:53 AM