Arvind Nagaraj
@narvind.bsky.social
Deep Learning | ML research |
Ex.Robotics at Invento | 🔗 https://narvind2003.github.io

Here to strictly talk about ML, NNs and related ideas. Casual stuff on x.com/nagaraj_arvind
It's a story about why QKV is magic, my love for the loop, and why HRM might be the blueprint for the next generation of AI reasoning.
My post, written with the help of an LLM (the irony!), is here. I poured my heart into this one:
medium.com/@gedanken.th...

#AI #DeepLearning #RNN #Transformer #HRM
The Loop is Back: Why HRM is the Most Exciting AI Architecture in Years
Years ago, I sat in Jeremy Howard’s FastAI class, right at the dawn of a new era. He was teaching us ULMFiT, a method he (& Sebastian…
medium.com
August 7, 2025 at 8:50 AM
The Hierarchical Reasoning Model (HRM) isn't just another model. It's a deep synthesis. It marries the iterative soul of an RNN (minus the BPTT nightmare) with the raw power of modern Attention.
I wrote a deep dive on why this is a full-circle moment for me, going back to the RNN finetuning days.
August 7, 2025 at 8:50 AM
What makes HRM truly special is its ability to "think fast and slow." Its ACT module isn't just a stop signal; it's a cognitive engine that learns to allocate effort.
It's the closest we've come yet to embodying Prof. Kahneman's vision of a System 1/2 mind in code.
August 7, 2025 at 8:50 AM
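A rough sketch of what an ACT-style halting controller could look like in code. This is a toy illustration with made-up names (`HaltingHead`, `should_halt`, `max_steps`), not HRM's exact formulation (the paper trains the halt decision with Q-learning):

```python
import torch
import torch.nn as nn

class HaltingHead(nn.Module):
    """Toy ACT-style controller: scores 'halt' vs 'continue' from the
    slow (H-module) state after each reasoning segment."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.q_head = nn.Linear(hidden_dim, 2)  # [q_halt, q_continue]

    def forward(self, h_state: torch.Tensor) -> torch.Tensor:
        return self.q_head(h_state)

def should_halt(q_values: torch.Tensor, step: int, max_steps: int) -> bool:
    """Stop when the halt score wins, or when the step budget runs out."""
    if step + 1 >= max_steps:
        return True
    q_halt, q_continue = q_values.unbind(dim=-1)
    return bool((q_halt > q_continue).all())
```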
But how does it fix mistakes buried deep in the past? By not letting them stay in the past.
Each new "Thinking Session" (the M-loop) starts with the flawed result of the last one. It forces the model to confront its own errors until the logic is perfect.
August 7, 2025 at 8:50 AM
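In code, that "Thinking Session" loop might look something like this minimal sketch: the state from one session is detached and fed into the next, so the model keeps revisiting its own output without backpropagating through time (`model`, `loss_fn`, `num_sessions` are placeholder names, not HRM's actual API):

```python
import torch

def run_thinking_sessions(model, x, state, target, num_sessions, optimizer, loss_fn):
    """Toy deep-supervision loop: each session starts from the previous
    session's (detached) result, so errors are confronted again and again,
    but gradients never flow back through earlier sessions (no BPTT)."""
    prediction = None
    for _ in range(num_sessions):
        state, prediction = model(x, state)   # one full M-loop segment
        loss = loss_fn(prediction, target)    # supervise every segment
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = state.detach()                # carry the result, cut the graph
    return state, prediction
```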
So how does HRM work? Imagine a tiny, 2-person company.
🧠 A strategic CEO (H-module) who thinks slow, sees the big picture, and sets the overall strategy.
⚡️ A diligent Worker (L-module) who thinks fast, executing the details of the CEO's plan.
This separation allows for truly deep, iterative thought.
August 7, 2025 at 8:50 AM
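A minimal sketch of that CEO/Worker split, using GRU cells as stand-ins (HRM actually uses attention blocks, and these names and shapes are my own): the fast L-module runs several inner steps for every single update of the slow H-module.

```python
import torch
import torch.nn as nn

class TwoTimescaleCore(nn.Module):
    """Toy H/L hierarchy: the fast Worker (L) iterates `inner_steps` times
    under the current plan before the slow CEO (H) revises the strategy."""
    def __init__(self, dim: int, inner_steps: int = 4):
        super().__init__()
        self.f_L = nn.GRUCell(dim, dim)  # fast, detail-level Worker
        self.f_H = nn.GRUCell(dim, dim)  # slow, big-picture CEO
        self.inner_steps = inner_steps

    def forward(self, x, z_H, z_L):
        for _ in range(self.inner_steps):
            z_L = self.f_L(x + z_H, z_L)  # Worker executes the CEO's plan
        z_H = self.f_H(z_L, z_H)          # CEO updates strategy from the result
        return z_H, z_L
```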
Then, last month, a paper dropped that changes everything.
This is the architecture I've been waiting for since 2018. A thread on HRM. 🧵
August 7, 2025 at 8:50 AM
You're supposed to what? Swallow the toothpaste?
March 30, 2025 at 4:52 AM

Taking a time machine within a time machine... stealing someone's consciousness... the ideas were next level!
The guy is a beast.
It's a shame Shane Carruth couldn't carry on making more amazing films.
December 7, 2024 at 8:01 PM
Yooo...a primer fan?
There are so many incredible moments in this film.
Wow... have you seen 'Upstream Color' as well?
December 7, 2024 at 7:57 PM
Wow!
I should read this!
December 5, 2024 at 3:11 PM
Ah...
December 3, 2024 at 6:22 PM
What does "fuch" mean?
December 3, 2024 at 2:56 PM
Diffusion transformer (DiT) ftw!!
December 3, 2024 at 8:32 AM
6. V is not rotated. Only Q and K are rotated relative to each other. Tokens that are farther apart now have a larger angle between them.
7. The positional signal doesn't die out either: the rotation is applied inside the softmax dot-product attention itself, so it's preserved at every layer.
8. What a gorgeous 😍 idea...
December 3, 2024 at 6:32 AM
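Here's a minimal numpy sketch of the rotation: Q and K get rotated by position-dependent angles inside attention, V is left alone, and the dot product then depends only on the relative angle between tokens. Function names and toy shapes are mine, not any particular library's API.

```python
import numpy as np

def rope_rotate(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate each (even, odd) channel pair by an angle proportional to position.
    Applied to Q and K only; V is left untouched."""
    d = x.shape[-1]
    freqs = 1.0 / base ** (np.arange(0, d, 2) / d)   # one frequency per pair
    angles = positions[:, None] * freqs[None, :]      # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin       # plain 2-D rotation:
    out[..., 1::2] = x_even * sin + x_odd * cos       # magnitude is preserved
    return out

# Inside attention: rotate Q and K, then dot-product as usual. The score now
# depends on the relative position of the tokens, not their absolute indices.
seq_len, d = 8, 16
q, k = np.random.randn(seq_len, d), np.random.randn(seq_len, d)
pos = np.arange(seq_len)
scores = rope_rotate(q, pos) @ rope_rotate(k, pos).T / np.sqrt(d)
```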
4. RoPE takes this operation from the beginning of the input to inside the attention operation itself.
5. There are 2 benefits. The first: the semantic meaning of the token is not corrupted, since we only rotate the vector, preserving its magnitude.
December 3, 2024 at 6:32 AM
TL;DR:
1. We need a way to encode token positions when feeding them as input into the transformer
2. We could just concat 1, 2, 3, etc., but this doesn't scale to variable lengths.
3. Noam Shazeer showed how sin and cos waves can produce a beautiful pattern that encodes relative positions between tokens.
December 3, 2024 at 6:32 AM
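For reference, a quick sketch of the sin/cos scheme from point 3 (the original Transformer's absolute positional encoding), in plain numpy with names of my own choosing:

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int, base: float = 10000.0) -> np.ndarray:
    """Classic sin/cos positional encoding: every position gets a unique wave
    pattern, and the encoding of position p+k is a fixed linear function of
    the encoding of position p, which lets attention read off relative offsets."""
    pos = np.arange(seq_len)[:, None]                           # (seq_len, 1)
    freqs = 1.0 / base ** (np.arange(0, d_model, 2) / d_model)  # (d_model/2,)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(pos * freqs)
    pe[:, 1::2] = np.cos(pos * freqs)
    return pe  # added to the token embeddings before the first layer
```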
fleetwood.dev
December 3, 2024 at 6:32 AM