Avery HW Ryoo
@averyryoo.bsky.social
i like generative models, science, and Toronto sports teams

phd @ mila/udem, prev. @ uwaterloo

averyryoo.github.io 🇨🇦🇰🇷
Finally, we show POSSM's performance on speech decoding - a long-context task that can quickly grow expensive for Transformers. In the unidirectional setting, POSSM beats the GRU baseline, achieving a phoneme error rate (PER) of 27.3 while being more robust to variation in preprocessing.
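
For reference, PER is the phoneme-level analogue of word error rate: the Levenshtein edit distance between the decoded and reference phoneme sequences, normalized by the reference length. A minimal sketch (the example sequences below are hypothetical, not from the paper):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free if phonemes match)
            )
    return dp[-1]

def per(ref, hyp):
    """PER = (substitutions + insertions + deletions) / reference length, in %."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)

# Hypothetical example: one deleted phoneme out of four -> PER of 25.0
print(per(["HH", "AH", "L", "OW"], ["HH", "AH", "OW"]))
```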

🧵6/7
June 6, 2025 at 5:40 PM
Cross-species transfer! 🐵➡️🧑

Excitingly, we find that POSSM pretrained solely on monkey reaching data achieves SOTA performance when decoding imagined handwriting in human subjects! This shows the potential of leveraging non-human primate (NHP) data to bootstrap human BCI decoding in low-data clinical settings.

🧵5/7
June 6, 2025 at 5:40 PM
By pretraining on 140 monkey reaching sessions, POSSM effectively transfers to new subjects and tasks, matching or outperforming several baselines (e.g., GRU, POYO, Mamba) across sessions.

✅ High R² across the board
✅ 9× faster inference than Transformers
✅ <5ms latency per prediction

🧵4/7
June 6, 2025 at 5:40 PM
POSSM combines the real-time inference of an RNN with the tokenization, pretraining, and finetuning abilities of a Transformer!

Using POYO-style tokenization, we encode spikes in 50ms windows and stream them to a recurrent model (e.g., Mamba, GRU) for fast, frequent predictions over time.
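
A minimal sketch of what this pipeline could look like (module names, shapes, and the mean-pooling step are my assumptions for illustration, not the authors' code - POYO itself pools spike tokens with cross-attention):

```python
import torch
import torch.nn as nn

class SpikeWindowEncoder(nn.Module):
    """POYO-style tokenization: each spike -> unit embedding + time embedding,
    pooled into one summary vector per 50ms window."""
    def __init__(self, n_units, d_model):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)  # learned per-neuron identity
        self.time_proj = nn.Linear(1, d_model)          # spike time within the window

    def forward(self, unit_ids, spike_times):
        # unit_ids: (n_spikes,) long, spike_times: (n_spikes,) float in [0, 0.05)
        tokens = self.unit_emb(unit_ids) + self.time_proj(spike_times.unsqueeze(-1))
        return tokens.mean(dim=0)  # crude pooling stand-in for cross-attention

class StreamingDecoder(nn.Module):
    """Recurrent backbone (a GRU here; Mamba would slot in the same way)."""
    def __init__(self, d_model, d_out):
        super().__init__()
        self.rnn = nn.GRUCell(d_model, d_model)
        self.readout = nn.Linear(d_model, d_out)

    def step(self, window_vec, h):
        h = self.rnn(window_vec, h)   # O(1) state update per 50ms window
        return self.readout(h), h     # fast, frequent predictions

# Hypothetical usage: one streaming step on a toy window of three spikes
enc = SpikeWindowEncoder(n_units=96, d_model=128)
dec = StreamingDecoder(d_model=128, d_out=2)  # e.g., 2D cursor velocity
h = torch.zeros(1, 128)
unit_ids = torch.tensor([3, 41, 3])
spike_times = torch.tensor([0.012, 0.019, 0.044])
window_vec = enc(unit_ids, spike_times).unsqueeze(0)  # (1, d_model)
pred, h = dec.step(window_vec, h)
```

The key property: the recurrent state carries context forward, so each new 50ms window costs a constant-time update rather than attention over the full history.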

🧵3/7
June 6, 2025 at 5:40 PM
The problem with existing decoders?

😔 RNNs offer efficient, causal inference, but rely on rigid, binned input formats - limiting generalization to new neurons or sessions.

😔 Transformers enable generalization via tokenization, but have high computational costs due to the attention mechanism.
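
To make the contrast concrete, here is a toy sketch of the two input formats (all shapes and values are hypothetical):

```python
import numpy as np

# Binned format: a fixed (time_bins x n_neurons) count matrix. The column
# order is tied to one session's neuron set, so new neurons or sessions
# break the learned input mapping.
binned = np.zeros((100, 96), dtype=np.int64)   # e.g., 100 bins x 96 channels
binned[1, 3] += 1                              # one spike from neuron 3, bin 1

# Tokenized format: a variable-length list of (unit_id, spike_time) events.
# New neurons are just new unit IDs/embeddings, but sequence length grows
# with spike count - the cost attention pays over the full history.
spike_tokens = [(3, 0.012), (41, 0.019), (3, 0.044)]
```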

🧵2/7
June 6, 2025 at 5:40 PM
New preprint! 🧠🤖

How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?

We present POSSM, a hybrid state-space model (SSM) architecture that optimizes for all three of these axes!

🧵1/7
June 6, 2025 at 5:40 PM
Very late, but had a 🔥 time at my first Cosyne presenting my work with @nandahkrishna.bsky.social, Ximeng Mao, @mattperich.bsky.social, and @glajoie.bsky.social on real-time neural decoding with hybrid SSMs. Keep an eye out for a preprint (hopefully) soon 👀

#Cosyne2025 @cosynemeeting.bsky.social
April 4, 2025 at 5:21 AM
Just a couple days until Cosyne - stop by [3-083] this Saturday and say hi! @nandahkrishna.bsky.social
March 24, 2025 at 6:19 PM
How can large-scale models + datasets revolutionize neuroscience 🧠🤖🌐? We are excited to announce our workshop: “Building a foundation model for the brain: datasets, theory, and models” at @cosynemeeting.bsky.social #COSYNE2025. Join us in Mont-Tremblant, Canada from March 31 – April 1!
March 10, 2025 at 7:55 PM
sinthlab EoY social! I'm grateful every day that I get to work with such a kind and intelligent group of individuals.

@mattperich.bsky.social @oliviercodol.bsky.social @anirudhgj.bsky.social
December 12, 2024 at 7:24 PM
Thrilled to share a new preprint exploring the spatial organization of multisensory convergence in the mouse isocortex! 🧠🎉 Even more special as it builds on work I started during my undergrad, a lifetime ago 👨‍🦳

Check it out here: www.biorxiv.org/content/10.1...
December 10, 2024 at 10:41 PM
Some proof of how much I enjoyed Bixis in the ~7 months I used them this year 😌🚴‍♂️
December 10, 2024 at 2:10 AM