Avery HW Ryoo
@averyryoo.bsky.social
i like generative models, science, and Toronto sports teams
phd @ mila/udem, prev. @ uwaterloo
averyryoo.github.io 🇨🇦🇰🇷
Pinned
Avery HW Ryoo
@averyryoo.bsky.social
· Jun 6
New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7
Reposted by Avery HW Ryoo
🚨 New preprint alert!
🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈
A 🧵:
tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
September 30, 2025 at 2:26 PM
Reposted by Avery HW Ryoo
Excited to share that POSSM has been accepted to #NeurIPS2025! See you in San Diego 🏖️
New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7
September 20, 2025 at 3:40 PM
Reposted by Avery HW Ryoo
I'm very excited to announce the publication of our new book Neural Interfaces, published by Elsevier. The book is a comprehensive resource for all those interested and gravitating around neural interfaces and brain-computer interfaces (BCIs).
shop.elsevier.com/books/neural...
Neural Interfaces
Neural Interfaces is a comprehensive book on the foundations, major breakthroughs, and most promising future developments of neural interfaces. The bo
shop.elsevier.com
August 19, 2025 at 8:18 PM
Reposted by Avery HW Ryoo
Step 1: Understand how scaling improves LLMs.
Step 2: Directly target underlying mechanism.
Step 3: Improve LLMs independent of scale. Profit.
In our ACL 2025 paper we look at Step 1 in terms of training dynamics.
Project: mirandrom.github.io/zsl
Paper: arxiv.org/pdf/2506.05447
July 12, 2025 at 6:44 PM
Reposted by Avery HW Ryoo
(1/n)🚨Train a model solving DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
June 10, 2025 at 7:49 PM
Reposted by Avery HW Ryoo
Preprint Alert 🚀
Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate with them. But for humans, this isn’t always the case. For example, plains indigenous groups used to leave resources for others to use at effigies called Manitokan.
1/8
June 5, 2025 at 3:32 PM
New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7
June 6, 2025 at 5:40 PM
Reposted by Avery HW Ryoo
I am joining @ualberta.bsky.social as a faculty member and
@amiithinks.bsky.social!
My research group is recruiting MSc and PhD students at the University of Alberta in Canada. Research topics include generative modeling, representation learning, interpretability, inverse problems, and neuroAI.
May 29, 2025 at 6:53 PM
Reposted by Avery HW Ryoo
Scaling models across multiple animals was a major step toward building neuro-foundation models; the next frontier is enabling multi-task decoding to expand the scope of training data we can leverage.
Excited to share our #ICLR2025 Spotlight paper introducing POYO+ 🧠
poyo-plus.github.io
🧵
POYO+
POYO+: Multi-session, multi-task neural decoding from distinct cell-types and brain regions
poyo-plus.github.io
April 25, 2025 at 10:14 PM
Reposted by Avery HW Ryoo
Interested in foundation models for #neuroscience? Want to contribute to the development of the next-generation of multi-modal models? Come join us at IVADO in Montreal!
We're hiring a full-time machine learning specialist for this work.
Please share widely!
#NeuroAI 🧠📈 🧪
🔍 [Job Offer] #MachineLearning Specialist.
Join the IVADO Research Regroupement - AI and Neuroscience (R1) to develop foundational models in the field of neuroscience.
More info: ivado.ca/2025/04/08/s...
#JobOffer #AI #Neuroscience #Research #MachineLearning
Machine Learning Specialist | IVADO
ivado.ca
April 11, 2025 at 4:17 PM
Reposted by Avery HW Ryoo
📽️Recordings from our
@cosynemeeting.bsky.social
#COSYNE2025 workshop on “Agent-Based Models in Neuroscience: Complex Planning, Embodiment, and Beyond" are now online: neuro-agent-models.github.io
🧠🤖
April 7, 2025 at 8:58 PM
Reposted by Avery HW Ryoo
Talk recordings from our COSYNE Workshop on Neuro-foundation Models 🌐🧠 are now up on the workshop website!
neurofm-workshop.github.io
April 5, 2025 at 12:41 AM
Very late, but had a 🔥 time at my first Cosyne presenting my work with @nandahkrishna.bsky.social, Ximeng Mao, @mattperich.bsky.social, and @glajoie.bsky.social on real-time neural decoding with hybrid SSMs. Keep an eye out for a preprint (hopefully) soon 👀
#Cosyne2025 @cosynemeeting.bsky.social
April 4, 2025 at 5:21 AM
Reposted by Avery HW Ryoo
Excited to be at #Cosyne2025 for the first time! I'll be presenting my poster [2-104] during the Friday session. E-poster here: www.world-wide.org/cosyne-25/se...
March 27, 2025 at 7:53 PM
Reposted by Avery HW Ryoo
We'll be presenting two projects at #Cosyne2025, representing two main research directions in our lab:
🧠🤖 🧠📈
1/3
March 27, 2025 at 7:13 PM
Just a couple days until Cosyne - stop by [3-083] this Saturday and say hi! @nandahkrishna.bsky.social
March 24, 2025 at 6:19 PM
Reposted by Avery HW Ryoo
This will be a more difficult Cosyne than normal, due to both the travel restrictions for people coming from the US and the strike that may be happening at the hotel in Montreal.
But, we can still make this an awesome meeting as usual, y'all. Let's pull together and make it happen!
🧠📈
#Cosyne2025
March 23, 2025 at 9:26 PM
Reposted by Avery HW Ryoo
Join us at #COSYNE2025 to explore recent advancements in large-scale training and analysis of brain data! 🧠🟦
We also made a starter pack with (most of) our speakers: go.bsky.app/Ss6RaEF
March 10, 2025 at 9:21 PM
How can large-scale models + datasets revolutionize neuroscience 🧠🤖🌐? We are excited to announce our workshop: “Building a foundation model for the brain: datasets, theory, and models” at @cosynemeeting.bsky.social #COSYNE2025. Join us in Mont-Tremblant, Canada from March 31 – April 1!
March 10, 2025 at 7:55 PM
Hi! Looking for an undergrad volunteer who's interested in working with SSMs + transformers for neural decoding/BCIs at Mila! Strong coding + PyTorch skills are a must. Please DM/email me your CV + interests (priority given to those based in Montréal). Thanks! 🧠🤖
January 31, 2025 at 10:16 PM
Really excited to read this paper - composing multiple diffusion models/EBMs is something I've been really interested in lately. I think there's potential in this direction for improving the controllability/interpretability of your generation process + mixing and matching pre-trained modules.
🧵(1/7) Have you ever wanted to combine different pre-trained diffusion models but don't have time or data to retrain a new, bigger model?
🚀 Introducing SuperDiff 🦹♀️ – a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
December 28, 2024 at 3:38 PM
Some exciting news in time for the holidays 🎄🎁☃️
I'll be at Cosyne 2025 (@cosynemeeting.bsky.social) to present our work on generalizable real-time decoding for BCIs 🧠🦾
Really looking forward to seeing everyone in Montréal 🇨🇦! Stay tuned for more details in the new year🤘
December 25, 2024 at 3:37 AM
Reposted by Avery HW Ryoo
(whipping my head around in a crowded party to figure out who just said 'Cocktail Party Effect')
December 20, 2024 at 6:16 PM
Reposted by Avery HW Ryoo
Study computer science to avoid writing essays -> Need to write good grant essays to continue studying computer science
December 17, 2024 at 4:21 PM
Reposted by Avery HW Ryoo
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
BreimanLectureNeurIPS2024_Doucet.pdf
drive.google.com
December 15, 2024 at 6:40 PM