Arna Ghosh
@arnaghosh.bsky.social
PhD student at Mila & McGill University, Vanier scholar • 🧠+🤖 grad student• Ex-RealityLabs, Meta AI • Believer in Bio-inspired AI • Comedy+Cricket enthusiast
Reposted by Arna Ghosh
Conrad Hal Waddington was born OTD in 1905.
His “epigenetic landscape” is a diagrammatic representation of the constraints influencing embryonic development.
On his 50th birthday, his colleagues gave him a pinball machine on the model of the epigenetic landscape.
🧪 🦫🦋 🌱🐋 #HistSTM #philsci #evobio
November 8, 2025 at 4:03 PM
Reposted by Arna Ghosh
I’m looking for interns to join our lab for a project on foundation models in neuroscience.
Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).
Interested? See the details in the comments. (1/3)
🧠🤖
AI and Neuroscience | IVADO
ivado.ca
November 7, 2025 at 1:52 PM
Reposted by Arna Ghosh
A tad late (announcements coming) but very happy to share the latest developments in my previous preprint!
Previously, we showed that neural representations for control of movement are largely distinct following supervised versus reinforcement learning; the latter most closely matches non-human primate (NHP) recordings.
Here’s our latest work from @glajoie.bsky.social and @mattperich.bsky.social's labs! Excited to see this out.
We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties.
www.biorxiv.org/content/10.1...
Brain-like neural dynamics for behavioral control develop through reinforcement learning
During development, neural circuits are shaped continuously as we learn to control our bodies. The ultimate goal of this process is to produce neural dynamics that enable the rich repertoire of behavi...
www.biorxiv.org
November 6, 2025 at 2:10 AM
LLMs are trained to compress data by mapping sequences to high-dim representations!
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.
🧵below
#AIResearch #MachineLearning #LLM
October 31, 2025 at 4:19 PM
Very cool study, with interesting insights about theta sequences and learning!
1/
🚨 New preprint! 🚨
Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇
📄 www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩
#neuroskyence
September 19, 2025 at 7:07 AM
Reposted by Arna Ghosh
Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs.
This group got it working!
arxiv.org/abs/2506.17768
May be a great way to reduce AI energy use!!!
#MLSky 🧪
Log-Normal Multiplicative Dynamics for Stable Low-Precision Training of Large Networks
Studies in neuroscience have shown that biological synapses follow a log-normal distribution whose transitioning can be explained by noisy multiplicative dynamics. Biological networks can function sta...
arxiv.org
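For readers unfamiliar with exponentiated gradient (EG): instead of adding the gradient to the weights, EG rescales them multiplicatively, so a weight can shrink toward zero but never flip sign, and repeated noisy multiplicative updates naturally produce log-normal-like weight distributions. A minimal sketch of the positive-weight update (the learning rate, shapes, and the restriction to positive weights are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def gd_step(w, grad, lr=0.1):
    """Gradient descent: additive update; weights can drift through zero."""
    return w - lr * grad

def eg_step(w, grad, lr=0.1):
    """Exponentiated gradient (positive weights): multiplicative update.
    Since exp(.) > 0, weights stay positive no matter the gradient."""
    return w * np.exp(-lr * grad)

w = np.full(5, 0.5)              # positive synaptic weights
rng = np.random.default_rng(0)
for _ in range(100):             # noisy gradients never push w below zero
    w = eg_step(w, rng.normal(size=5))
```

Signed weights need a slightly richer scheme (e.g. separate excitatory and inhibitory populations), which is part of why this pairing with Dale's law is natural.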
July 9, 2025 at 2:34 PM
This looks like a very cool result! 😀
Can't wait to read in detail.
Does the brain learn by gradient descent?
It's a pleasure to share our paper at @cp-cell.bsky.social, showing how mice learning over long timescales display key hallmarks of gradient descent (GD).
The culmination of my PhD supervised by @laklab.bsky.social, @saxelab.bsky.social and Rafal Bogacz!
Dopamine encodes deep network teaching signals for individual learning trajectories
Longitudinal tracking of long-term learning behavior and striatal dopamine reveals that dopamine teaching signals shape individually diverse yet systematic learning trajectories, captured mathematical...
www.cell.com
June 15, 2025 at 11:03 PM
Preprint Alert 🚀
Multi-agent reinforcement learning (MARL) often assumes that agents know when other agents cooperate with them. But for humans, this isn’t always the case. For example, Plains Indigenous groups used to leave resources for others to use at effigies called Manitokan.
1/8
June 9, 2025 at 6:16 AM
Reposted by Arna Ghosh
Research Scientist, Paradigms of Intelligence — Google Careers
www.google.com
May 8, 2025 at 9:23 PM
Also, big shoutout to @quentin-garrido.bsky.social + gang and @aggieinca.bsky.social + gang for developing RankMe and LiDAR, respectively.
Reptrix incorporates these representation quality metrics. 🚀
Let's make it easier to select good SSL/foundation models. 💪
Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
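Reptrix's actual API isn't shown in this thread, but as an illustration of the kind of metric it wraps, here is a RankMe-style effective rank: the exponential of the entropy of the normalized singular-value spectrum of a batch of representations. Collapsed representations concentrate their spectrum and score low. (The matrix sizes and thresholds below are arbitrary choices for the demo.)

```python
import numpy as np

def effective_rank(Z, eps=1e-7):
    """RankMe-style effective rank of representations Z (n_samples x d):
    exp of the Shannon entropy of the normalized singular-value spectrum.
    Higher values mean information is spread over more dimensions."""
    s = np.linalg.svd(Z, compute_uv=False)
    p = s / (s.sum() + eps) + eps
    return float(np.exp(-np.sum(p * np.log(p))))

rng = np.random.default_rng(0)
healthy = rng.normal(size=(256, 64))                              # near-full-rank features
collapsed = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 64))  # rank-4 collapse
```

The appeal of metrics like this is that they need no labels, so they can flag representation collapse during self-supervised pretraining itself.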
April 1, 2025 at 8:58 PM
April 1, 2025 at 6:24 PM
Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
Super cool paper!
It formalizes a lot of ideas I have been mulling over the past year, and connects tons of historical ideas neatly.
Definitely worth a read if you are working/interested in mechanistic interp and neural representations.
🔵 New paper! We explore sparse coding, superposition, and the Linear Representation Hypothesis (LRH) through identifiability theory, compressed sensing, and interpretability. If you’re curious about lifting neural reps out of superposition, this might interest you! 🤓
arxiv.org/abs/2503.01824
From superposition to sparse codes: interpretable representations in neural networks
Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence sugg...
arxiv.org
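For readers new to superposition: the core idea is that a layer can represent many more sparse "features" than it has dimensions by assigning them nearly-orthogonal random directions, at the cost of small interference between them. A toy illustration, not the paper's construction (feature counts, dimensions, and thresholds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, dim = 400, 100          # 4x more features than dimensions

# Random unit directions: high-dimensional random vectors are nearly orthogonal.
D = rng.normal(size=(n_features, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Activate a sparse subset of features and superpose them in a single vector.
active = [3, 77, 250]
x = D[active].sum(axis=0)

# Linear readout by correlation: active features stand out above interference.
scores = D @ x
```

The interference terms are what identifiability/compressed-sensing arguments have to control when "lifting" representations back out of superposition.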
March 8, 2025 at 6:01 AM
Just over a week since I defended my 🤖+🧠PhD thesis, and the feeling is just sinking in. Extremely grateful to
@tyrellturing.bsky.social for supporting me through this amazing journey! 🙏
Big thanks to all members of the LiNC lab, and colleagues at McGill University and @mila-quebec.bsky.social. ❤️😁
February 10, 2025 at 12:36 PM
Reposted by Arna Ghosh
Just giving this a boost for those who may not have seen it yet... we have a PI position (molecular and cellular basis of cognition) at The Hospital for Sick Children (Toronto). The position comes with an appointment at Assist/Assoc Prof level at U of T. Share widely!
can-acn.org/scientist-se...
Scientist/Senior Scientist – Research Institute, Hospital for Sick Children, University of Toronto – Canadian Association for Neuroscience
can-acn.org
January 23, 2025 at 2:16 PM
Come say hi at the noon poster session today in the East hall, poster #2201. 🚀
#NeurIPS2024
The problem with current SSL? It's hungry. Very hungry. 🤖
Training time: Weeks
Dataset size: Millions of images
Compute costs: 💸💸💸
Our #NeurIPS2024 poster makes SSL pipelines 2x faster and achieves similar accuracy at 50% pretraining cost! 💪🏼✨
🧵 1/8
December 13, 2024 at 6:47 PM
December 13, 2024 at 3:44 AM
The problem with current SSL? It's hungry. Very hungry. 🤖
Training time: Weeks
Dataset size: Millions of images
Compute costs: 💸💸💸
Our #NeurIPS2024 poster makes SSL pipelines 2x faster and achieves similar accuracy at 50% pretraining cost! 💪🏼✨
🧵 1/8
Reposted by Arna Ghosh
Learning by reconstruction captures uninformative details in your data. This “attention to detail” biases the ViT’s attention. Our solution: a new token aggregator that significantly improves MAE linear-probe performance and slightly improves JEPAs like I-JEPA.
arxiv.org/abs/2412.03215
December 5, 2024 at 6:47 PM
Reposted by Arna Ghosh
Delighted to be in Leeds joining the School of Computing! Fantastic first impressions — like a "less offensive London" (youtu.be/watch?v=_6_VVLgrgFI). Stay tuned for a PhD position starting next October. Meanwhile, drop me a message with your CV and research interests—I'd love to hear from you!
Pleased to welcome @repromancer.bsky.social - a computational neuroscientist - to the @universityofleeds.bsky.social today - working on the boundary of neuroscience and AI. Welcome Jonathan!
November 29, 2024 at 4:02 PM
What's your favourite way to merge two (or more) BibTeX files?
I want to merge BibTeX files from two different papers without having to manually remove repeated entries.
Even better if there's a browser/web tool that does it. :)
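One low-tech option, if you'd rather not install anything: a small Python script that keeps the first entry seen for each citation key. This is a naive parser (it assumes each entry starts with '@' at the beginning of a line and would be confused by a stray '@' inside a field value), so treat it as a sketch; the file names are placeholders:

```python
import re
from pathlib import Path

def parse_entries(text):
    """Naively split a .bib file into {citation_key: entry_text}.
    Assumes each entry starts with '@type{key,' at the start of a line."""
    entries = {}
    pattern = r'@\w+\s*\{\s*([^,\s]+)\s*,.*?(?=\n@|\Z)'
    for m in re.finditer(pattern, text, re.S):
        entries[m.group(1).lower()] = m.group(0).strip()
    return entries

def merge_bib(paths, out_path):
    """Merge several .bib files; on duplicate keys, the first occurrence wins."""
    merged = {}
    for p in paths:
        for key, entry in parse_entries(Path(p).read_text()).items():
            merged.setdefault(key, entry)
    Path(out_path).write_text('\n\n'.join(merged.values()) + '\n')

# merge_bib(["paper1.bib", "paper2.bib"], "merged.bib")  # file names are placeholders
```

Note this only deduplicates identical citation keys; the same paper cited under two different keys would still need a manual pass.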
November 10, 2024 at 9:03 PM
Thanks for the kind words!! ☺️
I cannot say enough about how grateful I am to be in @tyrellturing.bsky.social's lab. Absolutely fantastic mentor, great members, inspiring science and discussions. 🥹 🚀
Shoutout to Roman Pogodin for his brilliant insights and collaboration 🔥 & PhD star @arnaghosh.bsky.social — an incredible talent! Gratitude to all the incredible members of @tyrellturing.bsky.social's lab and the inspiring environment at Mila for making this work possible 🙌 11/12 #TeamScience
October 31, 2024 at 4:53 PM
Looking for PhD/postdoc positions at the intersection of #AI and #compneuro?
@repromancer.bsky.social is an excellent scientist and an awesome mentor! 🚀
Finally, I’m thrilled to announce I’ll be joining Leeds University's School of Computing to work on resource efficient AI and theories of biological intelligence! If you’re interested in work like this, I’ll be recruiting PhD students and postdocs. Please reach out to me or forward this on! 12/12
October 31, 2024 at 4:46 PM
One of the most exciting projects I have been part of. Really excited that this is now out! 🤩
Extremely grateful to @repromancer.bsky.social for roping me in for the ride! 🙏
It is indeed a great time to be in #NeuroAI, can't wait to see how recent advances shape the fields of #compneuro and #AI
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological realities like Dale’s law and log-normal weights. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12 #neuroscience 🧪
October 31, 2024 at 4:41 PM