Mashbayar Tugsbayar
@tmshbr.bsky.social
PhD student in NeuroAI @Mila & McGill w/ Blake Richards. Top-down feedback and brainlike connectivity in ANNs.
Reposted by Mashbayar Tugsbayar
I’m grateful to share that our paper has been published in Nature. This work formed the core of my PhD research at McGill University.

We show that hippocampal neurons that initially encode reward progressively reorganize to reflect predictive representations of reward during learning.
January 16, 2026 at 6:48 PM
Reposted by Mashbayar Tugsbayar
Our paper on the "Oneirogen hypothesis" is now up in its revised form on eLife!

This is the hypothesis that psychedelics induce a dream-like state, which we show via modelling could explain a variety of perceptual and learning effects of such drugs.

elifesciences.org/reviewed-pre...

🧠📈 🧪
The oneirogen hypothesis: modeling the hallucinatory effects of classical psychedelics in terms of replay-dependent plasticity mechanisms
elifesciences.org
January 14, 2026 at 3:32 PM
Reposted by Mashbayar Tugsbayar
Are you thinking about doing neuroscience outreach but want to make it more exciting or hands-on?

Check out RetINaBox! (A collab led by the Trenholm lab)

We tried to bring the experience of experimental neuroscience to a classroom setting:

www.eneuro.org/content/13/1...

#neuroscience 🧪
RetINaBox: A Hands-On Learning Tool for Experimental Neuroscience
An exciting aspect of neuroscience is developing and testing hypotheses via experimentation. However, due to logistical and financial hurdles, the experiment and discovery component of neuroscience is...
www.eneuro.org
January 13, 2026 at 2:56 PM
Reposted by Mashbayar Tugsbayar
🚨 New preprint alert!

🧠🤖
We propose a theory of how the learning curriculum affects generalization through neural population dimensionality. The learning curriculum is a determining factor of neural dimensionality: where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
September 30, 2025 at 2:26 PM
Reposted by Mashbayar Tugsbayar
🧠🤖 Computational Neuroscience summer school IMBIZO in Cape Town is open for applications again!
 
💻🧬 3 weeks of intense coursework & projects with support from expert tutors and faculty
 
📈Apply until July 1st!

🔗https://imbizo.africa/
May 8, 2025 at 8:19 AM
Reposted by Mashbayar Tugsbayar
Want to spend 3 weeks in South Africa for an unforgettable summer school experience? Imbizo 2026 (imbizo.africa) student applications are OPEN! Lectures, new friends, and Noordhoek beach await. Apply by July 1!

More info and apply: imbizo.africa/apply/

#Imbizo2026 #CompNeuro
May 1, 2025 at 10:06 AM
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈

Our new paper explores this: elifesciences.org/reviewed-pre...
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
elifesciences.org
April 15, 2025 at 8:11 PM
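To make the idea concrete, here is a minimal sketch of what "biologically-inspired top-down feedback" can look like in a DNN (my own illustration under simple assumptions, not the model from the paper): a higher layer sends a signal back to a lower layer that acts as a multiplicative gain on its activity rather than as a driving input, and the network is run for a few recurrent steps so the feedback can take effect. All layer sizes, weight names, and the modulation rule are illustrative.

```python
# Minimal sketch (not the paper's model): a feedforward net where a higher
# layer sends top-down feedback that modulates a lower layer's activity.
# All layer sizes, weights, and the modulation rule here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Feedforward weights: input -> hidden -> output
W_in = rng.normal(scale=0.1, size=(64, 128))    # input (64) to hidden (128)
W_out = rng.normal(scale=0.1, size=(128, 10))   # hidden (128) to output (10)
# Top-down feedback weights: output layer back to hidden layer
W_fb = rng.normal(scale=0.1, size=(10, 128))

def relu(x):
    return np.maximum(0.0, x)

def forward_with_feedback(x, n_steps=3):
    """Run a few recurrent steps so top-down feedback can shape the hidden layer.

    Feedback acts multiplicatively (a gain on hidden units) rather than as a
    driving input, one common way of modeling 'modulatory' top-down signals.
    """
    hidden = relu(x @ W_in)          # first pass: purely feedforward
    output = hidden @ W_out
    for _ in range(n_steps):
        gain = 1.0 + np.tanh(output @ W_fb)   # top-down modulation of hidden units
        hidden = relu(x @ W_in) * gain        # bottom-up drive, gated by feedback
        output = hidden @ W_out
    return output

x = rng.normal(size=(64,))             # a single dummy input
print(forward_with_feedback(x).shape)  # (10,)
```

Whether feedback is modulatory or driving, and which layers it targets, are exactly the kinds of connectivity choices such models can vary.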
Reposted by Mashbayar Tugsbayar
Excited to share our new pre-print on bioRxiv, in which we reveal that feedback-driven motor corrections are encoded in small, previously missed neural signals.
April 7, 2025 at 2:55 PM
Reposted by Mashbayar Tugsbayar
Are you training self-supervised/foundation models, and worried about whether they are learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
April 1, 2025 at 6:24 PM
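For a flavour of what a "representation quality metric" can mean here is a generic sketch (not Reptrix's actual API; see the linked repo for the real interface) of one common metric, the participation ratio, which estimates the effective dimensionality of learned features from their covariance spectrum. The toy data and all names are illustrative.

```python
# Generic sketch (not Reptrix's API): the participation ratio, a common
# estimate of the effective dimensionality of a set of feature vectors.
import numpy as np

def participation_ratio(features):
    """Effective dimensionality of features (n_samples x n_units):
    (sum of eigenvalues)^2 / sum of squared eigenvalues of the covariance."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(features) - 1)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
# Low-rank toy "representations": 1000 samples embedded in 128 units,
# but generated from only 10 latent directions plus a little noise.
latents = rng.normal(size=(1000, 10))
mixing = rng.normal(size=(10, 128))
features = latents @ mixing + 0.01 * rng.normal(size=(1000, 128))
print(participation_ratio(features))  # much closer to 10 than to 128
```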
Reposted by Mashbayar Tugsbayar
At #Cosyne2025? Come by my poster today (3-047) to hear how sequential predictive learning produces a continuous neural manifold with the ability to generate replay during sleep, and spatial representations that "sweep" ahead to future positions. All from sensory information alone!
March 29, 2025 at 1:30 PM
Reposted by Mashbayar Tugsbayar
Very excited for the upcoming Cosyne in Montreal! I’ll be presenting my poster [2-126], “Brain-like neural dynamics for behavioral control develop through reinforcement learning,” during the Friday session at 13:15.

Feel free to drop by! The related pre-print is also out:
www.biorxiv.org/content/10.1...
Brain-like neural dynamics for behavioral control develop through reinforcement learning
During development, neural circuits are shaped continuously as we learn to control our bodies. The ultimate goal of this process is to produce neural dynamics that enable the rich repertoire of behavi...
www.biorxiv.org
March 26, 2025 at 10:58 PM
Reposted by Mashbayar Tugsbayar
📢 We have a new #NeuroAI postdoctoral position in the lab!

If you have a strong background in #NeuroAI or computational neuroscience, I’d love to hear from you.

(Repost please)

🧠📈🤖
March 14, 2025 at 1:02 PM
Reposted by Mashbayar Tugsbayar
The problem with current SSL? It's hungry. Very hungry. 🤖

Training time: Weeks
Dataset size: Millions of images
Compute costs: 💸💸💸

Our #NeurIPS2024 poster makes SSL pipelines 2x faster and achieves similar accuracy at 50% pretraining cost! 💪🏼✨
🧵 1/8
December 13, 2024 at 3:44 AM
Reposted by Mashbayar Tugsbayar
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological realities such as Dale’s law and log-normally distributed weights. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12 #neuroscience 🧪
October 28, 2024 at 5:18 PM
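For intuition on the GD vs. EG distinction (a toy example of my own, not the paper's implementation): gradient descent updates weights additively and can flip their signs, whereas exponentiated gradient updates them multiplicatively, so weights keep their sign (compatible with Dale's law) and move additively in log-space. The learning rates and the toy regression problem below are arbitrary.

```python
# Toy sketch (not the paper's implementation) contrasting additive gradient
# descent with a multiplicative exponentiated-gradient (EG) update on a
# single weight vector fit to a linear regression problem.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.abs(rng.normal(size=5))   # target weights (all positive)
w_gd = np.abs(rng.normal(size=5))     # start both learners from positive weights
w_eg = w_gd.copy()
X = rng.normal(size=(100, 5))
y = X @ w_true

def grad(w):
    """Gradient of the mean squared error 0.5 * mean((Xw - y)^2)."""
    return X.T @ (X @ w - y) / len(y)

for _ in range(200):
    # Gradient descent: additive update, weights are free to change sign
    w_gd -= 0.05 * grad(w_gd)
    # Exponentiated gradient: multiplicative update, weights keep their sign
    # and evolve on a log scale (log w changes additively)
    w_eg *= np.exp(-0.05 * grad(w_eg))

print("GD error:", np.linalg.norm(w_gd - w_true))
print("EG error:", np.linalg.norm(w_eg - w_true))
```

The key structural difference is that the EG weights can never change sign during learning, no matter the gradient.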