Mashbayar Tugsbayar
tmshbr.bsky.social
PhD student in NeuroAI @Mila & McGill w/ Blake Richards. Top-down feedback and brainlike connectivity in ANNs.
I love ResNets too, but I'm floored they're cited more than transformers, CNNs, and the DSM-5!
April 16, 2025 at 6:07 PM
The model uses ReLU activation like standard DNNs and doesn’t spike. The way we modeled it, feedback would provide a very small amount of driving input but otherwise just gain-modulate neurons already activated by feedforward input.
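The mechanism described above can be sketched in a few lines. This is a minimal, illustrative numpy sketch, not the paper's actual implementation: the functional form and the `drive_frac` parameter are assumptions chosen to show the idea that feedback mostly gain-modulates already-active units while contributing only a small driving component.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def modulated_response(ff_input, fb_input, drive_frac=0.1):
    """Hypothetical sketch of modulatory top-down feedback.

    Feedback mainly gain-modulates units already activated by the
    feedforward path (multiplicative term), while providing only a
    small amount of direct drive (additive term). `drive_frac` is an
    illustrative parameter, not taken from the paper.
    """
    ff = relu(ff_input)                  # standard ReLU feedforward response
    gain = 1.0 + relu(fb_input)          # multiplicative gain from feedback
    drive = drive_frac * relu(fb_input)  # small additive driving component
    return ff * gain + drive

# A unit with no feedforward drive is barely affected by feedback,
# while a feedforward-activated unit is strongly amplified.
print(modulated_response(np.array([0.0, 1.0]), np.array([2.0, 2.0])))
```

Under this toy formulation, silencing the feedforward input leaves only the small additive term, which matches the intuition that feedback alone cannot substitute for driving input.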
April 16, 2025 at 4:28 PM
Last but not least, thank you to @tyrellturing.bsky.social and @neuralensemble.bsky.social!
April 15, 2025 at 8:40 PM
We'd like to thank @elife.bsky.social and the reviewers for a very constructive review experience. As well, thanks to our funders, in particular HIBALL, CIFAR, and NSERC. This work was supported with computational resources by @mila-quebec.bsky.social and the Digital Research Alliance of Canada.
April 15, 2025 at 8:36 PM
These results show that modulatory top-down feedback has unique computational implications. As such, we believe that top-down feedback should be incorporated into DNN models of the brain more often. Our code base makes that easy!
April 15, 2025 at 8:30 PM
We found that top-down feedback, as implemented in our models, helps to determine the set of solutions available to the networks and the regional specializations that they develop.
April 15, 2025 at 8:30 PM
To summarize, we built a codebase for creating DNNs with top-down feedback, and we used it to examine the impact of top-down feedback on audio-visual integration tasks.
April 15, 2025 at 8:30 PM
The models were then trained to identify either the auditory or visual stimuli based on an attention cue. The visual bias not only persisted, but also helped the brainlike model learn to ignore distracting audio more quickly than the other models.
April 15, 2025 at 8:29 PM
We found that the brain-based model still had a visual bias even after being trained on auditory tasks. However, this bias didn't hamper the model's overall performance, and it mirrors the consistently observed human visual bias (Posner et al., 1974).
April 15, 2025 at 8:27 PM
Conversely, when trained on a similar set of auditory categorization tasks, the human brain-based model was the best at integrating helpful visual information to resolve auditory ambiguity.
April 15, 2025 at 8:27 PM
Interestingly, compared to other models, the human brain-based model was particularly proficient at ignoring irrelevant audio stimuli that didn’t help to resolve ambiguities.
April 15, 2025 at 8:25 PM
To test the impact of different anatomies of modulatory feedback, we compared the performance of a model based on human anatomy with identically sized models with different configurations of feedback/feedforward connectivity.
April 15, 2025 at 8:23 PM
As an initial test, we wanted to see how using modulatory feedback could impact computation. To do this, we built an audio-visual model, based on human anatomy from the BigBrain and MICA-MICs datasets, and trained it to classify ambiguous stimuli.
April 15, 2025 at 8:21 PM
Each brain region is a recurrent convolutional network, and can receive two different types of input: driving feedforward and modulatory feedback. With this code, users can input macroscopic connectivity to build anatomically constrained DNNs.
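The region-as-module idea above can be sketched as follows. This is a hypothetical numpy sketch of the design, not the codebase's real API: the `Region` class, its `step` method, and the dense recurrent weights stand in for the actual recurrent convolutional implementation, purely to show how one module can treat driving feedforward and modulatory feedback inputs differently.

```python
import numpy as np

class Region:
    """Illustrative sketch of one brain-region module (not the real API).

    Each region keeps a recurrent state and accepts two input types:
    driving feedforward input, which sets the base response together
    with lateral recurrence, and modulatory feedback, which scales
    that response multiplicatively.
    """

    def __init__(self, size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_rec = 0.1 * rng.standard_normal((size, size))  # lateral recurrence
        self.state = np.zeros(size)

    def step(self, ff_drive, fb_mod):
        # Driving input plus lateral recurrence determines the base response.
        base = np.maximum(0.0, ff_drive + self.W_rec @ self.state)
        # Modulatory feedback gain-modulates the already-driven response.
        self.state = base * (1.0 + np.maximum(0.0, fb_mod))
        return self.state

region = Region(3)
out = region.step(np.ones(3), np.zeros(3))  # feedforward only: base response
```

Wiring several such modules together according to a macroscopic connectivity matrix, with each directed connection labeled feedforward or feedback, is the kind of anatomically constrained construction the post describes.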
April 15, 2025 at 8:20 PM
To model top-down feedback in neocortex, we built a freely available codebase that can be used to construct multi-input, topological, top-down and laterally recurrent DNNs that mimic neural anatomy. (github.com/masht18/conn...)
April 15, 2025 at 8:18 PM
What does it mean to have "biologically inspired top-down feedback"? In the brain, feedback does not drive pyramidal neurons directly; instead, it modulates the feedforward signal, both multiplicatively and additively, as described in Larkum et al., 2004.
April 15, 2025 at 8:18 PM