Shahab Bakhtiari
@shahabbakht.bsky.social
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) 🐌 || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
Pinned
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory for how learning curriculum influences learning generalization.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
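Not from the paper itself, but as a rough illustration of the kind of dimensionality measure the thread refers to, here is a minimal sketch computing the participation ratio of a readout population's responses; all data below are random placeholders, and the low-/high-dimensional comparison is purely hypothetical.

```python
import numpy as np

def participation_ratio(responses):
    """Effective dimensionality of population responses.

    responses: (n_trials, n_units) array of readout activity.
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the response covariance matrix.
    """
    cov = np.cov(responses, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Hypothetical comparison (random placeholders, not data from the paper):
# responses confined to a low-dimensional subspace vs. unconstrained responses.
rng = np.random.default_rng(0)
low_dim = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 100))
high_dim = rng.normal(size=(500, 100))
print(f"low-dim PR:  {participation_ratio(low_dim):.1f}")
print(f"high-dim PR: {participation_ratio(high_dim):.1f}")
```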
Reposted by Shahab Bakhtiari
Breaking: we release SYNTH, a fully synthetic generalist dataset for pretraining, and two new SOTA reasoning models trained exclusively on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 5:30 PM
Reposted by Shahab Bakhtiari
My reviewing style has changed over time. Rather than litigating every little thing and pushing my own ideas, I focus on only two things:
(1) Are the claims interesting/important?
(2) Does the evidence support the claims?

Most of my reviews these days are short and focused.
November 8, 2025 at 11:22 AM
Reposted by Shahab Bakhtiari
Watson was a racist who, "near the end of his life, faced condemnation and professional censure for offensive remarks, including saying Black people are less intelligent than white people"
James Watson, co-discoverer of the double-helix shape of DNA, has died at age 97
Scientist James Watson, who shared a Nobel prize for helping discover the double-helix shape of the DNA molecule, has died. He was 97.
apnews.com
November 7, 2025 at 8:07 PM
Reposted by Shahab Bakhtiari
I was very honoured to receive the Queen Elizabeth Prize for Engineering from His Majesty King Charles III this week, and pleased to hear his thoughts on AI safety as well as his hopes that we can minimize the risks while collectively reaping the benefits.
November 7, 2025 at 9:33 PM
Reposted by Shahab Bakhtiari
I’m looking for interns to join our lab for a project on foundation models in neuroscience.

Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).

Interested? See the details in the comments. (1/3)

🧠🤖
AI and Neuroscience | IVADO
ivado.ca
November 7, 2025 at 1:52 PM
Reposted by Shahab Bakhtiari
The future of Canadian research
November 6, 2025 at 10:59 PM
Reposted by Shahab Bakhtiari
Maybe I'm in the minority of neuroscientists, but I'm deflationary about terms like intelligence. If AI passes tests that were designed to measure these things, then we can say they have them (like we do for humans). And this is mostly what the original article says (the headline is inaccurate)
As a neuroscientist, I’d suggest there is a profound disconnect between what *some* computer scientists working in AI take to be representative of “intelligence”, cognitive ability, or consciousness, and how neuroscientists understand these.

LLMs are not how neural systems process information, nor how brains function.
November 6, 2025 at 8:42 PM
Reposted by Shahab Bakhtiari
Wrote a short piece arguing that higher ed must help steer AI. TLDR: If we outsource this to tech, we outsource our whole business. But rejectionism is basically stalling. If we want to survive, schools themselves must proactively shape AI for education & research. [1/6, unpaywalled at 5/6] +
Opinion | AI Is the Future. Higher Ed Should Shape It.
If we want to stay at the forefront of knowledge production, we must fit technology to our needs.
www.chronicle.com
November 4, 2025 at 7:55 PM
Can’t agree more.

Still trying to wrap my head around the budget news. I want to move on, but honestly can’t.

Yes, there’s some self-interest here, but adding another well-established researcher to an already deprived funding landscape isn’t exactly encouraging.
It’s hard not to feel insulted by the Canadian federal budget’s implication that researchers currently working in Canada aren’t good enough, so they’ll spend a bunch of taxpayer money to bring in “top talent” from elsewhere.
November 5, 2025 at 2:38 PM
Reposted by Shahab Bakhtiari
If we’re talking about ‘generational investment’, Canada needs faculty positions for postdocs.

There have been virtually no cog psych/neuro jobs in the past three years.
November 5, 2025 at 2:22 PM
Reposted by Shahab Bakhtiari
Quite astounding that we are still pushing this kind of program, despite the Bouchard report and the numerous stories of ‘stars’ from abroad who come to Canada, keep their former lab, and then go back after 5 years because Canada is too cold or whatever.
“Budget 2025 proposes to provide $1 billion over 13 years, starting in 2025-26, to ….(Tricouncil)…to launch an accelerated research Chairs initiative to recruit exceptional international researchers to Canadian universities.” 🙄 1/
November 5, 2025 at 1:44 PM
Reposted by Shahab Bakhtiari
Here's an interesting new study exploring whether LLMs are able to understand the narrative sequencing of comics and... even the best AI models are *terrible* at it for pretty much all tasks that were analyzed aclanthology.org/2025.finding...
Beyond Single Frames: Can LMMs Comprehend Implicit Narratives in Comic Strip?
Xiaochen Wang, Heming Xia, Jialin Song, Longyu Guan, Qingxiu Dong, Rui Li, Yixin Yang, Yifan Pu, Weiyao Luo, Yiru Wang, Xiangdi Meng, Wenjie Li, Zhifang Sui. Findings of the Association for Computatio...
aclanthology.org
November 4, 2025 at 8:05 PM
Reposted by Shahab Bakhtiari
Came here to see if I was interpreting this correctly. Seems like I am.
November 4, 2025 at 9:34 PM
Reposted by Shahab Bakhtiari
Francis Crick's book The Astonishing Hypothesis drew me into the field of neuroscience. He made huge contributions, but worth remembering...
1/n
"a magisterial new biography" - congrats @matthewcobb.bsky.social! #histSTM
Book review 📚 Sex, drugs and the conscious brain: Francis Crick beyond the double helix

go.nature.com/4oJQAra
November 4, 2025 at 12:51 PM
Reposted by Shahab Bakhtiari
The only reason to violate the confidentiality of peer review is to undermine it. The only reason to demand this kind of information is to abuse it.
November 4, 2025 at 1:17 PM
This is super cool!
CorText also responds to in-silico microstimulations in line with experimental predictions: For example, when amplifying face-selective voxels for trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition we can "remove people". 7/n
November 3, 2025 at 3:53 PM
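The actual CorText interface isn't shown in this thread; purely as a hedged sketch of what an in-silico "microstimulation" like the one above could look like, assume voxel activations are rescaled by a gain factor before being passed to the model (all names, shapes, and gain values here are hypothetical):

```python
import numpy as np

def microstimulate(voxels, selective_idx, gain):
    """Rescale a subset of voxel activations before they enter the model.

    voxels: (n_voxels,) activation vector from one trial (placeholder data).
    selective_idx: indices of, e.g., face-selective voxels.
    gain > 1 amplifies ("stimulates"); 0 <= gain < 1 inhibits.
    """
    stimulated = voxels.copy()
    stimulated[selective_idx] *= gain
    return stimulated

# Hypothetical usage: amplify face-selective voxels on a trial with no faces,
# or zero them out to "remove people" from the model's readout.
rng = np.random.default_rng(1)
trial_voxels = rng.normal(size=4096)                            # placeholder fMRI features
face_idx = rng.choice(4096, size=200, replace=False)
amplified = microstimulate(trial_voxels, face_idx, gain=3.0)    # "add people"
suppressed = microstimulate(trial_voxels, face_idx, gain=0.0)   # "remove people"
```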
Reposted by Shahab Bakhtiari
We managed to integrate brain scans into LLMs for interactive brain reading and more.. check out Vicky's post below. Super excited about this one!
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 3, 2025 at 3:21 PM
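The post doesn't spell out CorText's architecture, so the following is only a generic sketch of the fusion idea it describes (projecting brain features into an LLM's embedding space and prepending them as soft prefix tokens); every module name and dimension is a placeholder, not CorText's actual design.

```python
import torch
import torch.nn as nn

class BrainPrefixAdapter(nn.Module):
    """Map fMRI voxel features to a sequence of LLM 'soft tokens'.

    Placeholder dimensions: n_voxels brain features in, n_prefix tokens of
    size d_model out, to be concatenated in front of the text embeddings.
    """
    def __init__(self, n_voxels=4096, d_model=2048, n_prefix=16):
        super().__init__()
        self.n_prefix = n_prefix
        self.d_model = d_model
        self.proj = nn.Linear(n_voxels, n_prefix * d_model)

    def forward(self, voxels):                      # voxels: (batch, n_voxels)
        prefix = self.proj(voxels)                  # (batch, n_prefix * d_model)
        return prefix.view(-1, self.n_prefix, self.d_model)

# Hypothetical usage: prepend brain-derived tokens to the prompt embeddings,
# then run the (frozen) language model on the fused sequence.
adapter = BrainPrefixAdapter()
voxels = torch.randn(1, 4096)                       # placeholder scan features
text_embeds = torch.randn(1, 10, 2048)              # placeholder prompt embeddings
fused = torch.cat([adapter(voxels), text_embeds], dim=1)   # (1, 26, 2048)
```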
Reposted by Shahab Bakhtiari
Academics in Assyria in the 7th c BC complain that admin is preventing them from doing research and teaching
November 3, 2025 at 10:04 AM
Reposted by Shahab Bakhtiari
I wrote a thing on episodic memory and systems consolidation. I hope you all enjoy it and/or find it interesting.

A neural state space for episodic memories

www.sciencedirect.com/science/arti...

#neuroskyence #psychscisky #cognition 🧪
A neural state space for episodic memories
Episodic memories are highly dynamic and change in nonlinear ways over time. This dynamism is not captured by existing systems consolidation theories …
www.sciencedirect.com
November 3, 2025 at 12:56 PM
Reposted by Shahab Bakhtiari
Great Blueprint from @arnaghosh.bsky.social on our newest paper on representational geometry!

tl;dr: we find that during pretraining LLMs undergo consistent cycles of expansion/reduction in the dimensionality of their representations & these cycles correlate with the emergence of new capabilities.
LLMs are trained to compress data by mapping sequences to high-dim representations!
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.

🧵below
#AIResearch #MachineLearning #LLM
October 31, 2025 at 4:37 PM
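Not the paper's method, just a minimal sketch of the kind of measurement the summary describes: tracking an effective-dimensionality estimate of one layer's hidden states across pretraining checkpoints. The checkpoint loop and get_hidden_states are hypothetical stand-ins.

```python
import numpy as np

def effective_dim(hidden_states, var_threshold=0.9):
    """Number of principal components explaining `var_threshold` of variance.

    hidden_states: (n_tokens, d_model) activations from one layer of one
    checkpoint, collected on a fixed evaluation corpus (hypothetical setup).
    """
    centered = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    svals = np.linalg.svd(centered, compute_uv=False)
    explained = svals ** 2 / (svals ** 2).sum()
    return int(np.searchsorted(np.cumsum(explained), var_threshold) + 1)

# Hypothetical tracking loop: get_hidden_states(ckpt) stands in for running
# the corpus through that checkpoint and returning its layer activations.
def dimensionality_curve(checkpoints, get_hidden_states):
    return {ckpt: effective_dim(get_hidden_states(ckpt)) for ckpt in checkpoints}
```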
Reposted by Shahab Bakhtiari
Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force the Tri-Councils to hand over disaggregated peer review data on all applications:
Applicant names, profiles, demographics
Reviewer names, profiles, comments, and scores
October 30, 2025 at 8:33 PM
Cool interpretability work from @anthropic.com

transformer-circuits.pub/2025/introsp...

Though it takes some effort to work through without getting bogged down by the loaded terminology, starting with "introspection" itself.

#MLSky 🧠🤖
Emergent Introspective Awareness in Large Language Models
transformer-circuits.pub
October 30, 2025 at 4:06 AM