timdettmers.com/2025/12/10/w...
This essay is worth reading. It discusses the diminishing returns (and risks) of scaling, and the contrast between West and East: the “winner takes all” approach of building the biggest thing versus a long-term focus on practicality.
We have put together some exciting educational workshops on cognitive benchmarking of large models, RL and video games, and dynamical systems! More info and registration here: main-educational.github.io/program/
www.nature.com/articles/s41...
2026.ccneuro.org
Setting a specific time-of-day deadline is *ridiculous* and super annoying!!!!
Do you really care if I submit this letter at 6PM rather than 4PM?
Were you planning on reviewing my letter that evening?
Get real...
#academia
🚨Thrilled to share "Caption This, Reason That", a #NeurIPS2025 Spotlight! 🔦
Meet us at #2112, 3 Dec 11 a.m.
We analyze VLM limitations through the lens of Cognitive Science (Perception, Attention, Memory) and propose a simple "Self-Captioning" method that boosts spatial reasoning by ~18%.
🧵👇
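For anyone wondering what "Self-Captioning" might look like in practice, here is a minimal caption-then-reason sketch based on my reading of the title, not the paper's actual implementation; `vlm_generate` and the prompt wording are hypothetical placeholders.

```python
# Minimal caption-then-reason sketch (an illustration of the general pattern,
# not the paper's code). `vlm_generate` is a hypothetical stand-in for
# whatever VLM inference call you have available.

def vlm_generate(image, prompt: str) -> str:
    """Placeholder: run a vision-language model on (image, prompt) and return text."""
    raise NotImplementedError

def answer_with_self_captioning(image, question: str) -> str:
    # Step 1: have the model caption the image itself, with emphasis on spatial layout.
    caption = vlm_generate(
        image,
        "Describe this image in detail, including where each object is "
        "relative to the others.",
    )
    # Step 2: answer the question with the self-generated caption as extra context.
    return vlm_generate(
        image,
        f"Image description: {caption}\n\nQuestion: {question}\n"
        "Answer using both the image and the description above.",
    )
```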
www.biorxiv.org/content/10.1...
www.biorxiv.org/content/10.1...
Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).
Interested? See the details in the comments. (1/3)
🧠🤖
tl;dr: we find that during pretraining LLMs undergo consistent cycles of expansion/reduction in the dimensionality of their representations & these cycles correlate with the emergence of new capabilities.
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.
🧵below
#AIResearch #MachineLearning #LLM
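One common way to put a number on the "dimensionality of representations" is the participation ratio of the activation covariance spectrum. The sketch below is a generic illustration of that measure, assuming you can collect hidden states from each pretraining checkpoint on a fixed probe set; the paper's actual estimator may differ, and `collect_hidden_states` is a hypothetical helper.

```python
import numpy as np

def participation_ratio(activations: np.ndarray) -> float:
    """Effective dimensionality of a set of representations.

    activations: (n_samples, hidden_dim) hidden states collected from one
    layer of a given checkpoint on a fixed probe dataset.
    """
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(centered) - 1)
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    # Participation ratio: (sum of eigenvalues)^2 / sum of squared eigenvalues.
    # It is 1 if all variance lies on one axis, hidden_dim if variance is uniform.
    return float(eig.sum() ** 2 / (eig ** 2).sum())

# Usage sketch: track the measure across pretraining checkpoints and look for
# expansion/reduction cycles in the resulting curve.
# dims = [participation_ratio(collect_hidden_states(ckpt, probe_texts))  # hypothetical helper
#         for ckpt in checkpoints]
```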
www.thetransmitter.org/computationa...
More info: ivado.ca/en/scholarsh...
#AI #ArtificialIntelligence #R3AI #Funding #Postdoc
mila.quebec/en/news/ai-r...
www.nature.com/articles/d41...
Oh wait, yes I do. I wrote a whole article about it last year: www.thetransmitter.org/publishing/a...
P.s. I also have an opening for a PhD student in my lab for Fall 2026.
#neuroAI #compneuro jobs.utoronto.ca/job/Toronto-...
kempnerinstitute.harvard.edu/kempner-inst...
Recent neural networks capture properties long thought to require symbols: compositionality, productivity, and rapid learning.
So what role should symbols play in theories of the mind? For our answer...read on!
Paper: arxiv.org/abs/2508.05776
1/n