Dongyan Lin
@dongyanl1n.bsky.social
Postdoc at Meta FAIR, Comp Neuro PhD @McGill / Mila. Studying representations in brains and machines 🔬 https://dongyanl1n.github.io/
Pinned
New work: AI agents learn from adult data, inheriting our biases when making decisions. What if we trained them to think like kids, who rigorously explore and make causal inferences?
Language model (LM) agents are all the rage now—but they may exhibit cognitive biases when inferring causal relationships!

We evaluate LMs on a cognitive task to find:
- LMs struggle with certain simple causal relationships
- They show biases similar to human adults (but not children)

🧵⬇️
Reposted by Dongyan Lin
Great piece by @natolambert.bsky.social on the current state of human exhaustion in the AI world. Makes this important point:
October 25, 2025 at 3:05 PM
Reposted by Dongyan Lin
This work has been accepted to #COLM2025. If you are in Montreal this week for COLM and would like to chat about this (or anything related to discovery / exploration / RL), drop me a note!

Poster session 2: Tuesday Oct 7, 4:30-6:30pm
Poster number 68
Language model (LM) agents are all the rage now—but they may exhibit cognitive biases when inferring causal relationships!

We evaluate LMs on a cognitive task to find:
- LMs struggle with certain simple causal relationships
- They show biases similar to human adults (but not children)

🧵⬇️
October 5, 2025 at 7:40 PM
Reposted by Dongyan Lin
Nice write-up on the top-performing models of Algonauts 2025 competition by @humanscotti.bsky.social

paulscotti.substack.com/p/insights-f...

None of the teams trained models from scratch; they mainly relied on pre-trained foundation models, and ensembling seemed to be the key ingredient.

🧠🤖
Insights from the Algonauts 2025 Winners
What did we learn from a contest to predict fMRI brain activity to movie viewing?
paulscotti.substack.com
August 18, 2025 at 1:55 PM
Reposted by Dongyan Lin
Levine's take on why LLMs succeeded where video models failed is interesting, but I'll expand on two different paths efforts toward AI could take, and why I think AI and NeuroAI may diverge moving forward. 🧵

🧠🤖 #MLSky
AI may still need some neuroscience:

"AI systems will not acquire the flexibility and adaptability of human intelligence until they can actually learn like humans do, shining brightly with their own light rather than observing a shadow from ours."

🧠🤖

sergeylevine.substack.com/p/language-m...
Language Models in Plato's Cave
Why language models succeeded where video models failed, and what that teaches us about AI
sergeylevine.substack.com
June 12, 2025 at 2:30 PM
Reposted by Dongyan Lin
Alona Fyshe @alonaf.bsky.social on the BabyLM Challenge, a competition that trains language models (LMs) on smaller datasets, more akin to how a baby learns, in search of solutions to some of the major challenges of today’s LLMs.

#NeuroAI #neuroskyence

www.thetransmitter.org/neuroai/the-...
Can babies inspire more efficient learning algorithms?
A competition that trains language models on smaller datasets, more akin to how a baby learns, seeks solutions to some of LLMs' major challenges.
www.thetransmitter.org
May 19, 2025 at 1:23 PM
New work: AI agents learn from adult data, inheriting our biases when making decisions. What if we trained them to think like kids, who rigorously explore and make causal inferences?
Language model (LM) agents are all the rage now—but they may exhibit cognitive biases when inferring causal relationships!

We evaluate LMs on a cognitive task to find:
- LMs struggle with certain simple causal relationships
- They show biases similar to human adults (but not children)

🧵⬇️
May 16, 2025 at 6:19 PM
Reposted by Dongyan Lin
Fascinating preprint using our "blicket detector" paradigm, from Chen et al. at NYU & Mila. LLMs make the same causal inference mistakes that adults make but 4-year-olds don't! Of course, models are trained on adult data; kids figure it out for themselves.
im-ant.github.io/publications...
im-ant.github.io
May 15, 2025 at 6:33 PM
Reposted by Dongyan Lin
Preprint Alert 🚀

Can we simultaneously learn transformation-invariant and transformation-equivariant representations with self-supervised learning?

TL;DR Yes! This is possible via simple predictive learning & architectural inductive biases – without extra loss terms and predictors!

🧵 (1/10)
May 14, 2025 at 12:53 PM
Reposted by Dongyan Lin
Want to spend 3 weeks in South Africa for an unforgettable summer school experience? Imbizo 2026 (imbizo.africa) student applications are OPEN! Lectures, new friends, and Noordhoek beach await. Apply by July 1!

More info and apply: imbizo.africa/apply/

#Imbizo2026 #CompNeuro
May 1, 2025 at 10:06 AM
Reposted by Dongyan Lin
On my way back to NYC, I met wise Leon Bottou in the airport. We talked. Then I told him "you should tweet that!"

And he delivered much more than a tweet: a blog post with thoughts and insights on AI research that only he can deliver this clearly and succinctly.

leon.bottou.org/news/two_les...
April 30, 2025 at 8:09 PM
Reposted by Dongyan Lin
It feels so good to see Imbizo students doing great things! @imbizo.bsky.social #imbizo
April 16, 2025 at 3:17 PM
Reposted by Dongyan Lin
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈

Our new paper explores this: elifesciences.org/reviewed-pre...
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
elifesciences.org
April 15, 2025 at 8:11 PM
Reposted by Dongyan Lin
To my Canadian colleagues: you should know that a key component of what the US is doing right now involves purging scientists & research programs for ideological impurity & (illegally) reclaiming their funds.

If you don’t want a repeat of what happened down south here in Canada, PP is not your guy
“I think this is the first time a Cdn poli has crossed that line to officially say they want to interfere to control research topics. It could be a very terrible time for us.” Sound familiar?
In Cdn election, Poilievre vows to end ’woke ideology’ in science funding www.science.org/content/arti...
In Canadian election, top Conservative candidate vows to end ’woke ideology’ in science funding
Pierre Poilievre’s Conservative Party is trying to topple Liberal government in 28 April election
www.science.org
April 10, 2025 at 12:12 PM
Reposted by Dongyan Lin
Francis Crick called it “impossible” in 1979. Now it’s real.
A 1 mm cube of mouse brain, 200k cells, 500M synapses, all mapped alongside neuronal activity.

This new MICrONS dataset maps brain structure and function in a way that we've never seen 🧪
🔗 www.microns-explorer.org
📄 doi.org/10.1038/s415...
April 9, 2025 at 6:45 PM
Reposted by Dongyan Lin
Can LLMs be used to discover interpretable models of human and animal behavior?🤔

Turns out: yes!

Thrilled to share our latest preprint where we used FunSearch to automatically discover symbolic cognitive models of behavior.
1/12
February 10, 2025 at 12:21 PM
Reposted by Dongyan Lin
🎉 Exciting day at #Imbizo2025! Dr. Dan Wetmore & Dr. Garrick Orchard from Meta’s Reality Labs shared their industrial insights on designing top-notch AI/ML experiments, including innovative BCI applications 🤖💡 Students showcased creative solutions for EMG, sleep, wind, epilepsy monitoring & more! 🌟
January 30, 2025 at 9:50 AM
Reposted by Dongyan Lin
Christmas Eve, 1968:

“And from the crew of Apollo 8, we close, with good night, good luck, a Merry Christmas, and God bless all of you, all of you on the good Earth.”
December 25, 2024 at 5:46 AM
Reposted by Dongyan Lin
Delighted to be in Leeds joining the School of Computing! Fantastic first impressions — like a "less offensive London" (youtu.be/watch?v=_6_VVLgrgFI). Stay tuned for a PhD position starting next October. Meanwhile, drop me a message with your CV and research interests—I'd love to hear from you!
Pleased to welcome @repromancer.bsky.social - a computational neuroscientist - to the @universityofleeds.bsky.social today - working on the boundary of neuroscience and AI. Welcome Jonathan!
November 29, 2024 at 4:02 PM
Neuroscience is starting to move beyond relying on single-neuron selectivity to understand computations, now that we have more data and better analyses. This could have interesting implications for mechanistic interpretability in AI models as well!
Important paper!

www.biorxiv.org/content/bior...

I'm not sure the Discussion fully delineates its radical implications.

No more...

* Place cells
* Grid cells, splitter cells, border cells
* Mirror neurons
* Reward neurons
* Conflict cells

(continued)
www.biorxiv.org
November 22, 2024 at 5:38 PM
Reposted by Dongyan Lin
How continuous neural activity learns and supports discrete, symbolic & compositional processes remains an important question for cog. sci. and AI. In this preprint we explore ways in which both symbolic and sub-symbolic processing could be achieved using attractor dynamics. arxiv.org/abs/2310.01807
Discrete, compositional, and symbolic representations through attractor dynamics
Symbolic systems are powerful frameworks for modeling cognitive processes as they encapsulate the rules and relationships fundamental to many aspects of human reasoning and behavior. Central to these ...
arxiv.org
October 16, 2024 at 10:28 PM
Reposted by Dongyan Lin
The latest MotorNet release (v0.2.0) is live!

This includes big changes, first and foremost, a COMPLETE SWAP from TensorFlow to PyTorch.

As usual, you can install it via a pip command.

motornet.org
MotorNet 0.2.0 documentation
motornet.org
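For reference, a minimal install-and-import check in Python (a sketch only; it assumes the PyPI package name is motornet and that the module exposes __version__ — see motornet.org for the exact command):

# pip install motornet   (assumed PyPI package name)
import motornet  # PyTorch-based as of the v0.2.0 release described above
print(getattr(motornet, "__version__", "version attribute not exposed"))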
January 5, 2024 at 3:59 PM
Our new work “Temporal encoding in deep reinforcement learning agents” is now published!

We zoomed in on how time is represented in brains and machines, and asked how that representation links to behavior.

w/ Ann Zixiang Huang, @tyrellturing.bsky.social

nature.com/articles/s41...

🧵(0/10)
Temporal encoding in deep reinforcement learning agents - Scientific Reports
nature.com
December 18, 2023 at 9:17 PM
Check out our new work!
1/ What is the organization of mouse visual cortex across regions? In our latest work led by Rudi Tong and Stuart Trenholm, now out on bioRxiv (www.biorxiv.org/content/10.1...), we mapped the "feature landscape" of mouse visual cortex.

Here is a #blueprint thread about what we found. #neuroskyence
November 25, 2023 at 6:04 PM
Reposted by Dongyan Lin
In our lab's Journal Clubs, we decided to assign one of the trainees to create a very short summary for each paper we read, using a template (background / methods / key findings / implications). #AcademicSky #Neuroskyence (1/n)
October 3, 2023 at 8:33 PM