Pingsheng Li
pingshengli.bsky.social
CS PhD; Neuro undergrad @mcgillu. @Mila_Quebec, LiNC Lab @tyrell_turing. Prev: Neuro-X @EPFL, intern @GatsbyUCL. NeuroAI convergence, DL & RL.
Reposted by Pingsheng Li
New preprint! 🧠🤖

How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?

We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!

🧵1/7
June 6, 2025 at 5:40 PM
Reposted by Pingsheng Li
Here's our latest preprint on neural decoders for spiking data. Stay tuned for the code (and hopefully, some exciting follow-ups)!
June 6, 2025 at 5:54 PM
Reposted by Pingsheng Li
I’m looking for interns to join our lab for a project on foundation models in neuroscience.

Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).

Interested? See the details in the comments. (1/3)

🧠🤖
November 7, 2025 at 1:52 PM
There might be many kinds of intelligence, but there is only one reality.
How could they not, ultimately, converge?
May 1, 2025 at 6:42 AM
Reposted by Pingsheng Li
Come by Poster 068 to learn about why comp neuro studies should use exponentiated gradient descent!
March 29, 2025 at 5:22 PM
Reposted by Pingsheng Li
I turned 45 this weekend, which means I can finally become the man I was always destined to be:

youtu.be/KF2X1o0FGA0?...
He's Hip. He's Cool. He's 45!
January 20, 2025 at 9:17 PM
100 gradient updates per year 🥹
December 15, 2024 at 9:11 PM
A must-read for NeuroAI!
After CNNs, another great feature of the brain that carries a specific computational advantage.
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological realities like Dale’s law and log-normal weights. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12 #neuroscience 🧪
October 31, 2024 at 6:45 PM
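The contrast drawn in the thread above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the least-squares problem, learning rate, and data are all assumptions. The EG update shown is the standard multiplicative form, w ← w · exp(−η∇L), which preserves each weight's sign — the property the thread connects to Dale's law — unlike the additive GD update, which can flip signs freely.

```python
import numpy as np

# Toy least-squares problem with all-positive "true" weights (an assumption
# made here purely for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([0.5, 1.0, 0.2, 0.8, 0.3])
y = X @ w_true

def grad(w):
    """Gradient of the mean-squared error of a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

eta = 0.05
w_gd = np.ones(5)   # Gradient Descent weights
w_eg = np.ones(5)   # Exponentiated Gradient weights (must start positive)

for _ in range(300):
    w_gd = w_gd - eta * grad(w_gd)            # additive update: signs can flip
    w_eg = w_eg * np.exp(-eta * grad(w_eg))   # multiplicative update: signs preserved

# exp(.) is always positive, so a weight initialized positive stays positive
# under EG -- consistent with Dale's law -- and repeated multiplicative
# rescaling spreads weight magnitudes log-normally rather than normally.
assert np.all(w_eg > 0)
```

Both updates fit the toy data here; the point of the sketch is only the structural difference between them, which is why EG is argued to be the more biologically plausible choice for comp-neuro models.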