Kanaka Rajan
@kanakarajanphd.bsky.social
Associate Professor at Harvard & Kempner Institute. Applying computational frameworks & machine learning to decode multi-scale neural processes. Marathoner. Rescue dog mom. https://www.rajanlab.com/
(8/8) To apply POCO to your own work, find our open-source code on GitHub below 👇

github.com/yuvenduan/POCO
GitHub - yuvenduan/POCO: Official Implementation for POCO: Scalable Neural Forecasting through Population Conditioning
September 12, 2025 at 8:46 PM
(7/8) Thanks to @deisseroth.bsky.social, @mishaahrens.bsky.social & Chris Harvey for their contributions, and to @kempnerinstitute.bsky.social & @harvardmed.bsky.social for supporting computational neuroscience research.

Read the paper here: arxiv.org/abs/2506.14957
POCO: Scalable Neural Forecasting through Population Conditioning
September 12, 2025 at 8:46 PM
(6/8) Combined with its prediction speed and steady improvement from longer recordings & more sessions, POCO shows enormous potential for scaling to larger brains & powering real-time neurotechnologies like “neuro-foundation models” for brain-computer interfaces (BCIs).
September 12, 2025 at 8:46 PM
(5/8) Other time-series forecasting models perform well on synthetic/simulated data 🤖

POCO dominates in context-dense predictions based on REAL neural data 🧠
September 12, 2025 at 8:33 PM
(4/8) Beyond neural predictions, POCO's learned unit embeddings independently reproduce brain region clustering without any anatomical labels.

That means at single-cell resolution across entire brains, POCO mimics biological organization purely from neural activity patterns ✨
September 12, 2025 at 8:33 PM
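For anyone wanting to run this kind of check on their own model, the analysis boils down to clustering per-unit embeddings and scoring agreement with anatomical labels. A minimal scikit-learn sketch (not the paper's analysis code; the placeholder arrays below stand in for a trained model's embeddings and real region labels):

```python
# Sketch: checking whether learned unit embeddings recover brain regions.
# Illustrative only; `unit_embeddings` and `region_labels` are assumed to
# come from a trained model and the recording's anatomical metadata.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
unit_embeddings = rng.normal(size=(500, 32))   # placeholder: (n_units, embed_dim)
region_labels = rng.integers(0, 8, size=500)   # placeholder anatomical labels

clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(unit_embeddings)
# Agreement between activity-derived clusters and anatomy; chance level is ~0
print("ARI vs. anatomy:", adjusted_rand_score(region_labels, clusters))
```

On random placeholders the adjusted Rand index sits near zero; the thread's claim is that real POCO embeddings score well above chance without ever seeing anatomical labels.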
(3/8) POCO forecasts brain activity up to ~15 seconds into the future, across behavioral contexts & species 🔮

After pre-training, POCO’s speed & flexibility allow it to adapt to new recordings with minimal fine-tuning, opening the door for real-time applications.
September 12, 2025 at 8:33 PM
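As a rough illustration of what "minimal fine-tuning" can look like in practice (a generic PyTorch sketch, not the repo's actual API): freeze the shared parts of a pre-trained forecaster and briefly train the rest on the new session.

```python
# Sketch of adapting a pre-trained forecaster to a new recording session
# (hypothetical training loop; the actual POCO interface lives in the repo above).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 16))
# In practice, pre-trained weights would be loaded first; here we just
# freeze the first layer to mimic "minimal fine-tuning" of a shared backbone.
for p in model[0].parameters():
    p.requires_grad = False

opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.MSELoss()

past = torch.randn(32, 64)      # placeholder new-session context windows
future = torch.randn(32, 16)    # placeholder forecast targets
for _ in range(100):            # a short fine-tuning run
    opt.zero_grad()
    loss = loss_fn(model(past), future)
    loss.backward()
    opt.step()
```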
(2/8) POCO was trained on neural recordings of spontaneous & task-specific behavior from zebrafish, mice, & C. elegans. It combines a local forecaster with a population encoder that captures brain-wide patterns, so we track each neuron individually AND how the whole brain affects each cell 🧠
September 12, 2025 at 8:33 PM
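For readers who want a concrete picture of "population conditioning", here is a minimal PyTorch sketch of the idea. It is not the official implementation (see the GitHub repo above); all module names and sizes are illustrative assumptions.

```python
# Minimal sketch of population-conditioned forecasting (NOT the official
# POCO code; for that, see github.com/yuvenduan/POCO). Sizes are arbitrary.
import torch
import torch.nn as nn

class PopulationConditionedForecaster(nn.Module):
    def __init__(self, context_len=64, horizon=16, embed_dim=32):
        super().__init__()
        # Population encoder: summarizes brain-wide activity into one vector.
        self.pop_encoder = nn.Sequential(
            nn.Linear(context_len, embed_dim), nn.ReLU(),
        )
        # Local forecaster: maps one unit's history + the population summary
        # to that unit's future activity.
        self.forecaster = nn.Sequential(
            nn.Linear(context_len + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon),
        )

    def forward(self, x):
        # x: (batch, n_units, context_len) of past activity traces
        pop = self.pop_encoder(x).mean(dim=1, keepdim=True)  # (batch, 1, embed)
        pop = pop.expand(-1, x.shape[1], -1)                 # share across units
        return self.forecaster(torch.cat([x, pop], dim=-1))  # (batch, n_units, horizon)

x = torch.randn(8, 500, 64)          # 8 recordings, 500 neurons, 64 past bins
pred = PopulationConditionedForecaster()(x)
print(pred.shape)                    # torch.Size([8, 500, 16])
```

The key design choice this sketch tries to capture: each neuron gets its own forecast, but every forecast is conditioned on a shared population summary, so brain-wide context informs single-cell predictions.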
(7/7) Congrats to Riley & Ryan on this work. Also huge thanks to collaborators Felix Berg, @raymondrchua.bsky.social, @joshlunger.bsky.social, Billy Qian & everyone who helps us kick the tires.
July 2, 2025 at 6:34 PM
(6/7) A 4096-unit agent that remembers, plans & navigates risks gives us a “window-sized” brain we can watch neuron-by-neuron. ForageWorld is a perfect sandbox for testing cognitive-map theories & offers a blueprint for ultra-efficient autonomous AI systems in a naturalistic world.
July 2, 2025 at 6:34 PM
(5/7) Analyzing the trained agent reveals an interpretable neural GPS: past & future positions can be linearly decoded over long horizons from the agent’s ‘neural’ activity, and a lightweight “predict-its-own-position” signal sharpens its compass even further.
July 2, 2025 at 6:34 PM
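For the curious, the decoding analysis described here is conceptually simple: fit a linear (ridge) readout from hidden-state snapshots to positions at different time lags. A sketch with synthetic stand-ins for the real hidden states and positions:

```python
# Sketch of the linear position-decoding analysis (illustrative data only;
# real inputs would be the agent's RNN hidden states and arena positions).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T, n_units = 5000, 512                        # fewer units than the agent's 4096, to keep the demo light
hidden = rng.normal(size=(T, n_units))        # placeholder hidden states over time
positions = rng.normal(size=(T, 2))           # placeholder (x, y) positions

for lag in (0, 50, 200):                      # decode position `lag` steps in the future
    X, y = hidden[: T - lag], positions[lag:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    r2 = Ridge(alpha=1.0).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"lag={lag:>4}  R^2={r2:.3f}")      # ~0 on noise; structured on real agent data
```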
(4/7) What we see is planning & recall over hundreds of timesteps!

After a quick wander, the agent switches from exploring to visiting patches from memory: revisiting food last seen 500-1,000 steps earlier, skirting predator zones & timing resource visits.
July 2, 2025 at 6:34 PM
(3/7) For the agent’s “brain,” we used a lean recurrent network: 4096 units (<0.2% of the size of an ant brain) with only 10% connectivity, and we let RL teach it what to do by trial & error.
July 2, 2025 at 6:34 PM
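One common way to build such a sparse recurrent network, sketched below in PyTorch (illustrative only; the paper's architecture may differ in detail), is to apply a fixed binary mask to the recurrent weight matrix so only ~10% of connections exist:

```python
# Sketch of a sparsely connected RNN "brain" with ~10% recurrent connectivity.
import torch
import torch.nn as nn

class SparseRNN(nn.Module):
    def __init__(self, n_in=32, n_units=4096, sparsity=0.10):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_units)
        self.w_rec = nn.Linear(n_units, n_units, bias=False)
        # Fixed binary mask: each recurrent weight survives with probability
        # `sparsity`, so ~90% of possible connections are simply absent.
        self.register_buffer("mask", (torch.rand(n_units, n_units) < sparsity).float())

    def step(self, x, h):
        rec = nn.functional.linear(h, self.w_rec.weight * self.mask)
        return torch.tanh(self.w_in(x) + rec)

rnn = SparseRNN()
h = torch.zeros(1, 4096)
obs = torch.randn(1, 32)          # one step of (placeholder) egocentric observation
h = rnn.step(obs, h)              # RL would train this by trial & error
print(h.shape)                    # torch.Size([1, 4096])
```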
(2/7) Introducing ForageWorld: Each session spawns a large arena with lakes, predators & food patches that deplete over time. The AI agent must juggle hunger, thirst & fatigue in this virtual space.

The agent can only "see" a small patch around itself, so no bird’s-eye view.
July 2, 2025 at 6:34 PM
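To make the setup concrete, here is a toy, heavily simplified sketch of the idea (not the actual ForageWorld code): a grid arena where food depletes when harvested and the agent observes only a small egocentric window.

```python
# Toy sketch of a ForageWorld-like setup: depleting food patches, an internal
# hunger drive, and a local observation window instead of a bird's-eye view.
import numpy as np

class ToyForageWorld:
    def __init__(self, size=64, n_food=20, view=5, seed=0):
        rng = np.random.default_rng(seed)
        self.food = np.zeros((size, size))
        xs, ys = rng.integers(0, size, n_food), rng.integers(0, size, n_food)
        self.food[xs, ys] = 1.0                 # food patches scattered in the arena
        self.pos = np.array([size // 2, size // 2])
        self.view, self.size = view, size
        self.hunger = 0.0

    def observe(self):
        # Egocentric window only -- the agent never sees the whole arena.
        r = self.view // 2
        x, y = self.pos
        return np.pad(self.food, r)[x : x + self.view, y : y + self.view]

    def step(self, move):                       # move: 2-vector in {-1, 0, 1}^2
        self.pos = np.clip(self.pos + move, 0, self.size - 1)
        self.hunger += 0.01                     # internal drives grow over time
        reward = self.food[tuple(self.pos)]
        self.food[tuple(self.pos)] *= 0.5       # patch depletes when harvested
        return self.observe(), reward - self.hunger

env = ToyForageWorld()
obs, reward = env.step(np.array([1, 0]))
print(obs.shape, round(reward, 3))              # (5, 5) local view
```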