Adam J. Eisen
@adamjeisen.bsky.social
Computational Neuroscientist + ML Researcher | Control theory + deep learning to understand the brain | PhD Candidate @ MIT | (he) 🍁
13/ 😀 Feel free to reach out to discuss this work, or its application to your field of study. Or come swing by our poster at #NeurIPS2025. We’d love to chat!

📄 Paper: openreview.net/forum?id=I82...
💾 Code: github.com/adamjeisen/J...
📍 Poster: Thu 4 Dec 11am - 2pm PST (#2111)
[Link card: “Characterizing control between interacting subsystems with deep...” — openreview.net]
November 26, 2025 at 7:32 PM
12/🙏🏻Thanks for following along. And a HUGE thanks to @neurostrow.bsky.social @sarthakc.bsky.social @leokoz8.bsky.social and my advisors @earlkmiller.bsky.social + Ila Fiete for being fantastic collaborators on this project!
November 26, 2025 at 7:32 PM
11/🚀This work opens the door to many questions. We're now equipped to ask: what are the control laws governing how we control our attention? And how do these interactions break down in psychiatric conditions?
November 26, 2025 at 7:32 PM
10/⚙️Loop Closure 🔁
We also ensure that path integrals of the Jacobian around closed loops are zero. This forces the model to learn the tangent space dynamics over the whole data manifold, not just along the flow. Why? To control a system, you have to know what happens off its normal path.
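A hedged sketch of what such a loop-closure penalty could look like, assuming a callable `jac` that maps a single state vector to its estimated Jacobian matrix. The circular loop sampling is just one made-up choice, not the paper's scheme: because a true Jacobian field is the derivative of a single vector field, its line integral around any closed loop vanishes, so nonzero loop integrals can be penalized.

```python
import math
import torch

def loop_closure_penalty(jac, center, radius=0.1, n_pts=16):
    """Penalize the line integral of an estimated Jacobian field around a small
    closed loop; for a true Jacobian (the derivative of one vector field) this
    integral is exactly zero. Illustrative only."""
    n = center.numel()
    # Sample a closed circular loop in a random 2-D plane through `center`
    # (one of many possible loop-sampling choices).
    basis = torch.linalg.qr(torch.randn(n, 2)).Q
    theta = torch.linspace(0.0, 2 * math.pi, n_pts + 1)
    pts = center + radius * (torch.outer(torch.cos(theta), basis[:, 0])
                             + torch.outer(torch.sin(theta), basis[:, 1]))
    integral = torch.zeros(n)
    for k in range(n_pts):
        delta = pts[k + 1] - pts[k]
        mid = 0.5 * (pts[k + 1] + pts[k])
        integral = integral + jac(mid) @ delta   # midpoint-rule line integral
    return integral.pow(2).sum()

# Hypothetical usage: a "Jacobian" that is not the derivative of any vector
# field, so its loop integral (and hence the penalty) is nonzero.
bad_jac = lambda x: torch.tensor([[0.0, 1.0], [-1.0, 0.0]]) * x[0]
print(loop_closure_penalty(bad_jac, center=torch.ones(2)))
```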
November 26, 2025 at 7:32 PM
9/⚙️Time-series Prediction ⛰️
The path integral of the Jacobian depends only on its endpoints, not on the path between them. Think of a mountain peak: your elevation is the same regardless of the trail taken. We parameterize the Jacobian with a deep network and use this insight for time-series prediction.
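In code, the idea might look roughly like this: since the Jacobian is the derivative of the vector field f, integrating it along any path from an anchor state recovers f at a new state, and the reconstructed f can then be stepped forward in time. This is a minimal sketch with a crude midpoint/Euler integrator, using a known linear system in place of a trained Jacobian network.

```python
import torch

def reconstruct_f(jac, x, x_ref, f_ref, n_steps=50):
    """Recover f(x) from f(x_ref) by integrating the Jacobian along a straight
    line from x_ref to x: since J = df/dx, the line integral of J equals
    f(x) - f(x_ref), whatever path is taken."""
    delta = (x - x_ref) / n_steps
    f_est = f_ref.clone()
    for k in range(n_steps):
        mid = x_ref + (k + 0.5) * delta        # midpoint rule on each segment
        f_est = f_est + jac(mid) @ delta
    return f_est

def rollout(jac, x0, x_ref, f_ref, dt=0.01, n_steps=200):
    """Predict a trajectory using only the Jacobian field and one anchor value of f."""
    xs, x = [x0], x0
    for _ in range(n_steps):
        x = x + dt * reconstruct_f(jac, x, x_ref, f_ref)   # explicit Euler step
        xs.append(x)
    return torch.stack(xs)

# Sanity check on a known linear system f(x) = A x, whose Jacobian is constant.
A = torch.tensor([[0.0, 1.0], [-1.0, -0.1]])
jac = lambda x: A
x_ref = torch.zeros(2)
traj = rollout(jac, x0=torch.tensor([1.0, 0.0]), x_ref=x_ref, f_ref=A @ x_ref)
print(traj.shape)   # torch.Size([201, 2])
```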
November 26, 2025 at 7:32 PM
8/⚡Controlling neural dynamics
We also used our framework to actively control the network based purely on observed data. By stimulating the sensory area in a targeted way, we precisely manipulated the RNN's behavior and forced it to make a specific incorrect choice.
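The exact stimulation procedure isn't spelled out in this thread, but one generic way a local Jacobian can be turned into a targeted input is a least-squares (pseudoinverse) step on the linearized sensory-to-cognitive map, sketched below purely for flavor; it is not presented as the paper's controller.

```python
import torch

def one_step_stimulation(J, x, x_target, sens_idx, cog_idx, dt=0.01):
    """Given a local Jacobian J at state x, find a perturbation of the sensory
    units that (to first order, over one step of length dt) moves the cognitive
    units toward x_target. Generic least-squares sketch, not the paper's method."""
    B = dt * J[cog_idx][:, sens_idx]     # linearized effect of sensory input on cognitive state
    err = x_target - x[cog_idx]          # desired change in the cognitive units
    return torch.linalg.pinv(B) @ err    # least-squares stimulation of the sensory units

# Hypothetical usage with a random local Jacobian of a 6-unit system
J = torch.randn(6, 6)
u = one_step_stimulation(J, x=torch.zeros(6), x_target=torch.ones(3),
                         sens_idx=[0, 1, 2], cog_idx=[3, 4, 5])
print(u)
```

In practice such a step would be recomputed along the trajectory as the Jacobian changes with state, which is exactly where a data-driven, state-dependent Jacobian estimate matters.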
November 26, 2025 at 7:32 PM
7/🎛️ Control between areas
We applied our framework to a simplified model of interacting brain areas: a multi-area recurrent neural network (RNN) trained on a working memory task. After learning the task, its "sensory" area gained control over its "cognitive" area.
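For intuition about the setup, here is a bare-bones two-area RNN in which a "sensory" area receives external input and projects to a "cognitive" area that drives the readout. It is a generic sketch of this kind of architecture, not the network or task from the paper.

```python
import torch
import torch.nn as nn

class TwoAreaRNN(nn.Module):
    """Toy two-area RNN: a sensory area receives external input and sends a
    feedforward projection to a cognitive area that produces the readout.
    Sizes, connectivity, and task are made up for illustration."""
    def __init__(self, n_in=2, n_sens=64, n_cog=64, n_out=2):
        super().__init__()
        self.W_in = nn.Linear(n_in, n_sens)
        self.W_sens = nn.Linear(n_sens, n_sens, bias=False)  # recurrence within sensory area
        self.W_s2c = nn.Linear(n_sens, n_cog, bias=False)    # sensory -> cognitive projection
        self.W_cog = nn.Linear(n_cog, n_cog, bias=False)     # recurrence within cognitive area
        self.readout = nn.Linear(n_cog, n_out)

    def forward(self, inputs):                               # inputs: (T, batch, n_in)
        T, B, _ = inputs.shape
        h_s = torch.zeros(B, self.W_sens.in_features)
        h_c = torch.zeros(B, self.W_cog.in_features)
        outs = []
        for t in range(T):
            h_s = torch.tanh(self.W_sens(h_s) + self.W_in(inputs[t]))
            h_c = torch.tanh(self.W_cog(h_c) + self.W_s2c(h_s))
            outs.append(self.readout(h_c))
        return torch.stack(outs)

model = TwoAreaRNN()
y = model(torch.randn(10, 4, 2))   # 10 time steps, batch of 4
print(y.shape)                      # torch.Size([10, 4, 2])
```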
November 26, 2025 at 7:32 PM
6/🎯 In rigorous tests, JacobianODEs accurately predicted dynamics and outperformed NeuralODEs on Jacobian estimation, even in noisy, high-dimensional chaotic systems. Accurate control starts with accurate Jacobians, so this was an important check.

Now what can we do with it?👇
November 26, 2025 at 7:32 PM
5/🔎 Estimating the Jacobian from data is difficult. To do so, we developed JacobianODE, a deep learning framework that leverages geometric properties of the Jacobian to infer it from data.

Scroll down the thread to learn how it works. For now, does it work?
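As a rough illustration of the "deep learning framework" part: one simple way to parameterize a state-dependent Jacobian with a network is an MLP that maps a state to an n × n matrix. The architecture below is a generic stand-in, not the actual JacobianODE model.

```python
import torch
import torch.nn as nn

class JacobianNet(nn.Module):
    """Maps a state x (dim n) to an estimated n x n Jacobian matrix J(x).
    Generic illustration only; the real JacobianODE architecture may differ."""
    def __init__(self, n, hidden=128):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(
            nn.Linear(n, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n * n),
        )

    def forward(self, x):
        return self.net(x).view(-1, self.n, self.n)

jac_net = JacobianNet(n=4)
J_hat = jac_net(torch.randn(8, 4))   # batch of 8 states -> 8 estimated Jacobians
print(J_hat.shape)                    # torch.Size([8, 4, 4])
```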
November 26, 2025 at 7:32 PM
4/💫Our method centers on the Jacobian, a mathematical object that provides a moment-to-moment snapshot of how a change in one subsystem affects another. This view of control from the local tangent space allows us to capture rich, context-dependent control dynamics.
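To make that concrete, here is a tiny, hypothetical example (not from the paper) of how the Jacobian of a coupled two-subsystem system splits into blocks, with the off-diagonal block capturing how a change in one subsystem affects the other. The toy dynamics and the 2+2 state split are made up for illustration.

```python
import torch

# Toy coupled system: state x = [x_A (2 dims), x_B (2 dims)].
def f(x):
    x_A, x_B = x[:2], x[2:]
    dx_A = -x_A + torch.tanh(x_B)               # subsystem A, driven by B
    dx_B = -0.5 * x_B + 0.3 * torch.tanh(x_A)   # subsystem B, driven by A
    return torch.cat([dx_A, dx_B])

x0 = torch.randn(4)
J = torch.autograd.functional.jacobian(f, x0)   # full 4x4 Jacobian at x0

# Off-diagonal block: how a small change in B's state changes A's dynamics.
J_A_from_B = J[:2, 2:]
print(J_A_from_B)
```

In control-theoretic terms, this off-diagonal block acts like a state-dependent input matrix: at each moment it tells you along which directions one subsystem can push the other.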
November 26, 2025 at 7:32 PM
3/🎛️Control theory offers a powerful lens to understand these interactions. It describes how inputs can steer a system towards a desired goal. We present a new framework based on control theory that characterizes complex, nonlinear control directly from data.
November 26, 2025 at 7:32 PM
2/ Complex systems, including 🧠 brains, 🌲 ecosystems, and 🧬 gene networks, are made of interacting parts. In the brain, different areas coordinate how they interact in different contexts. This is how our attention shifts between our senses, thoughts, and experiences.🖼️🎧💭
November 26, 2025 at 7:32 PM