Ching Fang
@chingfang.bsky.social
Postdoc @Harvard interested in neuro-AI and neurotheory. Previously @columbia, @ucberkeley, and @apple. 🧠🧪🤖
In tree mazes, we find a strategy in which in-context experience is stitched together to label the critical path from root to goal. If a query state is on this path, the model chooses the action that traverses deeper into the tree; if not, the action that moves to the parent node is optimal. (8/9)
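A minimal sketch of this path-labeling policy (my illustration, not the authors' code), assuming the tree is given as child-to-parent pointers and the goal node has already been identified from in-context experience:

```python
def critical_path(parent, goal):
    """Stitch the root-to-goal path together by walking parent pointers."""
    path = [goal]
    while parent[path[-1]] is not None:  # the root has no parent
        path.append(parent[path[-1]])
    return set(path)

def choose_action(state, parent, children, goal):
    if state == goal:
        return None  # already at the goal
    on_path = critical_path(parent, goal)
    if state in on_path:
        # On the critical path: step to the child that stays on it.
        return next(c for c in children[state] if c in on_path)
    return parent[state]  # off the path: climbing to the parent is optimal

# Example: chain 0 -> 1 -> 2 with the goal at node 2.
parent = {0: None, 1: 0, 2: 1}
children = {0: [1], 1: [2], 2: []}
assert choose_action(0, parent, children, goal=2) == 1
assert choose_action(1, parent, children, goal=2) == 2
```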
June 26, 2025 at 7:01 PM
Instead, our analysis of the model in gridworld suggests the following strategy: (1) use in-context experience to align representations to Euclidean space, (2) given a query state, calculate the angle in Euclidean space to the goal, (3) use that angle to select an action. (7/9)
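A toy version of that readout (my sketch, not the paper's code): once coordinates are recovered in-context, the chosen action is the cardinal direction making the smallest angle with the vector pointing at the goal.

```python
import numpy as np

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def angle_policy(query_xy, goal_xy):
    heading = np.asarray(goal_xy, float) - np.asarray(query_xy, float)
    # All action vectors are unit length, so the largest dot product
    # corresponds to the smallest angle to the goal heading.
    return max(ACTIONS, key=lambda a: np.dot(ACTIONS[a], heading))

print(angle_policy((0, 0), (3, 1)))  # -> "right"
```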
June 26, 2025 at 7:01 PM
We find two representation learning strategies: (1) in-context structure learning to form a map of the environment, and (2) alignment of representations across contexts that share the same structure. These connect to computations proposed for the hippocampal-entorhinal circuit. (5/9)
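One way to quantify strategy (2), sketched under my own assumptions (orthogonal Procrustes is my choice of method here, not necessarily the paper's analysis): check how well a pure rotation maps one context's state representations onto another's.

```python
import numpy as np

def alignment_score(X, Y):
    """X, Y: (n_states, n_dims) representations from two contexts
    that share the same underlying structure."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt                              # best orthogonal map X -> Y
    residual = np.linalg.norm(X @ R - Y)
    return 1 - residual**2 / np.linalg.norm(Y)**2  # 1 = perfectly aligned
```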
June 26, 2025 at 7:01 PM
As expected, these meta-learned models learn more efficiently in new environments than standard RL agents, since they have useful priors over the task distribution. For instance, the models can take shortcut paths in gridworld. So what RL strategies emerged to support this? (4/9)
June 26, 2025 at 7:01 PM
We train transformers to perform in-context RL (via decision-pretraining from Lee et al. 2023) on planning tasks: gridworld and tree mazes (inspired by labyrinth mazes: elifesciences.org/articles/66175). Importantly, each new task has novel sensory observations. (3/9)
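Schematically, decision-pretraining trains the transformer to predict the optimal action at a query state given an in-context dataset from the same task. The tiny model and dimensions below are placeholders of mine, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TinyDPT(nn.Module):
    def __init__(self, token_dim=16, n_actions=4):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4,
                                         batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(token_dim, n_actions)

    def forward(self, context_tokens, query_token):
        # Append the query to the in-context transitions; read out at it.
        seq = torch.cat([context_tokens, query_token[:, None]], dim=1)
        return self.head(self.encoder(seq)[:, -1])

model = TinyDPT()
context = torch.randn(8, 32, 16)    # 8 tasks, 32 tokenized transitions each
query = torch.randn(8, 16)          # one query state per task
a_star = torch.randint(0, 4, (8,))  # optimal actions (the supervision signal)
loss = nn.functional.cross_entropy(model(context, query), a_star)
loss.backward()
```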
June 26, 2025 at 7:01 PM
Why is this useful? We show that place fields + barcodes are complementary. Barcodes enable precise recall of cache locations, while place fields enable flexible search for nearby caches; both are necessary. We also show how barcode memory combines with predictive maps; check out the paper for more!
March 24, 2025 at 7:46 PM
A memory of a cache is formed by binding place + seed content to the resulting RNN barcode via Hebbian learning. An animal can recall this memory from place inputs (given high recurrent strength in the RNN). These barcodes capture the spatial correlation profile seen in the data.
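A minimal sketch of the binding-and-recall step (my illustration; the sizes and the one-shot recall are simplifying assumptions): a Hebbian outer product ties place + seed content to the barcode, and reinstating the barcode reads the content back out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_place, n_seed, n_barcode = 50, 20, 200

place = rng.standard_normal(n_place)
seed = rng.standard_normal(n_seed)
barcode = (rng.standard_normal(n_barcode) > 1.5).astype(float)  # sparse 0/1

# Hebbian binding: outer product of the barcode with the bound content.
content = np.concatenate([place, seed])
W = np.outer(barcode, content)

# Recall: in the model, the place input plus strong recurrence drive the
# RNN back into the barcode state; here we assume that step and read the
# bound content back out through the Hebbian weights.
recalled = W.T @ barcode / barcode.sum()
print(np.corrcoef(recalled, content)[0, 1])  # ~1.0: content recovered
```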
March 24, 2025 at 7:46 PM
We propose an RNN model of barcode memory. The RNN is initialized with random weights and receives place inputs. When recurrent gain is low, RNN units encode place. With high recurrent strength, the random weights produce sparse + uncorrelated barcodes via chaotic dynamics.
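A toy version of the gain knob (assumptions mine; with tanh units it shows decorrelation but not sparsity): at low gain the state tracks the place input, while at high gain chaotic recurrence turns nearby place inputs into decorrelated patterns.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
J = rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights

def run(place_input, g, steps=200):
    x = np.zeros(N)
    for _ in range(steps):
        x = np.tanh(g * J @ x + place_input)   # g = recurrent gain
    return x

inp_a = rng.standard_normal(N)
inp_b = inp_a + 0.1 * rng.standard_normal(N)   # two nearby places

for g in (0.1, 2.5):
    ra, rb = run(inp_a, g), run(inp_b, g)
    print(f"g={g}: corr={np.corrcoef(ra, rb)[0, 1]:.2f}")
# Expect high correlation at low gain (place-like code) and much lower
# correlation at high gain (barcode-like, decorrelated states).
```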
March 24, 2025 at 7:46 PM
We were inspired by @selmaan.bsky.social and Emily Mackevicius' data on neural activity in the hippocampus of food-caching birds during a memory task. Cache events are encoded by barcode activity: sparse, uncorrelated patterns. Barcode and place activity coexist in the same population!
March 24, 2025 at 7:46 PM