Jascha Achterberg
@achterbrain.bsky.social
Neuroscience & AI at University of Oxford and University of Cambridge | Principles of efficient computations + learning in brains, AI, and silicon 🧠 NeuroAI | Gates Cambridge Scholar

www.jachterberg.com
*Hierarchical choice code*
Temporal cross-correlation analysis revealed hierarchical coding of problem structure across all regions: most regions were driven by the temporal similarity of time windows across choices, while vlPFC additionally responded to the repeated order of operations across choices.
May 29, 2025 at 9:55 AM
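As a rough illustration of the temporal cross-correlation idea above: correlate population vectors across pairs of time windows from two choices and look for structure in the resulting matrix. This is a minimal sketch with assumed variable names and shapes, not the paper's analysis code.

```python
import numpy as np

def temporal_cross_correlation(rates_a, rates_b):
    """Correlate population vectors across all pairs of time windows.

    rates_a, rates_b : (n_time_bins, n_neurons) firing rates for two choices.
    Returns an (n_time_bins, n_time_bins) matrix whose entry (i, j) is the
    Pearson correlation between the population vector of choice A at time i
    and of choice B at time j.
    """
    a = rates_a - rates_a.mean(axis=1, keepdims=True)
    b = rates_b - rates_b.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# With placeholder data: a diagonal ridge in this matrix would indicate that
# matching time windows of the two choices carry similar population codes.
rng = np.random.default_rng(0)
xcorr = temporal_cross_correlation(rng.normal(size=(50, 120)),
                                   rng.normal(size=(50, 120)))
print(xcorr.shape)  # (50, 50)
```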
*Move codes*
Next, analysing dynamics in the "Move space", we found that move coding develops first in vlPFC before reaching dPM; other regions showed weaker move coding. The "Move space" is generally orthogonal to the "Goal space" (except in dmPFC).
May 29, 2025 at 9:55 AM
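One standard way to ask whether two neural subspaces (such as the "Move space" and "Goal space" above) are orthogonal is via the principal angles between them. A minimal sketch, assuming each subspace is given as a matrix of basis vectors over neurons; the random bases and names are purely illustrative, not the paper's method.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Hypothetical bases: columns span the "Goal" and "Move" subspaces in neural
# state space (n_neurons x n_dims). In practice these could come from PCA or
# regression axes fit to condition-averaged activity.
rng = np.random.default_rng(1)
goal_basis = np.linalg.qr(rng.normal(size=(120, 3)))[0]
move_basis = np.linalg.qr(rng.normal(size=(120, 3)))[0]

angles = subspace_angles(goal_basis, move_basis)  # principal angles (radians)
print(np.degrees(angles))  # values near 90 suggest near-orthogonal subspaces
```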
*Goal and location codes*
We projected neural activity into the "Goal space" & measured distances between projections grouped by current position vs. goal. We saw regional specialization: vlPFC driven by location; dmPFC more driven by goal (maintained throughout trial); dPM & I/O with mixed code.
May 29, 2025 at 9:55 AM
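A hedged sketch of the general recipe described above: fit a goal-related subspace, project trials into it, and compare how far apart the projections sit when grouped by goal versus by current position. The SVD-based subspace fit and all variable names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist

def fit_goal_space(trial_rates, goal_labels, n_dims=3):
    """Fit a 'Goal space': top right singular vectors of goal-averaged activity."""
    means = np.array([trial_rates[goal_labels == g].mean(axis=0)
                      for g in np.unique(goal_labels)])
    means -= means.mean(axis=0)
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    return vt[:n_dims].T                        # (n_neurons, n_dims) basis

def grouped_centroid_spread(trial_rates, labels, basis):
    """Project trials onto a basis and return the mean pairwise distance
    between the label-grouped centroids of the projections."""
    proj = trial_rates @ basis
    centroids = np.array([proj[labels == g].mean(axis=0)
                          for g in np.unique(labels)])
    return pdist(centroids).mean()

# Usage idea: fit the goal space once per region, then compare
# grouped_centroid_spread(rates, goal_labels, basis) with
# grouped_centroid_spread(rates, position_labels, basis) to ask whether a
# region's goal subspace is driven more by the goal or by current location.
```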
*Analysing neural subspaces*
To understand frontal cortex computations, we identified neural subspaces in relation to key variables and studied the population dynamics over the duration of the trial. For each region, we asked: Which variables drive the shape & dynamics of projections?
May 29, 2025 at 9:55 AM
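To make the subspace approach above concrete, here is a small illustrative helper (hypothetical names, not the authors' code) that projects time-resolved, condition-averaged activity onto a variable-related subspace so the resulting trajectories can be compared across regions and variables.

```python
import numpy as np

def condition_trajectories(rates, cond_labels, basis):
    """Time-resolved projections of condition-averaged activity.

    rates       : (n_trials, n_time_bins, n_neurons) firing rates
    cond_labels : (n_trials,) condition per trial (e.g. goal identity)
    basis       : (n_neurons, n_dims) subspace basis vectors
    Returns (n_conditions, n_time_bins, n_dims) trajectories, i.e. the
    population dynamics of each condition within that subspace.
    """
    conds = np.unique(cond_labels)
    avg = np.stack([rates[cond_labels == c].mean(axis=0) for c in conds])
    return avg @ basis

# "Which variables drive the shape & dynamics of projections?" then amounts to
# asking how far apart, and for how long, the trajectories of different
# conditions stay separated within each variable's subspace.
```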
*Recordings*
We recorded 1374 neurons from four frontal regions with semi-chronic microelectrode arrays: ventrolateral prefrontal (vlPFC), dorsal premotor (dPM), dorsomedial prefrontal (dmPFC), and insula/orbitofrontal (I/O) cortex. This wide coverage across frontal cortex in the same complex task makes this a unique dataset in monkey electrophysiology.
May 29, 2025 at 9:55 AM
*Routes through the maze*
Depending on available choice options, goal locations can be reached via 2-step or 4-step routes.
May 29, 2025 at 9:55 AM
*Task*
Monkeys solve a multi-step maze task which requires them to navigate a 2D grid from the start location to 1 of 4 goal locations, using saccades. Monkeys start at the center, need to remember the goal location (presented at the start of the trial), and navigate using presented choice options.
May 29, 2025 at 9:55 AM
Plenty of training programs now put a focus on Open Science. We find that our respondents already have strong experience with open-source tools and the open sharing of science!
November 8, 2024 at 8:31 AM
We do find that respondents' training needs are extremely heterogeneous: for both methods and theory, across both neuro and AI, there is some group of respondents who want to close that particular gap in their own education.
November 8, 2024 at 8:31 AM
What surprised us was that most respondents hoped to find a position that allows them to combine academia with industry, but only a few thought they were likely to achieve that goal. This might inform both how we plan courses and how we think about hiring in the future!
November 8, 2024 at 8:30 AM
There’s been some discussion in the community about whether NeuroAI should be about “Neuro to AI” transfer or vice versa. We find that both directions are topics of great interest!
November 8, 2024 at 8:30 AM
First of all, why are students motivated to work on the intersection of neuroscience and AI in the first place? Is it just a career move, based on AI being a hot topic? No! We find curiosity is much more important than career-focused drivers.
November 8, 2024 at 8:29 AM
Here Andrea Luppi and I + lots of collaborators (see below) survey current trainees at the intersection of AI / neuro to learn how trainees think they could best be supported! Some highlighted findings in the thread, with lots more in the paper:
November 8, 2024 at 8:29 AM
Last year Zador et al. called for training a new generation of researchers, equally at home in CompSci & #neuroscience, to accelerate our understanding of the nature of intelligence. But how can we get this interdisciplinary training right? 🧵
#MLsky #NeuroAI #CompNeuro

www.nature.com/articles/s41...
November 8, 2024 at 8:26 AM
Join our ARIA-funded project as a postdoc on brain-inspired computing 🤖🧠, at Imperial College London! Super exciting opportunity connecting both fundamental research and the creation of cutting-edge technologies!
#neuroscience #MLsky #NeuroAI #CompNeuro

www.imperial.ac.uk/jobs/search-...
September 9, 2024 at 11:25 AM
Starting next week I will be in the US 🇺🇸 until mid-August. First in New York, later in SF & Boston! If you want to meet to discuss (or know of a meet-up on) efficient AI, dynamic representations, NeuroAI, hardware accelerators, & neuromorphic computing, please reach out!

#neuroscience #MLsky #NeuroAI #CompNeuro
June 3, 2024 at 2:00 PM
PyTorch user guides offering advice for some very specific life situations --

nothing like a little hidden side comment to remind us all that this documentation and these guides are written by the community!
via pytorch.org/tutorials/ad...

#MLSky #compneuro #neuroAI #deeplearning
May 2, 2024 at 9:14 AM
Amazing looking preprint from Tim Buschman's group:

'Building compositional tasks with shared neural subspaces'
www.biorxiv.org/content/10.1...

#neuroscience #compneuro
February 1, 2024 at 11:08 AM
🪢 Lastly, we find that all these findings arise in unison in seRNNs, highlighting that this diverse set of seemingly unrelated brain features can result from a shared underlying optimisation process!
November 20, 2023 at 4:42 PM
⚡️ While the connectome of seRNNs is modular & the neurons are spatially organised by their code, we find that seRNNs still show an information-rich, mixed-selective code, which is also commonly found in frontal cortex. seRNNs achieve this using very efficient neuronal activations!
November 20, 2023 at 4:41 PM
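One simple, illustrative way to probe for mixed selectivity (not necessarily the analysis used in the paper) is to regress each unit's activity on several task variables and check how many units carry sizeable weights for more than one variable:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def selectivity_profile(activity, task_vars):
    """Per-unit regression weights for a set of task variables.

    activity  : (n_trials, n_units) unit activations in some time window
    task_vars : (n_trials, n_vars) task variables (e.g. stimulus, choice)
    Returns an (n_units, n_vars) coefficient matrix; units with sizeable
    weights on several columns are candidates for mixed selectivity.
    """
    return np.stack([
        LinearRegression().fit(task_vars, activity[:, u]).coef_
        for u in range(activity.shape[1])
    ])
```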
📡 To use our model system to jointly study structural and functional phenomena, we also analyse the code used by neurons in the network. Similar to brains, we find that information communication & processing in seRNNs is spatially structured!
November 20, 2023 at 4:41 PM
🕸️ Looking at the network structure more specifically, we see that seRNNs develop a highly modular connectome with strong small-world characteristics. These features are commonly found in biology and thought to guide efficient information processing!
November 20, 2023 at 4:40 PM
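As an illustration of how such connectome features can be quantified, here is a minimal sketch using networkx; the thresholding and the specific modularity and small-world measures are assumptions on my part, not necessarily the paper's exact metrics.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def connectome_stats(weights, threshold=0.1):
    """Binarise a weight matrix and compute modularity and small-worldness.

    weights   : (n, n) recurrent weight matrix
    threshold : keep edges with |w| above this value (illustrative choice)
    Assumes the thresholded graph is connected (required for the sigma measure).
    """
    adj = (np.abs(weights) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    parts = community.greedy_modularity_communities(g)
    q = community.modularity(g, parts)      # modularity of the partition
    sigma = nx.sigma(g, niter=5, nrand=5)   # > 1 suggests small-worldness
    return q, sigma

# Usage idea: compare q and sigma for an seRNN against an L1-regularised
# control matched for sparsity, to ask whether spatial embedding adds structure.
```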
🧠 Even in the simplest observation we already see that an isolated seRNN develops connection patterns commonly observed in brains – here we see a relationship between connection strength and spatial distance, as well as clustering of connections in the connectome!
November 20, 2023 at 4:40 PM
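A small illustrative example (with random placeholder data) of relating connection strength to the Euclidean distance between units embedded in 3D space:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import pearsonr

# Placeholder data: random 3D coordinates and a random recurrent weight matrix.
rng = np.random.default_rng(0)
coords = rng.uniform(size=(100, 3))       # 3D position assigned to each unit
weights = rng.normal(size=(100, 100))     # recurrent weights

dist = cdist(coords, coords)              # pairwise Euclidean distances
mask = ~np.eye(100, dtype=bool)           # ignore self-connections
r, p = pearsonr(dist[mask], np.abs(weights)[mask])
print(f"distance vs |weight| correlation: r={r:.3f}, p={p:.3g}")
# In a spatially embedded network one would expect r < 0, i.e. strong
# connections concentrated between nearby units.
```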
🔧 All our tests of spatially embedded RNNs (we call them seRNNs) are made in comparison with standard L1 regularised models, matched for sparsity. We train a large batch of networks with varying regularisation strengths.
November 20, 2023 at 4:40 PM
🌐 Starting from standard RNNs optimised for a working memory task, we create a spatial embedding by regularising them by distance in 3D Euclidean space & nudging them to prioritise highly communicative connections (topologically; lots of theory on this in the paper!)
November 20, 2023 at 4:39 PM
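Based on the description above, a rough sketch of what an seRNN-style regulariser could look like in PyTorch: an L1-style penalty on recurrent weights, weighted by the 3D Euclidean distance between units and by a communicability term. The exact functional form, normalisation, and use of detach here are my assumptions; see the paper for the actual formulation.

```python
import torch

def spatial_communicability_penalty(w_rec, coords, gamma=1e-3):
    """Illustrative seRNN-style regulariser: penalise recurrent weights by the
    3D Euclidean distance between units and by how communicative each
    connection is (via a communicability matrix of the absolute weights).

    w_rec  : (n, n) recurrent weight matrix (torch tensor)
    coords : (n, 3) fixed 3D positions assigned to the units
    gamma  : overall regularisation strength (swept across training batches)
    """
    dist = torch.cdist(coords, coords)                    # (n, n) distances
    w_abs = torch.abs(w_rec)
    # Communicability: weighted sum over all paths, via the matrix exponential
    # of the degree-normalised absolute weight matrix.
    d = torch.diag(w_abs.sum(dim=1).clamp(min=1e-8).pow(-0.5))
    comm = torch.matrix_exp(d @ w_abs @ d)
    # Detaching the communicability term treats it as a weighting rather than
    # a quantity to optimise directly (one possible design choice).
    return gamma * (w_abs * dist * comm.detach()).sum()

# During training this penalty would simply be added to the task loss, e.g.
# loss = task_loss + spatial_communicability_penalty(rnn.w_rec, coords)
```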