Tom George
@tomnotgeorge.bsky.social
Neuroscience/ML PhD @UCL
• NeuroAI, navigation, hippocampus, ...
• Open-source software tools for science (https://github.com/RatInABox-Lab/RatInABox)
• Co-organiser of TReND CaMinA summer school

🔎👀 for a postdoc position…
woah, are my retinas working, or is that one character away from #RatInABox 👀...

@dlevenstein.bsky.social is right, could be time for a collab
October 6, 2025 at 2:53 PM
Hey Antonio, great list! Please could you add me too? I consider myself firmly in this space 😁
scholar.google.com/citations?hl...

thanks!
Tom M George
PhD, University College London - Cited by 124 - Machine learning - Theoretical neuroscience
scholar.google.com
December 27, 2024 at 12:20 PM
That's great to hear, reach out if you run into any problems!
November 26, 2024 at 12:26 AM
Great question. Local optima will always be hard to identify. Ofc if you have a reason to believe behaviour really _isn't_ a good initialisation then you shouldn't use it.

You can always (and we already do) track the log-likelihood of held-out spikes. If this increases then things are looking good.
November 25, 2024 at 4:43 PM
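[For concreteness, a minimal sketch of what tracking that held-out log-likelihood could look like, assuming Poisson spiking. This is a generic check written for illustration, not the package's API, and the function name is made up.]

```python
import numpy as np
from scipy.special import gammaln

def heldout_log_likelihood(heldout_spikes, predicted_rates, dt=0.02):
    """Poisson log-likelihood of held-out spike counts given the model's
    predicted firing rates (both arrays shaped (n_cells, n_timebins)).
    If this increases across iterations, the optimisation is helping."""
    lam = np.clip(predicted_rates * dt, 1e-9, None)  # expected counts per bin
    return np.sum(heldout_spikes * np.log(lam) - lam - gammaln(heldout_spikes + 1))
```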
the sky is bluer*
November 25, 2024 at 1:56 PM
you were right though....the grass is greener over here ;)
November 25, 2024 at 1:53 PM
At the risk of rambling I'll end the thread here and perhaps do a deeper dive in the future. Give it a read (or better, try it on your data) and let us know your thoughts!

tomge.org/papers/simpl/

21/21
SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour
An efficient technique for optimising tuning curves starting from behaviour by iteratively refitting the tuning curves and redecoding the latent variables.
tomge.org
November 25, 2024 at 1:39 PM
This isn’t cheating, behaviour has always been there for the taking and we should exploit it (many techniques specialise in joint behavioural-neural analysis). If we ignore behaviour SIMPL still works but the latent space isn’t smooth and “identifiable”...certainly something to consider.

20/21
November 25, 2024 at 1:39 PM
Initialising at behaviour is a powerful trick here. In many regions (e.g., but not limited to, hippocampus 👀), a behavioural correlate (position👀) exists which is VERY CLOSE to the true latent. Starting right next to the global maximum helps make optimisation straightforward.

19/21
November 25, 2024 at 1:39 PM
These non-local dynamics aren’t a new discovery by any means but this is, in our opinion, the correct and quickest way to find them.

18/21
November 25, 2024 at 1:39 PM
And there’s cool stuff in the optimised latent too. It mostly tracks behaviour (hippocampus is still mostly a cognitive map) but makes occasional big jumps, as though the animal is contemplating another location in the environment.

17/21
November 25, 2024 at 1:39 PM
Dubious analogy: Using behaviour alone to study neural representations (status quo for hippocampus) is like wearing mittens and trying to figure out the shape of a delicate statue in the dark. Everything is blurred.

16/21
November 25, 2024 at 1:39 PM
The old paradigm of “just smooth spikes against position” is wrong! Those aren’t tuning curves in a causal sense…they’re just smoothed spikes. These “real” tuning curves (the output of an algorithm like SIMPL) are the ones we should be analysing/theorising about.

15/21
November 25, 2024 at 1:39 PM
It’s quite a sizeable effect. The median place cell has 23% more place fields...the median place field is 34% smaller and has a firing rate 45% higher. It’s hard to overstate this result…

14/21
November 25, 2024 at 1:39 PM
When applied to a similarly large (but now real) hippocampal dataset, SIMPL optimises the tuning curves. “Real” place fields, it turns out, are much smaller, sharper, more numerous and more uniformly distributed than previously thought.

13/21
November 25, 2024 at 1:39 PM
SIMPL outperforms CEBRA — a contemporary, more general-purpose, neural-net-based technique — in terms of performance and compute-time. It’s over 30x faster. It also beats pi-VAE and GPLVM.

12/21
November 25, 2024 at 1:39 PM
Let’s test SIMPL: We make artificial grid cell data and add noise to the position (latent) variable. This noise blurs the grid fields out of recognition. Apply SIMPL and you recover a perfect estimate of the true trajectory and grid fields in a handful of compute-seconds.

11/21
November 25, 2024 at 1:39 PM
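[Roughly what that synthetic test looks like, as a toy reconstruction rather than the paper's actual data generator: spikes are driven by the true position, but the "behaviour" you would naively bin against is a noisy copy of it.]

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 20_000, 0.02                                      # time bins, bin width (s)
true_pos = np.cumsum(rng.normal(0, 0.01, T)) % 1.0        # true latent on a 1 m circular track
noisy_pos = (true_pos + rng.normal(0, 0.05, T)) % 1.0     # "behaviour" = latent + noise

def grid_rate(x, phase, period=0.2, peak=20.0):
    """Toy periodic ('grid-like') tuning curve, in Hz."""
    return peak * np.exp(np.cos(2 * np.pi * (x - phase) / period) - 1.0)

phases = rng.uniform(0, 0.2, size=10)
rates = np.stack([grid_rate(true_pos, p) for p in phases])  # (n_cells, T), driven by the TRUE latent
spikes = rng.poisson(rates * dt)

# Binning `spikes` against `noisy_pos` blurs the grid fields; running the
# refit/redecode loop initialised at `noisy_pos` should sharpen them again.
```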
I think this gif explains it well. The animal is "thinking" of the green location but is located at the yellow one. Spikes plotted against green give sharp grid fields; plotted against yellow they are blurred.

In the brain this discrepancy will be caused by replay, planning, uncertainty and more.

10/21
November 25, 2024 at 1:39 PM
behaviour =/= latent.

This is obvious in non-navigational regions. But for HPC/MEC/etc. it’s definitely often overlooked…behaviour alone explains the spikes SO well (read: grid cells look pretty) it’s common to just stop there. But that leaves some error.

9/21
November 25, 2024 at 1:39 PM
In order to know the “true” tuning curves we need to know the “true” latent which passed through those curves to generate the spikes, i.e. what the animal was thinking of, not what it was doing. This latent, of course, is often close to a behavioural readout such as position.

8/21
November 25, 2024 at 1:39 PM
So what’s the idea inspiring this? Basically, tuning curves (defined as plotting spikes against behaviour) aren’t the brain’s “real” tuning curves in any causal sense. But often we analyse and theorise about them as though they are. That's a problem.

7/21
November 25, 2024 at 1:39 PM
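[For anyone unfamiliar, "plotting spikes against behaviour" computationally amounts to something like the generic rate map below: a sketch of the status-quo definition, not code from the paper.]

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def behavioural_tuning_curve(spike_counts, position, n_bins=50, sigma=2.0):
    """Status-quo 'tuning curve': smoothed spike counts divided by smoothed
    occupancy, binned against the measured behaviour (e.g. position)."""
    occupancy, edges = np.histogram(position, bins=n_bins)
    per_bin, _ = np.histogram(position, bins=edges, weights=spike_counts)
    rate = gaussian_filter1d(per_bin, sigma) / (gaussian_filter1d(occupancy.astype(float), sigma) + 1e-9)
    return rate, edges
```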