Martin Hebart
@martinhebart.bsky.social
Proud dad, Professor of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org
our lab 👉 https://hebartlab.com
Reposted by Martin Hebart
Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!
December 21, 2025 at 12:26 PM
Reposted by Martin Hebart
Legit super excited about this work coming out. My amazing doctoral student @ben.graphics has been working on an idea to use physically based differentiable rendering (PBDR) to probe visual understanding. Here, we generate physically-grounded metamers for vision models. 1/4

arxiv.org/abs/2512.12307
December 17, 2025 at 9:17 PM
Ok, this is nuts. Once you see it you cannot unsee it. Do you see it?
(OP @drgbuckingham.bsky.social )
December 16, 2025 at 7:39 PM
Reposted by Martin Hebart
A “universal” pattern of cortical brain oscillations may be less ubiquitous than previously proposed.

By @claudia-lopez.bsky.social

#neuroskyence

www.thetransmitter.org/brain-waves/...
Dispute erupts over universal cortical brain-wave claim
The debate highlights opposing views on how the cortex transmits information.
December 12, 2025 at 2:20 PM
Reposted by Martin Hebart
🚨 🆕 Preprint 🚨

How does the brain represent natural images?

Using MEG + multivariate analysis, we disentangle contributions of retinotopy, spatial frequency, shape, and texture

Together, our results reveal how visual features jointly and dynamically support human object recognition.

link 👇
December 13, 2025 at 5:39 PM
Reposted by Martin Hebart
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision and one in language. Please repost!
December 12, 2025 at 6:18 PM
Reposted by Martin Hebart
@cimcyc.bsky.social is hiring!

SIX postdoc positions are coming up to dive into collaborative projects bridging fields across psychological science.

Amazing opportunity to boost a postdoc career in a cutting-edge research center with outstanding teams!
👇🏽
cimcyc.ugr.es/en/informati...
Bridging Fields in Psychology and Neuroscience with Multidisciplinary Collaboration
Strengthening collaboration to encourage novel research connections between scientific areas is central to the CIMCYC - María de Maeztu Unit of Excellence strategy. To encourage this, the CIMCYC has ...
December 9, 2025 at 12:44 PM
Very thoughtful thread on why it matters to compute the right noise ceiling & why communication is so important to prevent this issue from spreading. Kudos to Sam for being so transparent!

In brief (see the short simulation sketch below this post):
NC for the best achievable R^2 == the data reliability, expressed as a correlation r
NC for the best achievable r == sqrt(reliability)
If you calculated noise ceilings (NC) based on split-half reliability - e.g. to compare models - this one is important!
Seems many published studies miscalculated it, overestimating model performance. First, let's make this crystal clear:

NC = 2*r / (1+r)

where r is split-half correlation.
New preprint w/ Malin Styrnal & @martinhebart.bsky.social

Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.

osf.io/preprints/ps...
December 8, 2025 at 2:07 PM
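A minimal simulation sketch of the relationships summarized in the post above (my own toy example in Python, not taken from the preprint; the Gaussian noise model and all variable names are assumptions): the split-half correlation is corrected with the Spearman-Brown formula to estimate the reliability of the fully averaged data, and a model that predicts the true signal perfectly reaches roughly that reliability in R^2 but only its square root in correlation.

# Toy simulation (assumes Gaussian signal and measurement noise) illustrating
# why the noise ceiling differs for R^2-based and correlation-based model scores.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_reps, noise_sd = 200, 20, 1.0

signal = rng.normal(size=n_stim)  # true response per stimulus
reps = signal[:, None] + rng.normal(scale=noise_sd, size=(n_stim, n_reps))

half1 = reps[:, ::2].mean(axis=1)   # average of even-indexed repetitions
half2 = reps[:, 1::2].mean(axis=1)  # average of odd-indexed repetitions
r_split = np.corrcoef(half1, half2)[0, 1]  # split-half correlation
reliability = 2 * r_split / (1 + r_split)  # Spearman-Brown: NC = 2*r / (1+r)

data = reps.mean(axis=1)                   # all repetitions averaged
r_model = np.corrcoef(signal, data)[0, 1]  # "perfect" model = the true signal

print(f"reliability (ceiling for best R^2):     {reliability:.3f}")
print(f"sqrt(reliability) (ceiling for best r): {np.sqrt(reliability):.3f}")
print(f"perfect model, r:   {r_model:.3f}")       # ~ sqrt(reliability)
print(f"perfect model, R^2: {r_model ** 2:.3f}")  # ~ reliability

Even a perfect model's correlation lands near sqrt(reliability) rather than near the reliability itself, which is why comparing a correlation-based model score against the reliability instead of its square root can make a model look closer to the ceiling than it really is.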
We recently stumbled upon a surprisingly common misunderstanding in computing noise ceilings that can be quite consequential. So if you care about noise ceilings, please check out Sander’s thread and our preprint! 👇
New preprint w/ Malin Styrnal & @martinhebart.bsky.social

Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.

osf.io/preprints/ps...
December 5, 2025 at 8:39 AM
Reposted by Martin Hebart
New preprint w/ Malin Styrnal & @martinhebart.bsky.social

Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.

osf.io/preprints/ps...
December 4, 2025 at 6:53 PM
Reposted by Martin Hebart
Super happy to announce that our Research Training Group "PIMON" is funded by the @dfg.de! Starting in October, we will have exciting opportunities for PhD students who want to explore object and material perception & interaction in Gießen @jlugiessen.bsky.social! Just look at this amazing team!
December 3, 2025 at 12:46 PM
Reposted by Martin Hebart
New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...

Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.

PDF: rdcu.be/eSKYI
Top-down and bottom-up neuroscience as collections of practices - Nature Reviews Neuroscience
December 2, 2025 at 3:13 PM
Reposted by Martin Hebart
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
December 1, 2025 at 11:26 AM
Really excited to see this preprint out! Fernanda did an amazing job demonstrating how you can accurately predict retinotopy from T1w scans alone. This is important for several reasons: 1/4
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
December 1, 2025 at 1:51 PM
Reposted by Martin Hebart
We’d love your feedback on BERG (github.com/gifale95/BERG): pretrained encoding models + a Python toolkit for generating in silico neural responses for in silico experimentation. Your input will make BERG more useful and reliable!

forms.gle/pybrqcaqdso2...

#NeuroAI #CompNeuro #neuroscience #AI
The Brain Encoding Response Generator (BERG) survey
Thank you for taking part in this survey aimed at (anonymously) collecting your thoughts and suggestions on a new resource called the Brain Encoding Response Generator (BERG; https://github.com/gifale...
November 24, 2025 at 3:34 PM
It’s not too late to apply for the PhD position in my lab! Please send your documents (cover letter, CV, transcripts, names of references) through the official application platform by Nov 25!
Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.

The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.

Please apply here by Nov 25:
www.uni-giessen.de/de/ueber-uns...
November 24, 2025 at 8:55 AM
Huge congrats to Philipp Kaniuth for successfully defending his PhD summa cum laude (with distinction) “on the measurement of representations and similarity”! Phil was my first PhD candidate, so it’s a particularly special event for me, and he can be very proud of his achievements!
November 20, 2025 at 6:16 PM
Noise ceilings are really useful: You can estimate the reliability of your data and get an index of how well your model can possibly perform given the noise in the data.

But, contrary to what you may think, noise ceilings do not provide an absolute index of data quality (see the toy illustration below this post).

Let's dive into why. 🧵
November 7, 2025 at 2:58 PM
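To make the opening point concrete, here is a toy illustration (my own sketch, not taken from the thread; the Gaussian noise model and variable names are assumptions): with identical measurement noise, the split-half reliability, and hence the noise ceiling, differs substantially depending on how much signal variance the sampled stimuli span, so the ceiling reflects the signal-to-noise ratio of a particular design rather than an absolute measure of data quality.

# Toy illustration (assumes Gaussian signal and noise): the same measurement noise
# yields very different noise ceilings depending on the signal variance spanned by
# the stimulus set, so the ceiling is a relative, design-dependent quantity.
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_reps, noise_sd = 200, 10, 1.0

def split_half_ceiling(signal_sd):
    signal = rng.normal(scale=signal_sd, size=n_stim)  # true responses
    reps = signal[:, None] + rng.normal(scale=noise_sd, size=(n_stim, n_reps))
    half1 = reps[:, ::2].mean(axis=1)
    half2 = reps[:, 1::2].mean(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]  # split-half correlation
    return 2 * r / (1 + r)               # Spearman-Brown corrected reliability

print(split_half_ceiling(signal_sd=2.0))  # diverse stimulus set: ceiling close to 1
print(split_half_ceiling(signal_sd=0.5))  # very similar stimuli: much lower ceiling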
Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.

The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.

Please apply here by Nov 25:
www.uni-giessen.de/de/ueber-uns...
November 4, 2025 at 1:57 PM
Reposted by Martin Hebart
*Neurocomputational architecture for syntax/learning*

Neuroscience & Philo Salon: join our discussion with @elliot-murphy.bsky.social with commentaries by @wmatchin.bsky.social and @sandervanbree.bsky.social
Nov 5, 10:30 am eastern US
Register:
umd.zoom.us/my/luizpesso...
#neuroskyence
October 27, 2025 at 4:52 PM
Reposted by Martin Hebart
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 3, 2025 at 3:17 PM
Reposted by Martin Hebart
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
@martinhebart.bsky.social
www.nature.com/articles/s44...
Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions - Communications Psychology
October 27, 2025 at 9:09 AM
Reposted by Martin Hebart
🚨Preprint: Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe

1/8: How do human neurons encode meaning?
In this work, led by Katharina Karkowski, we recorded hundreds of human MTL neurons to study semantic coding in the human brain:

doi.org/10.1101/2025...
The Medial Temporal Lobe (MTL) is key to human cognition, supporting memory, emotional processing, navigation, and semantic coding. Rare direct human MTL recordings revealed concept cells, which were ...
October 27, 2025 at 3:32 PM
I’m really excited to be part of this collaboration that started with a chat at the poster of @treber.bsky.social and @humansingleneuron.bsky.social at SfN in 2018 (!) Katharina and everyone involved did a really fantastic job at using adaptive sampling to learn about semantic tuning in human MTL.
🚨Preprint: Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe

1/8: How do human neurons encode meaning?
In this work, led by Katharina Karkowski, we recorded hundreds of human MTL neurons to study semantic coding in the human brain:

doi.org/10.1101/2025...
The Medial Temporal Lobe (MTL) is key to human cognition, supporting memory, emotional processing, navigation, and semantic coding. Rare direct human MTL recordings revealed concept cells, which were ...
October 27, 2025 at 7:26 PM
“Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions”
New work led by Andre Bockes and Angelika Lingnau - with some small support from me - on dimensions underlying the mental representation of dynamic human actions.

www.nature.com/articles/s44...
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
October 27, 2025 at 7:23 PM