Martin Hebart
@martinhebart.bsky.social
Proud dad, Professor of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org
our lab 👉 https://hebartlab.com
Noise ceilings are really useful: You can estimate the reliability of your data and get an index of how well your model can possibly perform given the noise in the data.
But, contrary to what you may think, noise ceilings do not provide an absolute index of data quality.
Let's dive into why. 🧵
November 7, 2025 at 2:58 PM
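The thread itself is not captured here, but one common way to estimate a noise ceiling is split-half reliability with a Spearman-Brown correction. Below is a minimal sketch on simulated toy data; all numbers and variable names are illustrative assumptions, not taken from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n_reps repeated measurements of n_items responses,
# each = shared signal + independent measurement noise.
n_reps, n_items = 10, 200
signal = rng.normal(size=n_items)
data = signal + rng.normal(scale=1.0, size=(n_reps, n_items))

# Split-half estimate: correlate the means of two halves of the
# repetitions, then Spearman-Brown-correct up to the full data.
half = n_reps // 2
r_half = np.corrcoef(data[:half].mean(0), data[half:].mean(0))[0, 1]
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown, doubling the data

# r_full approximates an upper bound on the correlation any model can
# reach against the averaged data, given this noise level.
print(r_full)
```

Note that this bound is relative to the noise in *this* dataset; as the post argues, it says nothing absolute about data quality across datasets.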
Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.
The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.
Please apply here until Nov 25:
www.uni-giessen.de/de/ueber-uns...
November 4, 2025 at 1:57 PM
Reposted by Martin Hebart
*Neurocomputational architecture for syntax/learning*
Neuroscience & Philo Salon: join our discussion with @elliot-murphy.bsky.social with commentaries by @wmatchin.bsky.social and @sandervanbree.bsky.social
Nov 5, 10:30 am eastern US
Register:
umd.zoom.us/my/luizpesso...
#neuroskyence
October 27, 2025 at 4:52 PM
Reposted by Martin Hebart
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
November 3, 2025 at 3:17 PM
Reposted by Martin Hebart
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
@martinhebart.bsky.social
www.nature.com/articles/s44...
October 27, 2025 at 9:09 AM
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
@martinhebart.bsky.social
www.nature.com/articles/s44...
@martinhebart.bsky.social
www.nature.com/articles/s44...
Reposted by Martin Hebart
🚨Preprint: Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe
1/8: How do human neurons encode meaning?
In this work, led by Katharina Karkowski, we recorded hundreds of human MTL neurons to study semantic coding in the human brain:
doi.org/10.1101/2025...
October 27, 2025 at 3:32 PM
I’m really excited to be part of this collaboration that started with a chat at the poster of @treber.bsky.social and @humansingleneuron.bsky.social at SfN in 2018 (!) Katharina and everyone involved did a really fantastic job at using adaptive sampling to learn about semantic tuning in human MTL.
October 27, 2025 at 7:26 PM
“Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions”
New work led by Andre Bockes and Angelika Lingnau - with some small support from me - on dimensions underlying the mental representation of dynamic human actions.
www.nature.com/articles/s44...
October 27, 2025 at 7:23 PM
Reposted by Martin Hebart
1/Preprint Alert🔔: Across two experiments plus a computational model, we show the visual system compresses complex scenes into summary statistics that can guide behavior without conscious access to the task-defining features. We term this the Ensemble Blindsight effect.
September 28, 2025 at 7:49 PM
Reposted by Martin Hebart
🚨Our preprint is online!🚨
www.biorxiv.org/content/10.1...
How do #dopamine neurons perform the key calculations in reinforcement #learning?
Read on to find out more! 🧵
September 19, 2025 at 1:05 PM
Reposted by Martin Hebart
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.
Paper: arxiv.org/pdf/2509.08825
September 12, 2025 at 10:33 AM
I wanted to add some thoughts to this excellent blog post, not detailed, maybe wrong, maybe useful:
1. Unique variance is easy to interpret as a lower bound of what a variable explains (the upper bound being either what the variable explains alone or what the other variables cannot explain uniquely)
Variance partitioning is used to quantify the overlap of two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog post to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
September 12, 2025 at 1:57 PM
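The lower-bound point above can be illustrated with a small toy regression. This is a hypothetical sketch (all data and names invented here), where variance partitioning is done simply as R² differences between full and reduced ordinary least-squares models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two correlated predictors and an outcome driven by both (toy data).
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)
y = x1 + x2 + rng.normal(size=n)

def r2(predictors, y):
    # R^2 of an OLS fit with intercept.
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r2([x1, x2], y)      # both predictors
r2_x2_only = r2([x2], y)       # reduced model without x1
r2_x1_only = r2([x1], y)       # x1 alone

# Unique variance of x1: what the full model explains beyond x2 alone.
unique_x1 = r2_full - r2_x2_only
print(unique_x1, r2_x1_only)
```

Because x1 and x2 share variance, `unique_x1` comes out much smaller than `r2_x1_only`: the unique variance is a lower bound on what x1 explains, while what it explains alone is an upper bound (absent suppressor effects).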
Reposted by Martin Hebart
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
📊 Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
September 8, 2025 at 6:32 PM
Reposted by Martin Hebart
We’re hiring! 🥳fully funded PhD and postdoc positions in human cognitive neuroscience @mpicybernetics.bsky.social
M/EEG and brain stimulation methods to study timing/ prediction/ attention/ oscillations
tinyurl.com/5dku4du9
@timingresforum.bsky.social @gtc-tuebingen.bsky.social
please repost 🙏
August 20, 2025 at 9:10 AM
Pretty amazing how you can learn something in books and in school and it explains everything around you - how the planets move, etc. But then it’s just next level when you can see it is real, with your own eyes.
September 7, 2025 at 7:25 PM
Reposted by Martin Hebart
Huge congratulations 🥳 to @sofievalk.bsky.social & @reznikdan.bsky.social for receiving ERC Starting Grants 2025! We're proud of you!
Learn more about their projects: www.cbs.mpg.de/2397348/2025...
September 5, 2025 at 2:55 PM
Reposted by Martin Hebart
🚀Excited to share our project: Canonical Representational Mapping for Cognitive Neuroscience. @schottdorflab.bsky.social and I propose a novel multivariate method to isolate neural representations aligned with specific cognitive hypotheses 🧵 https://www.biorxiv.org/content/10.1101/2025.09.01.673485v1
September 5, 2025 at 4:18 PM
Reposted by Martin Hebart
Launched in 2023, Imaging Neuroscience is now firmly established, with full indexing (PubMed, etc.) and 700 papers to date.
We're very happy to announce that we are able to reduce the APC to $1400.
Huge thanks to all authors, reviewers, editorial team+board, and MIT Press.
September 5, 2025 at 2:59 AM
Today I had a curious encounter with my 4-yo son. He told me he discovered that his Batman action figure could switch the Batman logo to something else. He showed me, touched its arm, shook it and said: “there, it changed.”
The thing is: the logo is fixed and cannot change. So what had happened?
September 4, 2025 at 3:40 PM
Reposted by Martin Hebart
Our target discussion article out in Cognitive Neuroscience! It will be followed by peer commentary and our responses. If you would like to write a commentary, please reach out to the journal! 1/18 www.tandfonline.com/doi/full/10.... @cibaker.bsky.social @susanwardle.bsky.social
August 29, 2025 at 6:43 PM
Really happy that PhD candidate Malin Styrnal in our lab won both the Best Student Poster Award *and* the Poster of the Day Award at #ECVP2025 for her presentation "The similarity of similarity tasks: Comparing eight different measures of similarity"! (unfortunately no photo with her!)
Go Malin! 🥳
August 29, 2025 at 3:21 PM
Reposted by Martin Hebart
Last day of #ECVP tomorrow (Thursday). And at 8.30 I’ll be introducing what promises to be a coherent and lively symposium on how visual representations are held in mind. In the Audimax, with Maria Servetnik, @tbchristophel.bsky.social, Clay Curtis, and @bradpostle.bsky.social
August 27, 2025 at 8:35 PM
Reposted by Martin Hebart
The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
August 15, 2025 at 4:43 PM
Very much looking forward to #CCN2025! Would love to see you at our lab's talks and posters, and meet me at the panel discussion in the Algonauts session on Wednesday!
August 11, 2025 at 10:52 AM
Reposted by Martin Hebart
looking forward to seeing everyone at #CCN2025! here's a snapshot of the work from my lab that we'll be presenting on speech neuroscience 🧠 ✨
August 10, 2025 at 6:09 PM