Alexander Huth
@alexanderhuth.bsky.social
Interested in how & what the brain computes. Professor in Neuroscience & Statistics, UC Berkeley
Cortical weight maps were also reasonably correlated between ECoG and fMRI data, at least for the dimensions well-captured in the ECoG coverage.
August 18, 2025 at 6:34 PM
Finally, we tested whether the same interpretable embeddings could also be used to model ECoG data from Nima Mesgarani's lab. Despite the fact that our features are less well-localized in time than LLM embeddings, this still works quite well!
August 18, 2025 at 6:34 PM
The model and experts were well-aligned, but there were some surprises, like "Does the input include technical or specialized terminology?" (32), which was much more important than expected.
August 18, 2025 at 6:34 PM
"Does the input include dialogue?" (27) has high weights in a smattering of small regions in temporal cortex. And "Does the input contain a negation?" (35) has high weights in anterior temporal lobe and a few prefrontal areas. I think there's a lot of drilling-down we can do here.
August 18, 2025 at 6:34 PM
The fact that each dimension in the embedding corresponds to a specific question means that the encoding model weights are interpretable right out of the box. "Does the input describe a visual experience?" has high weight all along the boundary of visual cortex, for example.
August 18, 2025 at 6:34 PM
But the wilder thing is how we get the embeddings: by just asking LLMs questions. Each theory is cast as a yes/no question. We then have GPT-4 answer each question about each 10-gram in our natural language dataset. We did this for ~600 theories/questions.
August 18, 2025 at 6:34 PM
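A toy sketch of the question-based embedding idea. The real pipeline prompts GPT-4 with each yes/no question about each 10-gram; here an invented keyword heuristic stands in for the LLM call, and the questions and keywords are purely illustrative:

```python
import numpy as np

questions = [
    "Does the input describe a visual experience?",
    "Does the input include dialogue?",
    "Does the input contain a negation?",
]

# Hypothetical keyword lists standing in for prompting an LLM.
question_keywords = {
    questions[0]: {"bright", "red", "saw"},
    questions[1]: {"said", "replied"},
    questions[2]: {"not", "never"},
}

def ask_yes_no(question: str, ngram: str) -> float:
    """Stand-in for a GPT-4 yes/no judgment about one text snippet."""
    words = ngram.lower().split()
    return float(any(w in words for w in question_keywords[question]))

def embed(ngram: str) -> np.ndarray:
    """One binary dimension per question -> an interpretable embedding."""
    return np.array([ask_yes_no(q, ngram) for q in questions])

vec = embed("she said it was not a bright red light")
```

Because dimension k is literally the answer to question k, any regression weight on dimension k maps straight back onto that question.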
And it works REALLY well! Prediction performance for encoding models is on a par with uninterpretable Llama3 embeddings! Even with just 35 dimensions!!! I find this fairly wild.
August 18, 2025 at 6:34 PM
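For context, these comparisons are typically run as voxelwise ridge regressions from stimulus features to brain responses. A minimal simulated sketch (all shapes and numbers invented, and fit on training data for brevity where real analyses use held-out data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated voxelwise encoding model: T timepoints, D = 35 question
# dimensions, V voxels.
T, D, V = 200, 35, 10
X = rng.standard_normal((T, D))                       # stimulus features
W_true = rng.standard_normal((D, V))                  # "true" weight maps
Y = X @ W_true + 0.1 * rng.standard_normal((T, V))    # simulated responses

# Ridge regression, the standard fit for encoding models
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)

# Prediction performance = per-voxel correlation
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(V)]
```

Each row `W[d]` is then the cortical weight map for question `d`, which is what makes the 35-dimensional version so easy to read off.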
Our information-theoretic approach, which relies heavily on LLMs to measure mutual information & entropy of text, also explains memory for gist. For what is gist but the information that is shared across an entire narrative? We argue that low sampling rates lead to gist-like recall.
August 1, 2025 at 4:54 PM
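A minimal illustration of the kind of quantity involved (toy probabilities, not the paper's actual estimator): information a word shares with the wider narrative can be cast as the drop in its surprisal when the full context, rather than just the local context, is conditioned on.

```python
import math

# Toy next-word probabilities standing in for LLM log-prob queries.
# p_local: probability given only nearby words; p_global: probability
# given the entire narrative so far. Numbers are invented.
p_local = {"storm": 0.05, "the": 0.40}
p_global = {"storm": 0.30, "the": 0.40}

def surprisal(p: float) -> float:
    return -math.log2(p)

def shared_info(word: str) -> float:
    """Bits of a word's information shared with the wider narrative:
    how much the full context reduces its surprisal."""
    return surprisal(p_local[word]) - surprisal(p_global[word])
```

Here "storm" is hard to guess locally but easy from the narrative's gist, so it carries about 2.6 bits of shared information, while "the" carries none; that shared component is exactly what a gist-like memory would retain.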
Excitingly, our model also predicts and explains why we have better memory for event boundaries: boundaries tend to have more shared information! (There's also a really interesting effect of speech rate around event boundaries, more on that in a moment...)
August 1, 2025 at 4:54 PM
Using data from a behavioral experiment in which participants listened to stories and then recalled them, we found that our model, Constant Rate Uniform Information Sampling for Encoding (CRUISE), explains variation in memory MUCH better than surprisal or other alternative models.
August 1, 2025 at 4:54 PM
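A cartoon of the constant-rate sampling idea, reconstructed from the name alone and certainly not the paper's actual model: walk through the story, accumulate each word's information content, and take a memory "sample" every time the running total crosses a fixed interval.

```python
def cruise_sample(bits_per_word, bits_per_sample):
    """Indices of words that receive memory samples under a
    constant information-rate encoding scheme (toy version)."""
    encoded, total, next_tick = [], 0.0, bits_per_sample
    for i, bits in enumerate(bits_per_word):
        total += bits
        while total >= next_tick:   # constant rate in bits, not words
            encoded.append(i)       # word i gets (another) sample
            next_tick += bits_per_sample
    return encoded

# A lower sampling rate (bigger interval) keeps only the words carrying
# the most information -- recall collapses toward gist.
dense = cruise_sample([1, 3, 0.5, 2], bits_per_sample=2)
sparse = cruise_sample([1, 3, 0.5, 2], bits_per_sample=4)
```

In this toy run the sparse sampler encodes only the single most informative word, which is one way "low sampling rates lead to gist-like recall" could cash out.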
At the final SfN poster session Aditya Vaidya will present some new work he’s doing w/ me & @libertysays.bsky.social. It sounds a little nuts, but we’re using in silico experiments on fMRI models to replicate effects that really should only work in ECoG. Poster DD15 Wed PM.
November 15, 2023 at 4:03 PM
s/o to the NeurIPS AC who ignored this insane review and accepted our paper anyway 🫡
(Honestly I've never been hit with a straight up "fMRI isn't real" before, it's certainly interesting)
September 25, 2023 at 7:54 PM
Language centers of the brain, from Star Trek: Strange New Worlds
September 23, 2023 at 1:44 AM