Daria Lioubashevski
darialioub.bsky.social
PhD candidate @ Huji | Student Researcher @Google

Interested in the overlap of mech interp & cog comp neuroscience
Finally, inspired by theories of shared mechanisms between comprehension and production, we tested encoding models trained on comprehension data on production data (and vice versa). They generalized successfully, preserving rank order and suggesting a shared neural code.
November 11, 2025 at 8:41 AM
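The cross-condition test described above can be sketched with toy data along these lines (a hypothetical minimal setup, not the paper's actual analysis; all sizes, the noise level, and the simulated "electrodes" are made up): fit a ridge encoding model on "comprehension" trials only, then predict held-out "production" trials and check that the rank order of the neural responses is preserved via Spearman correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: a shared linear "neural code" maps word embeddings to
# electrode activity; comprehension and production share it up to noise.
n_train, n_test, dim, n_elec = 200, 50, 16, 8
W_true = rng.normal(size=(dim, n_elec))

X_comp = rng.normal(size=(n_train, dim))  # "comprehension" word embeddings
Y_comp = X_comp @ W_true + 0.1 * rng.normal(size=(n_train, n_elec))
X_prod = rng.normal(size=(n_test, dim))   # "production" word embeddings
Y_prod = X_prod @ W_true + 0.1 * rng.normal(size=(n_test, n_elec))

# Ridge regression (closed form), trained on comprehension data only.
lam = 1.0
W_hat = np.linalg.solve(X_comp.T @ X_comp + lam * np.eye(dim), X_comp.T @ Y_comp)

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Generalization test: predict production activity with the comprehension
# model and measure rank-order preservation per electrode.
Y_pred = X_prod @ W_hat
rho = np.mean([spearman(Y_pred[:, e], Y_prod[:, e]) for e in range(n_elec)])
print(f"mean Spearman rho across electrodes: {rho:.2f}")
```

Under these toy assumptions the comprehension-trained model transfers almost perfectly; the interesting empirical question, which the preprint addresses with real neural data, is whether it does so in the brain.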
Even more surprisingly, although a speaker presumably knows what they're about to say, we saw similar results in production: the brain still encodes multiple alternatives. This held even when the actual next word was not among the LLM's top-3 predictions.
Using ECoG recordings from language areas (IFG, STG), encoding models trained on static embeddings (e.g., GloVe) of top-ranked LLM predictions significantly predicted neural activity. Going further, averaging the embeddings of the top-k predictions improved encoding performance up to k = 100! 🤯
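As a sanity check on the averaging idea, here is a minimal toy sketch (not the paper's pipeline; the random "GloVe-like" embeddings, the simulated ECoG-like data, and all sizes are made up): if neural activity reflects a probability-weighted mix of candidate words rather than just the produced word, then encoding features built by averaging the embeddings of the top-k predictions should fit held-out activity better as k grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocab of "GloVe-like" static embeddings (hypothetical sizes).
vocab, dim, n_words, n_elec = 1000, 50, 400, 8
E = rng.normal(size=(vocab, dim))

# Simulated assumption: at each word position the neural signal reflects a
# probability-weighted mix of candidate words, not only the actual next word.
logits = rng.normal(size=(n_words, vocab))
P = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # next-word probs
W_true = rng.normal(size=(dim, n_elec))
Y = (P @ E) @ W_true + 0.05 * rng.normal(size=(n_words, n_elec))

def topk_feature(p, k):
    """Average the embeddings of the k highest-probability predictions."""
    idx = np.argsort(p)[::-1][:k]
    return E[idx].mean(axis=0)

def encoding_score(k):
    """Held-out encoding performance of top-k averaged features."""
    X = np.stack([topk_feature(P[i], k) for i in range(n_words)])
    Xtr, Xte, Ytr, Yte = X[:300], X[300:], Y[:300], Y[300:]
    W = np.linalg.solve(Xtr.T @ Xtr + np.eye(dim), Xtr.T @ Ytr)  # ridge
    pred = Xte @ W
    return np.mean([np.corrcoef(pred[:, e], Yte[:, e])[0, 1]
                    for e in range(n_elec)])

for k in (1, 10, 100):
    print(f"k={k:3d}  mean held-out encoding r = {encoding_score(k):.2f}")
```

Under these toy assumptions the held-out correlation should generally rise with k, mirroring the k = 100 effect reported above; with real data, of course, performance eventually drops as low-probability candidates add noise.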
We found that top-ranked LLM predictions are both recognized faster in a pre-registered priming experiment and produced with shorter word gaps in free speech generation, indicating that the brain pre-activates those alternatives. We then turned to neural 🧠 data.
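For concreteness, the "top-ranked LLM predictions" above are just the highest-probability next words under the model. A toy sketch of extracting them (the vocabulary and logits here are invented; in practice they would come from a causal language model's output at each word position):

```python
import numpy as np

# Made-up stand-in for an LLM's next-word logits over a tiny vocabulary.
vocab = ["cake", "bread", "coffee", "rain", "music"]
logits = np.array([2.1, 1.7, 0.4, -0.3, -1.2])

# Softmax over the vocabulary, then take the top-k candidates by probability.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
k = 3
top_k = [vocab[i] for i in np.argsort(probs)[::-1][:k]]
print(top_k)  # ['cake', 'bread', 'coffee'] — the hypothesized pre-activated set
```

These top-k candidates are the words used as primes in the priming experiment and as the alternatives whose pre-activation is tested against speech timing.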
🚨 New preprint!
One idea, many ways to say it – but does your brain track those options while you speak?
Using LLMs, we put this to the test.
www.biorxiv.org/content/10.1...
We show for the 1st time that the brain represents multiple alternatives simultaneously in both listening and speaking.
🧵