Victoria Bosch
@initself.bsky.social
neuromantic - ML and cognitive computational neuroscience - PhD student at Kietzmann Lab, Osnabrück University.
⛓️ https://init-self.com
Thanks! We’ll put the code and chat interface out soon :)
November 4, 2025 at 4:27 PM
Congratulations!!
November 4, 2025 at 1:52 PM
Thanks to all coauthors! 🦾
@anthesdaniel.bsky.social
@adriendoerig.bsky.social
@sushrutthorat.bsky.social
Peter König
@timkietzmann.bsky.social
/fin
November 3, 2025 at 3:17 PM
We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces.
Preprint: www.arxiv.org/abs/2509.23941
Brain-language fusion enables interactive neural readout and in-silico experimentation
November 3, 2025 at 3:17 PM
CorText also responds to in-silico microstimulations in line with experimental predictions: For example, when amplifying face-selective voxels for trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition, we can “remove” people. 7/n
November 3, 2025 at 3:17 PM
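A minimal sketch of what such an in-silico microstimulation could look like, assuming the scan is handled as a flat voxel vector: the activity of a chosen set of face-selective voxels is scaled up (amplification) or down (inhibition) before the pattern is passed into the brain-to-LLM projection (sketched after post 3/n further down). The voxel indices and gain values are invented for illustration; this is not the paper's procedure.

```python
import torch

def microstimulate(voxels: torch.Tensor, roi_idx: torch.Tensor, gain: float) -> torch.Tensor:
    """Return a copy of `voxels` with the ROI voxels scaled by `gain`.

    gain > 1 amplifies the ROI response; 0 <= gain < 1 inhibits it.
    """
    stimulated = voxels.clone()
    stimulated[:, roi_idx] *= gain
    return stimulated

fmri = torch.randn(1, 8000)                  # stand-in for a recorded scan
face_voxels = torch.arange(4200, 4400)       # hypothetical face-selective (e.g. FFA-like) indices
amplified = microstimulate(fmri, face_voxels, gain=3.0)  # decoding may now mention people
inhibited = microstimulate(fmri, face_voxels, gain=0.0)  # decoding may now omit people
```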
Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n
November 3, 2025 at 3:17 PM
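A toy version of the category hold-out split used for the zero-shot test: any trial whose image contains a held-out category is dropped from training and reserved for evaluation. The trial records and category labels below are invented for illustration.

```python
HELD_OUT = {"zebra", "surfer", "airplane"}   # categories never seen during training

# Toy trial records: each pairs a scan ID with the categories present in the viewed image.
trials = [
    {"scan_id": 0, "categories": {"zebra", "grass"}},
    {"scan_id": 1, "categories": {"person", "dog", "frisbee"}},
    {"scan_id": 2, "categories": {"surfer", "wave"}},
]

train_trials = [t for t in trials if not (t["categories"] & HELD_OUT)]
test_trials = [t for t in trials if t["categories"] & HELD_OUT]
print(len(train_trials), len(test_trials))   # 1 2
```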
What can we do with it? For example, we can have CorText answer questions about a visual scene (“What’s in this image?”, “How many people are there?”) that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n
November 3, 2025 at 3:17 PM
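A minimal interaction sketch, assuming a brain-conditioned question-answering function: only the fMRI pattern and a free-form question go in, never the image. `ask_about_scan` and the dummy decoder are invented placeholders, not the released chat interface.

```python
from typing import Callable

import torch

def ask_about_scan(scan: torch.Tensor, question: str,
                   decode_answer: Callable[[torch.Tensor, str], str]) -> str:
    # Only the fMRI pattern and the question are used -- never the image itself.
    return decode_answer(scan, question)

# Dummy decoder so the example runs end to end; a real system would plug in the
# brain adapter plus a frozen LLM here.
def dummy_decoder(scan: torch.Tensor, question: str) -> str:
    return f"[placeholder answer to {question!r} from a scan with {scan.numel()} voxels]"

scan = torch.randn(1, 8000)  # pattern recorded while the participant viewed an image
print(ask_about_scan(scan, "What's in this image?", dummy_decoder))
print(ask_about_scan(scan, "How many people are there?", dummy_decoder))
```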
By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web search, etc.). 4/n
November 3, 2025 at 3:17 PM
To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signals into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data onto static embeddings or outputs. 3/n
November 3, 2025 at 3:17 PM
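A minimal sketch of this fusion idea, assuming a simple linear adapter: a flattened fMRI pattern is projected into a few "soft tokens" in the LLM's embedding space and concatenated with the embedded question. The adapter design, dimensions, and names are illustrative assumptions, not the paper's architecture; the CorText code itself is not yet released.

```python
import torch
import torch.nn as nn

N_VOXELS = 8000      # assumed size of the flattened fMRI pattern
N_BRAIN_TOKENS = 16  # assumed number of soft tokens per scan
D_MODEL = 2048       # assumed LLM embedding width

class BrainAdapter(nn.Module):
    """Map one flattened fMRI pattern to a short sequence of LLM-sized embeddings."""
    def __init__(self, n_voxels: int, n_tokens: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(n_voxels, n_tokens * d_model)
        self.n_tokens, self.d_model = n_tokens, d_model

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, n_voxels) -> soft tokens: (batch, n_tokens, d_model)
        return self.proj(voxels).view(-1, self.n_tokens, self.d_model)

adapter = BrainAdapter(N_VOXELS, N_BRAIN_TOKENS, D_MODEL)
fmri = torch.randn(1, N_VOXELS)                # stand-in for a real scan
brain_tokens = adapter(fmri)                   # (1, 16, 2048)

# A frozen LLM would then receive these soft tokens prepended to the embedded
# question (e.g. "What's in this image?"), for instance via an `inputs_embeds`-style interface.
question_embeds = torch.randn(1, 12, D_MODEL)  # stand-in for the embedded prompt tokens
llm_input = torch.cat([brain_tokens, question_embeds], dim=1)
print(llm_input.shape)                         # torch.Size([1, 28, 2048])
```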
Generative language models are revolutionizing human-machine interaction. Importantly, such systems can now reason cross-modally (e.g. vision-language models). Can we do the same with neural data - i.e., can we build brain-language models with comparable flexibility? 2/n
November 3, 2025 at 3:17 PM
In his article “Mysterium Iniquitatis of Sinful Man Aspiring into the Place of God”, which is a very sane title (contra its contents ofc)
October 20, 2025 at 8:40 PM
Congratulations! :)
September 5, 2025 at 6:12 AM