Ken Shirakawa
@kencan7749.bsky.social
Ph.D. candidate at Kyoto University and ATR / Brain decoding / fMRI / neuroAI / neuroscience
This project wouldn’t have been possible without the support of all our lab members.
Huge thanks to co-authors, and especially to Prof. Kamitani ( @ykamit.bsky.social), for their invaluable support throughout this work!
June 13, 2025 at 9:23 AM
Our paper goes further with formal analyses, including mathematical analysis, simulations, analysis of AI model representations, evaluation pitfalls, and meta-level insights into “realistic” reconstruction.

If this thread sparked your interest, please take a look at our paper!
June 13, 2025 at 9:22 AM
NSD’s image diversity is smaller than expected, but this doesn’t diminish its value. New datasets like NSD-synthetic (arxiv.org/abs/2503.06286) and NSD-imagery (www.arxiv.org/abs/2506.06898) will also be valuable. Yet we should choose data splits that align with our research goals (a minimal screening sketch follows below).
A 7T fMRI dataset of synthetic images for out-of-distribution modeling of vision
Large-scale visual neural datasets such as the Natural Scenes Dataset (NSD) are boosting NeuroAI research by enabling computational models of the brain with performances beyond what was possible just ...
arxiv.org
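For illustration, here is a minimal sketch of one way to screen a test split so that no test image is too similar to the training set. The CLIP features and the cosine threshold are assumptions for this example, not NSD’s actual protocol.

```python
# Goal-aligned test screening: drop test images whose nearest training image
# is too similar. Features and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
train_feat = rng.normal(size=(1000, 512))  # placeholder for real CLIP features
test_feat = rng.normal(size=(100, 512))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

sim = l2_normalize(test_feat) @ l2_normalize(train_feat).T  # cosine similarities
nearest = sim.max(axis=1)       # similarity to the closest training image
keep = nearest < 0.8            # hypothetical exclusion threshold
print(f"kept {keep.sum()}/{len(keep)} test images")
```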
June 13, 2025 at 9:22 AM
So, how should we interpret these reconstruction methods? We argue they’re better understood as visualizations of decoded content, not true reconstructions.
Visualization itself also has value, but it’s crucial to recognize the huge gap between visualization and reconstruction.
June 13, 2025 at 9:21 AM
Taken together, our results suggest recent diffusion-based reconstructions are a mix of classification into trained categories and hallucination by generative AIs.
This deviates fundamentally from genuine visual reconstruction, which aims to recover arbitrary visual experiences.
June 13, 2025 at 9:21 AM
What about the Generator (diffusion model)?
We fed it true image features instead of predicted ones.
The outputs were semantically similar—but perceptually quite different.
It seems the Generator relies mainly on semantic features, with less focus on perceptual fidelity.
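To make the dissociation concrete, here is a toy sketch in which a “reconstruction” shares the target’s semantic features while its pixels are unrelated. All arrays are synthetic placeholders, and the metrics are common choices rather than the paper’s exact evaluation.

```python
# Semantic vs. perceptual agreement can dissociate: high feature-space
# similarity alongside near-zero pixel-space similarity.
import numpy as np

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(-1)

rng = np.random.default_rng(0)
sem_true = rng.normal(size=(50, 512))                    # target semantic features
sem_recon = sem_true + 0.3 * rng.normal(size=(50, 512))  # close in semantic space
pix_true = rng.normal(size=(50, 3 * 64 * 64))            # target pixels
pix_recon = rng.normal(size=(50, 3 * 64 * 64))           # unrelated pixels

print("semantic:  ", cosine(sem_recon, sem_true).mean())   # high (~0.96)
print("perceptual:", cosine(pix_recon, pix_true).mean())   # near zero
```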
June 13, 2025 at 9:21 AM
Given the overlap between training/test sets, can the Translator predict test stimuli effectively?

Careful identification analyses revealed a fundamental limitation in generalizing beyond the training distribution.

The Translator, though a regressor, behaves more like a classifier.
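For readers unfamiliar with identification analysis, here is a minimal correlation-based sketch (a common formulation; the paper’s exact metric may differ). Chance level is 0.5.

```python
# Pairwise identification: how often a predicted feature vector matches its own
# stimulus better than a distractor. Correlation is one common choice of metric.
import numpy as np

def identification_accuracy(pred, true):
    """Mean fraction of distractors each prediction beats (chance = 0.5)."""
    p = (pred - pred.mean(1, keepdims=True)) / pred.std(1, keepdims=True)
    t = (true - true.mean(1, keepdims=True)) / true.std(1, keepdims=True)
    corr = (p @ t.T) / pred.shape[1]        # (n_pred, n_candidates) correlations
    diag = np.diag(corr)                    # correlation with the correct target
    wins = (diag[:, None] > corr).sum(1)    # distractors beaten per sample
    return (wins / (corr.shape[1] - 1)).mean()

rng = np.random.default_rng(0)
true = rng.normal(size=(100, 512))
pred = true + rng.normal(scale=2.0, size=true.shape)   # noisy predictions
print(identification_accuracy(pred, true))             # > 0.5 if informative
```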
June 13, 2025 at 9:21 AM
We first examined the Latent features. UMAP visualization of NSD’s CLIP features revealed (A):

- distinct clusters (~40)
- substantial overlap between training and test sets

NSD test images were also perceptually similar to training images (B), unlike in carefully curated Deeprecon (C).
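As a reference for this kind of analysis, here is a minimal UMAP sketch with random placeholder arrays standing in for real CLIP features (uses the umap-learn package):

```python
# Project train/test CLIP features into 2D with UMAP and overlay them;
# with real NSD features, clusters and train/test overlap become visible.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
clip_train = rng.normal(size=(1000, 512))  # placeholder for real CLIP features
clip_test = rng.normal(size=(100, 512))

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0)
emb = reducer.fit_transform(np.vstack([clip_train, clip_test]))

plt.scatter(emb[:1000, 0], emb[:1000, 1], s=3, label="train")
plt.scatter(emb[1000:, 0], emb[1000:, 1], s=3, label="test")
plt.legend()
plt.show()
```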
June 13, 2025 at 9:20 AM
To better understand what was happening, we decomposed these methods into a Translator–Generator pipeline.

The Translator maps brain activity to the Latent features, and the Generator converts those features into images.

We analyzed each component in detail.
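A minimal sketch of the decomposition, assuming a ridge-regression Translator on placeholder data; actual methods differ, and the Generator (a diffusion model) is left abstract here:

```python
# Translator: fMRI voxels -> latent (e.g., CLIP) features, via ridge regression.
# Generator: latent features -> images (model-specific, so only indicated).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(800, 5000))  # fMRI responses (samples x voxels)
Z_train = rng.normal(size=(800, 512))   # latent features of the seen images
X_test = rng.normal(size=(100, 5000))

translator = Ridge(alpha=1e3)           # hypothetical regularization strength
translator.fit(X_train, Z_train)
Z_pred = translator.predict(X_test)     # brain activity -> latent features
# images = generator(Z_pred)            # latent features -> images (diffusion)
```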
June 13, 2025 at 9:19 AM
We tested whether these methods generalize beyond NSD.
They worked well on NSD (A), but performance severely dropped on Deeprecon (B).
The latest MindEye2 even generated training-set categories unrelated to test stimuli.
So what’s behind this generalization failure?
June 13, 2025 at 9:18 AM
“Reconstruction” is often seen as recovering any instance from a space of interest.

Prior works (e.g., Miyawaki+ 2008, Shen+ 2019) pursued this goal.

Recent studies report realistic reconstructions from NSD using CLIP + diffusion models.

But—do they truly achieve this goal?
June 13, 2025 at 9:18 AM
Reposted by Ken Shirakawa
One big issue with some of the previous claims is that NSD, the massive 7T fMRI dataset of 1000s of images, might not be the right dataset to test these hypotheses. The reason is that it is built on MS COCO and has too high a similarity between training and test sets. arxiv.org/abs/2405.10078 16/n
December 11, 2024 at 10:18 PM