We fed the Generator the true image features instead of the predicted ones.
The outputs were semantically similar, but perceptually quite different.
It seems the Generator relies mainly on semantic features, with less focus on perceptual fidelity.
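A minimal sketch of this kind of ablation, assuming the two generated images are already in hand (one from ground-truth features, one from the Translator's predictions); the embedding callable, the pixel-correlation proxy for perceptual similarity, and the toy data are illustrative stand-ins, not the study's exact metrics.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def compare_outputs(img_from_true, img_from_pred, semantic_embed):
    """Contrast semantic vs. perceptual agreement between two generated images."""
    # Semantic agreement: similarity in an embedding space (a CLIP image
    # encoder would be a natural choice for `semantic_embed`).
    semantic = cosine(semantic_embed(img_from_true), semantic_embed(img_from_pred))
    # Perceptual agreement: proxied here by raw pixel correlation;
    # LPIPS or SSIM would be stricter alternatives.
    perceptual = float(np.corrcoef(img_from_true.ravel(), img_from_pred.ravel())[0, 1])
    return semantic, perceptual

# Toy demo: random "images" and a fixed random projection as the embedder.
rng = np.random.default_rng(0)
proj = rng.standard_normal((64, 32 * 32 * 3))
embed = lambda img: proj @ img.ravel()
print(compare_outputs(rng.random((32, 32, 3)), rng.random((32, 32, 3)), embed))
```

A high semantic score alongside a low perceptual score is the signature described above: the Generator preserves what the image is about, not what it looks like.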
Careful identification analyses revealed a fundamental limitation in generalizing beyond the training distribution:
the Translator, though trained as a regressor, behaves more like a classifier.
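One way to make that claim concrete is a nearest-neighbor identification check: if the Translator's predictions match some training exemplar better than their own test targets, the "regressor" is effectively snapping to training classes. A hedged sketch, assuming precomputed predicted and ground-truth feature matrices; the correlation metric and array names are illustrative, not the paper's exact protocol.

```python
import numpy as np

def pearson_rows(v, M):
    """Pearson correlation of vector v against each row of matrix M."""
    v = (v - v.mean()) / (v.std() + 1e-8)
    M = (M - M.mean(axis=1, keepdims=True)) / (M.std(axis=1, keepdims=True) + 1e-8)
    return M @ v / v.size

def snap_to_training_rate(pred, true_test, train_feats):
    """Fraction of test predictions that correlate more strongly with some
    training-set feature than with their own test target.

    pred:        (n_test, d)  Translator outputs for test stimuli
    true_test:   (n_test, d)  ground-truth features of the test stimuli
    train_feats: (n_train, d) ground-truth features of the training stimuli
    """
    snapped = 0
    for p, t in zip(pred, true_test):
        own = pearson_rows(p, t[None])[0]
        # Classifier-like behavior: the prediction lands closer to a
        # training exemplar than to the stimulus it was decoded from.
        snapped += pearson_rows(p, train_feats).max() > own
    return snapped / len(pred)
```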
Clustering the NSD stimuli revealed:
- distinct clusters (~40)
- substantial overlap between training and test sets

NSD test images were also perceptually similar to the training images (B), unlike in the carefully curated Deeprecon set (C).
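A rough way to quantify the two bullet points above, assuming precomputed stimulus features (e.g., image embeddings) for both splits; the cluster count of 40 comes from the text, everything else is an illustrative choice rather than the study's actual analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_test_cluster_overlap(train_feats, test_feats, n_clusters=40):
    """Cluster pooled stimulus features, then report what fraction of the
    clusters containing test images also contain training images."""
    pooled = np.vstack([train_feats, test_feats])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pooled)
    train_clusters = set(labels[: len(train_feats)])
    test_clusters = set(labels[len(train_feats):])
    # High overlap means test stimuli sit inside training clusters --
    # exactly the leakage that flatters within-distribution evaluation.
    return len(train_clusters & test_clusters) / max(len(test_clusters), 1)
```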
The Translator maps brain activity to the Latent features, and the Generator converts those features into images.
We analyzed each component in detail.
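For orientation, a minimal sketch of that two-stage design; ridge regression is a common Translator choice in this literature but is an assumption here, and `generator` stands in for whatever pretrained image decoder a given method uses.

```python
from sklearn.linear_model import Ridge

def fit_translator(voxels_train, latents_train, alpha=1.0):
    """Translator: a regressor from brain activity (voxels) to latent features."""
    return Ridge(alpha=alpha).fit(voxels_train, latents_train)

def reconstruct(translator, generator, voxels_test):
    """Two-stage pipeline: brain activity -> latent features -> images.

    `generator` is any callable mapping one latent vector to an image
    (e.g., a pretrained diffusion decoder); it is a placeholder here.
    """
    latents_pred = translator.predict(voxels_test)
    return [generator(z) for z in latents_pred]
```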
These methods worked well on NSD (A), but performance dropped severely on Deeprecon (B).
The latest MindEye2 even generated images from training-set categories unrelated to the test stimuli.
So what’s behind this generalization failure?