https://raj-magesh.org
A relevant paper along these lines is www.nature.com/articles/nat..., where they show dimensionality collapse on error trials in monkey PFC representations!
Yes, several prior reports of low-D representations stemmed from deliberate constraints imposed to measure behavioral relevance. Here, we consider only cross-trial/cross-subject reliability, not task-related constraints (a very interesting Q in its own right).
And how easy it is to contribute bugfixes.
Even if the frequency of bugs is higher, the total annoyance is much lower, perhaps because I feel like I have agency.
Long live FOSS!
But my point is simpler: I think neuroscience experiments often yield low-D manifolds because of simple inputs (e.g., carefully controlled stimuli) and easy tasks. I expect naturalistic stimuli and behaviors would elicit higher-D representations.
Our point in this paper is mainly that the absolute dimensionality is much higher than previously thought throughout visual cortex! And so we might need different approaches to understand these high-D data.
In Fig S12 (journals.plos.org/ploscompbiol...) we find power-law spectra in a monkey electrophysiology dataset too.
And the same in mouse Ca-imaging: www.nature.com/articles/s41...
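(For anyone curious what "power-law spectrum" means operationally: the power-law index is typically estimated by a linear fit in log-log space over the eigenvalue-vs-rank curve. A minimal sketch on synthetic data — the variable names and the fit range are my own choices, not from the paper:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic eigenspectrum following an approximate power law,
# a stand-in for a real cross-validated neural eigenspectrum.
ranks = np.arange(1, 1001)
alpha_true = 1.0
eigenspectrum = ranks.astype(float) ** -alpha_true
eigenspectrum *= np.exp(rng.normal(0.0, 0.05, size=ranks.size))  # mild noise

# Estimate the power-law index by linear regression in log-log space,
# skipping the first few ranks, which often deviate from the power law.
fit_range = slice(10, 500)
slope, intercept = np.polyfit(
    np.log(ranks[fit_range]), np.log(eigenspectrum[fit_range]), deg=1
)
alpha_hat = -slope  # estimated power-law index
```

(The choice of fit range matters in practice; early ranks and the noise-dominated tail are usually excluded.)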
This is a tradeoff: we lose spectral resolution but at least we can measure the signal there.
So what we're seeing significantly above zero is not noise.
Yeah, in principle, noise should definitely inflate the tail of the eigenspectrum (also the rest, but less noticeably).
The cross-decomposition method we're using measures variance that generalizes (i) across multiple presentations of the stimuli and (ii) to a held-out test set, so I'm not too worried about that---we are measuring only stimulus-related signal.
(I think you meant low variance?)
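(To illustrate the general idea — not the paper's exact pipeline — here's a toy sketch of a cross-decomposition-style reliability analysis: find components from the cross-repeat covariance on training stimuli, then check how much variance they capture on held-out stimuli. All names and simulation parameters are hypothetical:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: n_stimuli x n_neurons, two repeats per stimulus.
# Shared low-rank signal plus independent trial noise (toy stand-in).
n_stimuli, n_neurons, n_dims = 200, 50, 5
signal = rng.normal(size=(n_stimuli, n_dims)) @ rng.normal(size=(n_dims, n_neurons))
repeat1 = signal + rng.normal(size=(n_stimuli, n_neurons))
repeat2 = signal + rng.normal(size=(n_stimuli, n_neurons))

# Split stimuli into train/test sets.
train, test = np.arange(0, 100), np.arange(100, 200)

def center(x):
    return x - x.mean(axis=0)

# (i) Components from the cross-repeat covariance on training stimuli:
#     directions whose variance is shared across presentations (signal).
cross_cov = center(repeat1[train]).T @ center(repeat2[train]) / (len(train) - 1)
u, s, vt = np.linalg.svd(cross_cov)

# (ii) Evaluate on held-out stimuli: covariance between the two repeats'
#      projections estimates reliable, stimulus-related variance per component.
proj1 = center(repeat1[test]) @ u
proj2 = center(repeat2[test]) @ vt.T
reliable_variance = (proj1 * proj2).mean(axis=0)
```

(In this toy example, only the first five components carry reliable variance; the rest hover around zero, since independent noise doesn't generalize across repeats or to held-out stimuli.)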
www.pnas.org/doi/full/10....
But the sklearn implementation is likely sufficient for most purposes.
I've written a GPU-accelerated version that does other stuff too (permutation tests, etc.) but it's unfortunately not quite plug-and-play (github.com/BonnerLab/sc...).
Estimates of visual cortex dimensionality have traditionally been much lower (~10s to ~100 dimensions), not the unbounded power-law spectrum we're reporting here.
I'm also curious how dimensionality depends on task demands, but that's hard to answer with this dataset.
A nice example is in proceedings.neurips.cc/paper_files/...
e.g. networks trained on CIFAR-10 often end up lower-dimensional than those trained on CIFAR-100
I particularly like Figure 7 in arxiv.org/abs/2204.06125 as an example of high-dimensional representations being useful in DNNs.
I'm not quite sure what you meant about V4; could you elaborate or point me to a paper?