🧠 Neuroscience — functional rationale for the evolutionary suppression of strong reciprocal motifs.
🤖 NeuroAI — reciprocity as a tunable design parameter in recurrent & neuromorphic networks.
Result: the same pattern.
Strong reciprocity consistently undermines memory and representational richness.
- Higher reciprocity → larger spectral radius (instability risk).
- Narrower spectral gap → less dynamical diversity.
- Lower non-normality → weaker transient amplification.
Together → compressed dynamical range.
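The three spectral effects above can be illustrated with a toy model. Below is a minimal sketch (not the paper's code or parameters): it builds a sparse random recurrent weight matrix with a tunable fraction of reciprocal links, then reports the spectral radius, the gap between the two leading eigenvalue magnitudes, and Henrici's departure from normality as a non-normality measure. The construction, parameter values, and the choice of Henrici's measure are illustrative assumptions.

```python
# Minimal sketch: tunable reciprocity in a sparse random recurrent matrix,
# plus three spectral summaries. All choices here are illustrative assumptions.
import numpy as np

def random_recurrent(n=200, density=0.05, reciprocity=0.0, seed=0):
    """Sparse Gaussian weight matrix; `reciprocity` is the probability that
    an existing edge i->j is mirrored onto j->i with the same weight."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    mask = rng.random((n, n)) < density
    np.fill_diagonal(mask, False)
    W[mask] = rng.normal(0.0, 1.0 / np.sqrt(density * n), size=mask.sum())
    ii, jj = np.where(mask & (rng.random((n, n)) < reciprocity))
    W[jj, ii] = W[ii, jj]              # symmetrise a random subset of edges
    return W

def spectral_stats(W):
    eig = np.linalg.eigvals(W)
    mags = np.sort(np.abs(eig))[::-1]
    radius = mags[0]                   # spectral radius
    gap = mags[0] - mags[1]            # gap between the two leading modes
    # Henrici's departure from normality: sqrt(||W||_F^2 - sum_i |lambda_i|^2)
    dep = np.sqrt(max(np.linalg.norm(W, "fro") ** 2 - np.sum(np.abs(eig) ** 2), 0.0))
    return radius, gap, dep

for r in (0.0, 0.5, 1.0):
    radius, gap, dep = spectral_stats(random_recurrent(reciprocity=r))
    print(f"reciprocity={r:.1f}  radius={radius:.3f}  gap={gap:.3f}  non-normality={dep:.3f}")
```

With this construction, raising the reciprocal fraction pushes the eigenvalue cloud outward (larger radius) and makes the matrix closer to normal (weaker transient amplification), consistent with the bullets above.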
- Increasing reciprocity (both link and strength reciprocity) reduces memory capacity.
- Representation becomes less diverse (lower kernel rank).
- Effects are strongest in ultra-sparse networks.
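A rough way to probe the memory-capacity and kernel-rank claims is to drive an echo-state reservoir built from the same weight matrices and measure (i) linear memory capacity as the summed R² of ridge readouts reconstructing delayed inputs and (ii) the numerical rank of the collected state matrix. This is a minimal sketch under stated assumptions, not the paper's protocol: it reuses `random_recurrent` from the sketch above, and the fixed spectral-radius rescaling, input statistics, and delay range are arbitrary choices.

```python
# Minimal sketch: memory capacity and kernel rank of an echo-state reservoir
# as reciprocity is varied. Reuses random_recurrent() from the sketch above.
import numpy as np

def run_reservoir(W, u, leak=1.0, input_scale=0.5, seed=1):
    """Leaky tanh echo-state reservoir driven by a scalar input sequence."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(0.0, input_scale, size=W.shape[0])
    x = np.zeros(W.shape[0])
    states = np.empty((len(u), W.shape[0]))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * ut)
        states[t] = x
    return states

def memory_capacity(states, u, max_delay=40, washout=100, ridge=1e-6):
    """Summed R^2 of ridge readouts reconstructing u(t-k) for k = 1..max_delay."""
    X = states[washout:]
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = u[washout - k:len(u) - k]                 # input delayed by k steps
        w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
        mc += np.corrcoef(y, X @ w)[0, 1] ** 2
    return mc

def kernel_rank(states, washout=100, tol=1e-6):
    """Numerical rank of the state matrix (diversity of the representation)."""
    s = np.linalg.svd(states[washout:], compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(2)
u = rng.uniform(-1.0, 1.0, size=2000)
for r in (0.0, 0.5, 1.0):
    W = random_recurrent(n=200, density=0.02, reciprocity=r, seed=0)
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()     # fix spectral radius at 0.9
    S = run_reservoir(W, u)
    print(f"reciprocity={r:.1f}  memory capacity≈{memory_capacity(S, u):.1f}"
          f"  kernel rank={kernel_rank(S)}")
```

Rescaling every matrix to the same spectral radius is an assumption made here so that any difference in memory capacity or kernel rank reflects connectivity structure rather than overall gain.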
Across species (macaque, marmoset, mouse), strong reciprocal (symmetric) connections are rare.
This asymmetry is well known anatomically.
But what are its computational consequences?
www.biorxiv.org/content/10.1...
🧵