Erin Grant
@eringrant.me
Senior Research Fellow @ ucl.ac.uk/gatsby & sainsburywellcome.org
{learning, representations, structure} in 🧠💭🤖
my work 🤓: eringrant.github.io
not active: sigmoid.social/@eringrant, twitter.com/ermgrant
Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (@bsimsek.bsky.social), we break representational identifiability but degrade generalization (a computational consequence).
August 13, 2025 at 11:31 AM
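One way to see why such reparametrisations exist in nonlinear networks: ReLU layers admit positive per-unit rescalings that leave the function untouched while changing every hidden activation. The numpy sketch below is a hedged toy illustration with made-up sizes, not the construction used in the paper.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 5, 16, 3, 100
W1 = rng.normal(size=(d_hidden, d_in))   # toy first-layer weights
W2 = rng.normal(size=(d_out, d_hidden))  # toy readout weights
x = rng.normal(size=(n, d_in))

def forward(W1, W2, x):
    h = np.maximum(x @ W1.T, 0.0)  # ReLU hidden representation
    return h, h @ W2.T

# Positive per-unit rescaling commutes with ReLU: relu(c * z) = c * relu(z) for c > 0,
# so scaling rows of W1 up and the matching columns of W2 down preserves the function.
c = rng.uniform(0.1, 10.0, size=d_hidden)
h_old, y_old = forward(W1, W2, x)
h_new, y_new = forward(c[:, None] * W1, W2 / c[None, :], x)

print(np.allclose(y_old, y_new))  # True: identical function
print(np.allclose(h_old, h_new))  # False: the hidden representation has changed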
We demonstrate that representation analysis and comparison is ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
August 13, 2025 at 11:31 AM
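To make the false-negative failure mode concrete, here is a hedged sketch that uses linear CKA purely as a stand-in for a generic representation-comparison metric (the paper's analyses may use different measures): two hidden representations related by a function-preserving anisotropic rescaling typically score well below 1 despite computing the identical function.

import numpy as np

def linear_cka(X, Y):
    # Linear CKA between feature matrices (rows = stimuli), columns centred first.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(1)
H = rng.normal(size=(200, 16))               # hidden representation of one network
scale = 10.0 ** rng.uniform(-2, 2, size=16)  # fold 1/scale into the next layer to keep the function fixed
H_reparam = H * scale

print(linear_cka(H, H))          # 1.0
print(linear_cka(H, H_reparam))  # typically well below 1: a false negative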
We parametrised this solution hierarchy to find differences in handling of task-irrelevant dimensions: Some solutions compress them away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
August 13, 2025 at 11:31 AM
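A hedged toy construction of the two kinds of solutions (illustrative numpy only; the sizes and the way irrelevant structure is injected are assumptions, not the paper's parametrisation): both networks fit the task exactly, but one carries task-irrelevant input structure in the null space of the readout.

import numpy as np

rng = np.random.default_rng(2)
d_in, d_hidden, d_out, n = 6, 8, 2, 100

# Targets depend only on the first two input dimensions; the rest are task-irrelevant.
T = np.zeros((d_out, d_in))
T[:, :2] = rng.normal(size=(d_out, 2))
x = rng.normal(size=(n, d_in))
y = x @ T.T

# "Compressed" solution: the hidden layer ignores the irrelevant input dimensions.
W2 = rng.normal(size=(d_out, d_hidden))
W1 = np.linalg.pinv(W2) @ T                 # W2 @ W1 == T, and W1[:, 2:] == 0

# Alternative solution: add weights that map irrelevant inputs into the null space of W2.
# The output is unchanged, but the hidden representation now carries arbitrary structure.
Vt = np.linalg.svd(W2)[2]
N = Vt[d_out:, :].T                         # columns span null(W2)
C = np.zeros((d_hidden - d_out, d_in))
C[:, 2:] = rng.normal(size=(d_hidden - d_out, d_in - 2))
W1_alt = W1 + N @ C

print(np.allclose(x @ (W2 @ W1).T, y))      # True
print(np.allclose(x @ (W2 @ W1_alt).T, y))  # True: same task solution
print(np.allclose(x @ W1.T, x @ W1_alt.T))  # False: different hidden representation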
To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
August 13, 2025 at 11:31 AM
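For readers outside the area, the setting behind this post under standard assumptions (full-rank input covariance, hidden width at least the rank of the optimal map; generic background, not the paper's notation):

f(x) = W_2 W_1 x, \qquad \mathcal{L}(W_1, W_2) = \mathbb{E}\,\lVert y - W_2 W_1 x \rVert^2 ,

so every global minimiser satisfies W_2 W_1 = \Sigma_{yx}\Sigma_{xx}^{-1}, and the solution manifold is the set of factorisations of this fixed map. For instance, given one solution (W_1^\ast, W_2^\ast), the pair (A W_1^\ast, W_2^\ast A^{-1}) for any invertible A is again a solution, as are pairs obtained by adding to W_1^\ast components that W_2^\ast maps to zero.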
Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.
(Networks can have the same function with the same or different representation.)
August 13, 2025 at 11:31 AM
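A minimal numpy sketch of the walk described above, under the simplifying assumption of a two-layer linear network (toy sizes, not the paper's experiments): fold any invertible matrix into one layer and its inverse into the next.

import numpy as np

rng = np.random.default_rng(3)
d_in, d_hidden, d_out, n = 4, 8, 2, 50
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))
x = rng.normal(size=(n, d_in))

# Walking through solution space: (W1, W2) -> (A @ W1, W2 @ inv(A)) for invertible A.
A = rng.normal(size=(d_hidden, d_hidden))
W1_new, W2_new = A @ W1, W2 @ np.linalg.inv(A)

h_old, h_new = x @ W1.T, x @ W1_new.T                # hidden representations differ
print(np.allclose(h_old @ W2.T, h_new @ W2_new.T))   # True (up to float error): output fixed
print(np.allclose(h_old, h_new))                     # False: representation changed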
This GAC focuses on three debates/questions around benchmarks in cognitive science (the what, why, and how): (1) Should data or theory come first? (2) Should we focus on replication or exploration? (3) What incentives should we build up, if we choose to invest effort as a community?
August 13, 2025 at 7:01 AM
Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks:
📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac
August 13, 2025 at 7:01 AM
If you missed it at the #NeurIPS2024 posters! Work led by @leonlufkin.bsky.social on analytical dynamics of localization in simple neural nets, as seen in real+artificial nets and distilled by @aingrosso.bsky.social @sebgoldt.bsky.social.
Leon is a fantastic collaborator + looking for PhD positions!
December 13, 2024 at 4:20 AM