Eleanor Holton
@eleanor-holton.bsky.social
Postdoc studying learning and decision-making @ Princeton Neuroscience Institute

https://eleanorholton.github.io/
We could capture this mixture of behaviour by tweaking the training regime of ANNs (‘rich’ vs. ‘lazy’), shifting them towards shared representations (enabling transfer/generalisation, but at the cost of interference) versus separated representations. (7/8)
February 24, 2025 at 11:36 AM
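One common way to operationalize the rich-vs-lazy knob (not necessarily the parameterization used in the preprint) is an output scale α: with the network's initial output subtracted off, f(x) = α·(g(x; θ) − g(x; θ₀)) and learning rate ∝ 1/α², large α puts training in the lazy regime, where the task is solved while the weights (and hence the learned features) barely move; small α forces the features to reorganize. A toy numpy regression sketch, with the task, scales, and architecture all invented for illustration:

```python
import numpy as np

def train_with_scale(alpha, steps=5000, eta=0.01, H=64, seed=0):
    """Fit y = sin(x) with f(x) = alpha * (g(x; theta) - g(x; theta0)),
    using lr = eta / alpha**2: large alpha -> 'lazy', small alpha -> 'rich'."""
    rng = np.random.default_rng(seed)
    X = np.linspace(-3, 3, 20)
    y = np.sin(X)
    W1 = rng.normal(size=H); b1 = rng.normal(size=H)   # scalar-input tanh net
    w2 = rng.normal(size=H) / np.sqrt(H)
    W1_0 = W1.copy()

    def g(W1, b1, w2):
        return np.tanh(np.outer(X, W1) + b1) @ w2

    g0 = g(W1, b1, w2)            # subtract the network's initial output
    lr = eta / alpha ** 2
    for _ in range(steps):
        h = np.tanh(np.outer(X, W1) + b1)
        f = alpha * (h @ w2 - g0)
        r = (f - y) / len(X)                       # squared-error residual
        dw2 = alpha * (h.T @ r)
        dpre = alpha * np.outer(r, w2) * (1 - h ** 2)
        w2 -= lr * dw2
        W1 -= lr * (X @ dpre)
        b1 -= lr * dpre.sum(axis=0)
    loss = np.mean((alpha * (g(W1, b1, w2) - g0) - y) ** 2)
    movement = np.linalg.norm(W1 - W1_0) / np.linalg.norm(W1_0)
    return loss, movement

loss_rich, move_rich = train_with_scale(alpha=1.0)    # rich: features reorganize
loss_lazy, move_lazy = train_with_scale(alpha=50.0)   # lazy: weights barely move
```

Both regimes fit the task, but the lazy network does so with far less relative weight movement, which is the toy analogue of leaving existing representations (and hence old knowledge) intact.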
While humans learning similar tasks showed more interference than those learning dissimilar tasks, this wasn’t the case for everyone. Some people avoided interference, but they were also worse at transfer to new tasks & at generalisation within a task! (6/8)
In ANNs, this can be explained by whether tasks share solutions. Similar tasks were learned by adapting existing representations, which were corrupted in the process. Dissimilar tasks were learned as orthogonal representations, reducing interference. (5/8)
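One way to see why orthogonal representations protect old knowledge: if Task B is learned in hidden units that Task A's readout never uses, B-training cannot touch A's solution, whereas learning B by adapting A's own units corrupts it. A minimal numpy sketch, with the tasks, architecture, and unit partition invented for the example (this is not the preprint's model):

```python
import numpy as np

rng = np.random.default_rng(1)
D, H, N = 8, 16, 200   # input dim, hidden units, samples per task

def make_task():
    w = rng.normal(size=D)               # a random linear classification rule
    X = rng.normal(size=(N, D))
    return X, (X @ w > 0).astype(float)

def accuracy(W1, readout, X, y):
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1) @ readout)))
    return float(((p > 0.5) == y).mean())

def train_units(W1, readout, X, y, units, lr=1.0, epochs=500):
    """Gradient-train only the given hidden units (their input weights
    and readout weights); every other unit stays frozen."""
    for _ in range(epochs):
        h = np.tanh(X @ W1)
        p = 1 / (1 + np.exp(-(h @ readout)))
        d = (p - y) / len(y)                     # cross-entropy gradient at logit
        dh = np.outer(d, readout) * (1 - h ** 2)
        readout[units] -= lr * (h.T @ d)[units]
        W1[:, units] -= lr * (X.T @ dh)[:, units]

XA, yA = make_task()
XB, yB = make_task()
A_units, B_units = np.arange(8), np.arange(8, 16)

def run(units_for_B):
    W1 = rng.normal(size=(D, H)) * 0.1
    read_A = np.zeros(H); read_A[A_units] = rng.normal(size=8) * 0.1
    read_B = np.zeros(H); read_B[units_for_B] = rng.normal(size=8) * 0.1
    train_units(W1, read_A, XA, yA, A_units)       # learn A in its own units
    before = accuracy(W1, read_A, XA, yA)
    train_units(W1, read_B, XB, yB, units_for_B)   # learn B...
    after = accuracy(W1, read_A, XA, yA)           # ...then retest A
    return before, after

before_sh, after_sh = run(A_units)   # B adapts A's own representations
before_or, after_or = run(B_units)   # B uses orthogonal (disjoint) units
```

Learning B in A's units degrades A (interference), while the disjoint allocation leaves A's accuracy exactly unchanged — the toy analogue of orthogonal task solutions.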
When Tasks A & B were similar (‘Near’), both humans & ANNs learned faster, but at a cost: greater transfer across tasks came with higher interference compared to learning dissimilar tasks (‘Far’). (4/8)
We taught humans and ANNs two rule-learning tasks in sequence (Task A, then Task B), and then re-tested their knowledge of the first (Task A). We studied how patterns of transfer to Task B, and interference on return to Task A, differed as a function of task similarity. (3/8)
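The A→B→A logic can be sketched in a few lines of numpy. This is an illustrative toy with invented tasks (random linear classification rules) and a small tanh network, not the paper's stimuli or models: train on Task A, switch to an unrelated Task B, then retest A to quantify interference.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, N = 8, 16, 200   # input dim, hidden units, samples per task

def make_task(w):
    """A linear 'rule': label is 1 iff x . w > 0."""
    X = rng.normal(size=(N, D))
    return X, (X @ w > 0).astype(float)

def forward(params, X):
    W1, w2 = params
    h = np.tanh(X @ W1)                       # hidden representation
    return h, 1 / (1 + np.exp(-(h @ w2)))     # P(label = 1)

def train(params, X, y, lr=1.0, epochs=500):
    W1, w2 = params
    for _ in range(epochs):
        h, p = forward(params, X)
        d = (p - y) / len(y)                  # cross-entropy gradient at the logit
        dh = np.outer(d, w2) * (1 - h ** 2)
        w2 -= lr * (h.T @ d)                  # in-place, so params sees the updates
        W1 -= lr * (X.T @ dh)

def accuracy(params, X, y):
    return float(((forward(params, X)[1] > 0.5) == y).mean())

w_a = rng.normal(size=D)                      # rule for Task A
w_b = rng.normal(size=D)
w_b -= w_a * (w_a @ w_b) / (w_a @ w_a)        # Task B rule, orthogonal ('Far')
XA, yA = make_task(w_a)
XB, yB = make_task(w_b)

params = [rng.normal(size=(D, H)) * 0.1, rng.normal(size=H) * 0.1]
train(params, XA, yA)                         # learn Task A
acc_a_before = accuracy(params, XA, yA)
train(params, XB, yB)                         # continual learning: now learn Task B
acc_a_after = accuracy(params, XA, yA)        # retest Task A
interference = acc_a_before - acc_a_after
```

Transfer would be measured analogously, by comparing how quickly Task B is learned from A-pretrained versus freshly initialized weights.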
New preprint out with @summerfieldlab.bsky.social! When does new learning interfere with existing knowledge? We compare continual learning in humans and artificial neural networks, revealing similar patterns of transfer & catastrophic interference. (1/8) osf.io/preprints/ps...