Jesse Geerts
@jessegeerts.bsky.social
Cognitive neuroscientist and AI researcher
Abstracts are only 200 words, so if you're working on a Cosyne abstract simply submit a shortened version of your summary at the page below:
www.ucl.ac.uk/event-ticket...
October 13, 2025 at 3:55 PM
- @leenacvankadara.bsky.social (UCL)
- Netta Cohen (Uni of Leeds)
- @docqhuys.bsky.social (UCL)
- @kenneth-harris.bsky.social (UCL)
- @laklab.bsky.social (Oxford)
- @nathanieldaw.bsky.social (Princeton)
October 13, 2025 at 3:55 PM
Thank you! Yes, I think that’s a fair summary. Another way of looking at it is that pretraining on a match-and-copy task gives it a hint in the “wrong” direction. Our takeaway is that what the transformer learns to implement in-context depends on the pretraining task.
June 17, 2025 at 6:34 AM
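To make that contrast concrete, here is a minimal, hypothetical sketch (in Python; not the paper's actual code) of the two kinds of sequences being compared: a match-and-copy sequence can be solved by retrieving the value paired with an earlier occurrence of the query key, whereas a transitive inference sequence presents adjacent premises and queries a pair whose relation must be inferred from the implied ordering rather than copied.

```python
import random

# Hypothetical illustration (not the authors' code): two kinds of training
# sequences that plausibly push a transformer toward different in-context
# strategies.

def match_and_copy_sequence(vocab, n_pairs=5):
    """Induction-style task: the context lists key-value pairs, the query
    repeats an earlier key, and the target is the value to copy."""
    keys = random.sample(vocab, n_pairs)
    values = random.sample(vocab, n_pairs)
    context = [tok for k, v in zip(keys, values) for tok in (k, v)]
    query_key = random.choice(keys)
    target = values[keys.index(query_key)]
    return context + [query_key], target

def transitive_inference_sequence(vocab, n_items=5):
    """Relational task: the context gives adjacent premises (A > B, B > C, ...)
    over a hidden ordering; the query asks about a possibly non-adjacent pair,
    so the answer must be inferred from the ordering, not copied."""
    items = random.sample(vocab, n_items)        # list order = hidden rank
    premises = [tok for a, b in zip(items, items[1:]) for tok in (a, '>', b)]
    i, j = random.sample(range(n_items), 2)
    query = [items[i], '?', items[j]]
    target = '>' if i < j else '<'               # earlier in the order ranks higher
    return premises + query, target

vocab = [f"tok{i}" for i in range(20)]
print(match_and_copy_sequence(vocab))
print(transitive_inference_sequence(vocab))
```

The point of the sketch: a model pretrained on the first task is rewarded for literal matching, a strategy that would not by itself answer a non-adjacent query in the second task, which is one way to read the "hint in the wrong direction" remark above.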
Check out the Psych Review paper here: psycnet.apa.org/fulltext/202...
June 6, 2025 at 3:11 PM
Read the full paper here: arxiv.org/abs/2506.04289
Relational reasoning and inductive bias in transformers trained on a transitive inference task
Transformer-based models have demonstrated remarkable reasoning abilities, but the mechanisms underlying relational reasoning in different learning regimes remain poorly understood.
June 6, 2025 at 2:33 PM
June 6, 2025 at 2:31 PM
The key insight: the computational strategies underlying in-context learning (ICL) aren't fixed but depend on both the learning paradigm and the structure of the pretraining task. This helps explain when AI systems will generalize beyond their training data.
June 6, 2025 at 2:30 PM