Martin Engelcke
@martinengelcke.bsky.social
Senior Research Scientist at Google DeepMind. Views my own.
Pinned
Our work "Latent learning: episodic memory complements parametric learning by enabling flexible reuse of experiences", led by @lampinen.bsky.social with Effie Li, @arslanchaudhry.bsky.social, and James McClelland, is now available on arXiv!

Link: arxiv.org/abs/2509.16189

Thread: 1/
Why does AI sometimes fail to generalize, and what might help? In a new paper (arxiv.org/abs/2509.16189), we highlight the latent learning gap — which unifies findings from language modeling to agent navigation — and suggest that episodic memory complements parametric learning to bridge it. Thread:
September 29, 2025 at 11:02 AM
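To make the core claim concrete, here is a rough NumPy toy of the intuition (my own illustration, not the paper's experiments; all names and values below are hypothetical): a parametric model only captures what it was trained to predict, while an episodic memory stores experiences verbatim and can flexibly reuse any of them at test time.

```python
# Minimal sketch, assuming a toy setting: 100 experiences, of which only the
# first 50 are ever used as training targets for the parametric learner. The
# episodic memory stores all 100 and answers queries by nearest-neighbor
# retrieval, so experiences that were never training targets remain usable.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# All experiences: (situation, outcome) pairs with a nonlinear ground truth
# that a simple linear parametric model cannot fully capture.
situations = rng.normal(size=(100, dim))
v = rng.normal(size=dim)
outcomes = np.sin(situations @ v)

train_idx = np.arange(50)        # outcomes the parametric learner is trained on
latent_idx = np.arange(50, 100)  # experienced, but never a training target

# Parametric learning: linear least-squares fit on the trained subset only.
w, *_ = np.linalg.lstsq(situations[train_idx], outcomes[train_idx], rcond=None)

# Episodic memory: store every experience verbatim; answer by retrieval.
memory_keys, memory_vals = situations, outcomes

def parametric_predict(x):
    return x @ w

def episodic_predict(x):
    nearest = np.argmin(np.linalg.norm(memory_keys - x, axis=1))
    return memory_vals[nearest]

# Query an experience that was stored but never used as a training target:
query = situations[latent_idx[0]]
print("target    :", outcomes[latent_idx[0]])
print("parametric:", parametric_predict(query))  # approximate at best
print("episodic  :", episodic_predict(query))    # exact, via reuse
```

Running this, the episodic lookup recovers the stored outcome exactly while the linear fit only approximates it; the point being illustrated is that retrieval over stored experiences complements, rather than replaces, parametric learning.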
Reposted by Martin Engelcke
Nice tweet thread from @dannypsawyer on work exploring how well frontier models like GPT, Claude, and Gemini explore in interactive, multi-turn settings, to be presented at NeurIPS workshops this December!
Happy to announce that our work has been accepted to workshops on Multi-turn Interactions and Embodied World Models at #NeurIPS2025! Frontier foundation models are incredible, but how well can they explore in interactive environments?
Paper👇
arxiv.org/abs/2412.06438
🧵1/13
October 10, 2025 at 5:18 PM
Hello, World!
July 27, 2025 at 10:55 AM