cartathomas.bsky.social
Reposted
🔔 Join our MAGELLAN talk on July 2!

We'll explore how LLM agents can monitor their own learning progress and choose what to learn next, like curious humans 🤔

1h presentation + 1h Q&A on autotelic agents & more!

📅 July 2, 4:30 PM CEST
🎟️ forms.gle/1PC2fxJx1PZYfqFr7
🚀 Introducing 🧭 MAGELLAN, our new metacognitive framework for LLM agents! It predicts its own learning progress (LP) in vast natural language goal spaces, enabling efficient exploration of complex domains. 🌍✨ Learn more: 🔗 arxiv.org/abs/2502.07709 #OpenEndedLearning #LLM #RL
MAGELLAN: Metacognitive predictions of learning progress guide...
Open-ended learning agents must efficiently prioritize goals in vast possibility spaces, focusing on those that maximize learning progress (LP). When such autotelic exploration is achieved by LLM...
arxiv.org
June 25, 2025 at 3:14 PM
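The core idea the MAGELLAN post describes, prioritizing goals by estimated learning progress (LP), can be illustrated with a toy sketch. This is a generic LP-based goal sampler in the autotelic-exploration tradition, not MAGELLAN's actual method (which uses an LLM to predict LP over natural-language goal spaces); the class and parameter names here are hypothetical.

```python
import random

class LPGoalSampler:
    """Toy learning-progress (LP) based goal selection.

    Generic illustration only, NOT MAGELLAN's implementation: here LP is
    estimated empirically from per-goal success histories rather than
    predicted by a metacognitive model.
    """

    def __init__(self, goals, window=5, epsilon=0.2):
        self.history = {g: [] for g in goals}  # success/failure records per goal
        self.window = window                   # size of the competence windows
        self.epsilon = epsilon                 # uniform-exploration rate

    def record(self, goal, success):
        self.history[goal].append(float(success))

    def learning_progress(self, goal):
        """Absolute difference between recent and older mean competence."""
        h = self.history[goal]
        if len(h) < 2 * self.window:
            return 1.0  # optimistic init: favor under-sampled goals
        recent = sum(h[-self.window:]) / self.window
        older = sum(h[-2 * self.window:-self.window]) / self.window
        return abs(recent - older)

    def sample_goal(self):
        """Sample a goal proportionally to its LP (epsilon-greedy)."""
        goals = list(self.history)
        if random.random() < self.epsilon:
            return random.choice(goals)
        lps = [self.learning_progress(g) for g in goals]
        total = sum(lps)
        if total == 0:
            return random.choice(goals)  # no signal anywhere: uniform
        r = random.uniform(0, total)
        for g, lp in zip(goals, lps):
            r -= lp
            if r <= 0:
                return g
        return goals[-1]
```

With this scheme, a goal whose success rate is changing (being learned) gets sampled more often than one that is already mastered or still impossible, which is the exploration behavior the post attributes to curious, autotelic agents.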
Reposted
🚨New preprint🚨
When testing LLMs with questions, how can we know they did not see the answer in their training data? In this new paper we propose a simple, fast, out-of-the-box method to detect contamination in short texts, with @stepalminteri.bsky.social and Pierre-Yves Oudeyer!
November 15, 2024 at 1:48 PM
🚀 Introducing 🧭 MAGELLAN, our new metacognitive framework for LLM agents! It predicts its own learning progress (LP) in vast natural language goal spaces, enabling efficient exploration of complex domains. 🌍✨ Learn more: 🔗 arxiv.org/abs/2502.07709 #OpenEndedLearning #LLM #RL
March 24, 2025 at 3:09 PM