Aviv Netanyahu
@avivnet.bsky.social
PhD candidate @MIT_CSAIL
We evaluate our approach extensively and demonstrate that we successfully learn novel tasks and generate the corresponding agent plans and motion (1) in unseen configurations and (2) in composition with training tasks.
December 6, 2024 at 3:58 PM
Tasks are often defined by a policy, reward, or trajectory, representations that require retraining for each new task. Inspired by few-shot visual concept learning via inverting generative models, we instead infer tasks represented as latent vectors, without retraining.
December 6, 2024 at 3:58 PM
Learning new tasks with imitation learning often requires hundreds of demos. Check out our #NeurIPS paper, in which we learn new tasks from just a few demos by inverting them into the latent space of a generative model pre-trained on a set of base tasks.

avivne.github.io/ftl-igm/
December 6, 2024 at 3:58 PM
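
The inversion step the thread describes can be sketched roughly as follows: freeze a generative model pre-trained on base tasks and optimize only a latent task vector so that the model's generated trajectories match the few demos. This is a minimal illustrative sketch, not the paper's actual code; the `generator` interface, latent size, and the reconstruction loss are all assumptions.

```python
import torch


def infer_task_latent(demos, generator, latent_dim=32, steps=500, lr=1e-2):
    """Infer a task as a latent vector by inverting a frozen generative model.

    demos:     tensor of shape (num_demos, ...) holding a few demo trajectories
    generator: pre-trained model mapping a latent task vector (batch, latent_dim)
               to trajectories with the same shape as `demos` (assumed interface)
    Returns the optimized latent task vector, reusable for generating plans/motion.
    """
    # Freeze the pre-trained generator: only the latent task vector is updated.
    for p in generator.parameters():
        p.requires_grad_(False)

    # Latent task vector to be inferred from the demos.
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        # One shared latent conditions the generation of every demo.
        pred = generator(z.expand(demos.shape[0], -1))
        # Reconstruction objective (an assumption; stands in for the model's
        # actual training objective) pulls generated trajectories toward demos.
        loss = torch.nn.functional.mse_loss(pred, demos)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return z.detach()
```

Because only the latent vector is optimized, no model weights are retrained; the inferred vector can then condition the same generator to produce plans and motion for the new task in unseen configurations.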