Tom McCoy
@rtommccoy.bsky.social
Assistant professor at Yale Linguistics. Studying computational linguistics, cognitive science, and AI. He/him.
In MAML, a model is exposed to many tasks. After each task, the model's weights are adjusted so that, if it were taught the same task again, it would perform better. As MAML proceeds, the model converges to a state from which it can learn any task in the distribution.
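A minimal sketch of this loop in NumPy, using the simpler first-order MAML variant (which drops the second-order term of full MAML) on a toy family of linear tasks y = a·x. The task distribution, model, and learning rates here are illustrative assumptions, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Model f(x) = w*x with squared-error loss; returns (loss, dloss/dw)."""
    err = w * x - y
    return np.mean(err ** 2), 2 * np.mean(err * x)

w = 1.5                  # meta-learned initialization (scalar weight)
alpha, beta = 0.1, 0.01  # inner (adaptation) / outer (meta) learning rates

for step in range(2000):
    a = rng.uniform(-2, 2)  # sample a task from the distribution: y = a*x
    x_support, x_query = rng.normal(size=10), rng.normal(size=10)
    y_support, y_query = a * x_support, a * x_query

    # Inner loop: adapt to this task with one gradient step on the support set.
    _, g = loss_grad(w, x_support, y_support)
    w_adapted = w - alpha * g

    # Outer loop (first-order MAML): nudge the initialization using the
    # adapted weight's gradient on held-out query data, so that adapting
    # to a task like this one works better next time.
    _, g_query = loss_grad(w_adapted, x_query, y_query)
    w = w - beta * g_query
```

After training, `w` sits near the center of the task distribution, so a single inner-loop step on any sampled task already reduces its loss.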
7/n
May 20, 2025 at 7:12 PM