1) Supervised learning (SL): agents “learn to speak” an existing human lexicon.
2) Reinforcement learning (RL): agents then use that lexicon to communicate and adapt it to their communicative needs.
Question: Can such agents start evolving their lexicon like humans do?
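The two-phase setup can be sketched in miniature, assuming a toy tabular agent (the colors, words, and stand-in listener below are all hypothetical illustrations, not the paper's actual model):

```python
import random

random.seed(0)

COLORS = ["light_green", "dark_green"]
WORDS = ["green", "mint", "forest"]

# Human lexicon: the general word "green" covers both colors.
HUMAN_LABELS = {"light_green": "green", "dark_green": "green"}

# Phase 1, SL: the agent imitates human word usage.
scores = {c: {w: 0.0 for w in WORDS} for c in COLORS}
for color, word in HUMAN_LABELS.items():
    scores[color][word] += 1.0

def speak(color):
    # Greedy choice: pick the highest-scoring word for this color.
    return max(scores[color], key=scores[color].get)

def listener_guesses_right(word, target):
    # Stand-in listener: only a specific word separates the target
    # from a same-hue distractor (general "green" fails here).
    specific = {"mint": "light_green", "forest": "dark_green"}
    return specific.get(word) == target

# Phase 2, RL: reinforce word choices that lead to communicative success.
for _ in range(200):
    target, _distractor = random.sample(COLORS, 2)  # a close context
    word = random.choice(WORDS)  # explore
    if listener_guesses_right(word, target):
        scores[target][word] += 0.5

print(speak("light_green"), speak("dark_green"))
```

After enough rewarded rounds, the specific words overtake the imitated general word, which is the kind of adaptation the question above is probing.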
Each agent must identify a target color among distractors.
In far contexts, general words suffice.
In close contexts, more informative words are necessary.
This is where pragmatic adaptation comes into play: humans choose more informative words when context demands it.
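The far/close contrast can be made concrete with a tiny sketch (the lexicon and color names are hypothetical): a word is informative in context if its meaning includes the target and excludes every distractor.

```python
# Hypothetical word meanings as sets of colors.
LEXICON = {
    "green": {"mint_green", "forest_green"},  # general word
    "mint": {"mint_green"},                   # specific word
    "red": {"crimson"},
}

def informative_words(target, distractors):
    # Words that pick out the target uniquely in this context.
    return [w for w, ext in LEXICON.items()
            if target in ext and not ext & set(distractors)]

# Far context: the distractor is a red, so the general word suffices.
print(informative_words("mint_green", ["crimson"]))
# Close context: the distractor is another green, only "mint" works.
print(informative_words("mint_green", ["forest_green"]))
```

In the far context both "green" and "mint" succeed; in the close context only the specific word does, which is exactly the pressure behind pragmatic adaptation.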
It turns out yes! Agents exposed to context adapt: they use more informative words in harder contexts (higher β). Interaction with context yields richer lexicons (larger |W|), lower system-level informativeness (I_L), and moderate drift (D_L).
Agents facing more challenging contexts (AllClose) develop stronger context sensitivity and reshape word meanings like "mint" to denote narrower color regions.
Our new paper asks that question, but instead of asking humans, we ask neural agents 🤖
🧵👇