Yuqing Zhang
@yuqing0304.bsky.social

Ever wondered how our words change their meanings over time, and why languages keep both broad terms (“dog”) and specific ones (“Dalmatian”)?
Our new paper asks that question, but instead of asking humans, we ask neural agents 🤖
🧵👇
November 6, 2025 at 1:52 PM
NeLLCom-Lex consists of a two-phase learning setup:
1) Supervised learning (SL): agents “learn to speak” an existing human lexicon.
2) Reinforcement learning (RL): agents then communicate with each other and adapt the lexicon to their communicative needs.

Question: Can such agents start evolving their lexicon like humans do?
November 6, 2025 at 3:39 PM
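
To make the recipe concrete, here is a minimal toy sketch of the two-phase setup. It is a deliberate simplification, not the paper's implementation: a tabular softmax speaker stands in for the neural agents, and the "human lexicon", sizes, and learning rate are invented.

```python
# Toy two-phase setup: supervised learning on a fixed lexicon, then
# REINFORCE on a reference game. Tabular stand-in for neural agents.
import numpy as np

rng = np.random.default_rng(0)
N_COLORS, N_WORDS, LR = 20, 5, 0.5

# Hypothetical "human lexicon": each color chip starts mapped to one word.
human_lexicon = rng.integers(0, N_WORDS, size=N_COLORS)
logits = np.zeros((N_COLORS, N_WORDS))      # speaker policy parameters

def speak(color):
    p = np.exp(logits[color] - logits[color].max())
    p /= p.sum()
    return rng.choice(N_WORDS, p=p), p

# Phase 1 -- SL: "learn to speak" the existing lexicon (cross-entropy).
for _ in range(2000):
    c = rng.integers(N_COLORS)
    _, p = speak(c)
    grad = -p
    grad[human_lexicon[c]] += 1.0           # one-hot target minus prediction
    logits[c] += LR * grad

# Phase 2 -- RL: adapt the lexicon through communication. A literal
# listener guesses whichever color scores the heard word higher.
for _ in range(5000):
    target, distractor = rng.choice(N_COLORS, size=2, replace=False)
    w, p = speak(target)
    guess = target if logits[target][w] >= logits[distractor][w] else distractor
    reward = 1.0 if guess == target else 0.0
    grad = -p
    grad[w] += 1.0                          # REINFORCE: reward * grad log pi
    logits[target] += LR * reward * grad

print("post-RL lexicon:", logits.argmax(axis=1))
```

The SL phase anchors the speaker to the given color-word mapping; the RL phase then rewards successful reference, which is what lets the mapping start to move.
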
We use a color naming task 🎨
A speaker names a target color, and a listener must identify it among distractors.
In far contexts, general words suffice.
In close contexts, more informative words are necessary.

This is where pragmatic adaptation comes into play: humans choose more informative words when context demands it.
November 6, 2025 at 3:35 PM
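
An illustrative toy of that far/close contrast (not the paper's model): word meanings as Gaussians on a 1-D hue axis, and a speaker that escalates from a cheap general word to a narrower one only when the context demands it. All names, hues, widths, and costs here are invented.

```python
# Toy pragmatic speaker: prefer general words, escalate when distractors
# are close. Word meanings are Gaussians over a 1-D hue axis.
import math

# (name, prototype hue, width): "green" is broad, "mint"/"teal" are narrow.
LEXICON = [("green", 0.50, 0.15), ("mint", 0.45, 0.03), ("teal", 0.60, 0.03)]
COST = {"green": 1.0, "mint": 2.0, "teal": 2.0}   # specific words cost more

def word_fit(word, hue):
    _, mu, sigma = word
    return math.exp(-((hue - mu) ** 2) / (2 * sigma ** 2))

def informativeness(word, target, distractor):
    # How strongly the word points at the target rather than the distractor.
    total = word_fit(word, target) + word_fit(word, distractor)
    return word_fit(word, target) / total if total else 0.0

def pragmatic_choice(target, distractor, threshold=0.8):
    # Cheapest word that still disambiguates; narrower words only on demand.
    for word in sorted(LEXICON, key=lambda w: COST[w[0]]):
        if informativeness(word, target, distractor) >= threshold:
            return word[0]
    return max(LEXICON, key=lambda w: informativeness(w, target, distractor))[0]

print(pragmatic_choice(target=0.47, distractor=0.90))  # far context   -> "green"
print(pragmatic_choice(target=0.47, distractor=0.53))  # close context -> "mint"
```
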
But can neural agents learn to be pragmatic and efficient?

It turns out yes! Agents exposed to context adapt: they use more informative words in harder contexts (higher context sensitivity β). Interaction with context also leads to richer lexicons (larger |W|), lower system-level informativeness (I_L), and moderate drift (D_L).
November 6, 2025 at 3:21 PM
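
The thread does not give the exact metric definitions, so the following are assumed operationalizations, one plausible reading of the names: |W| as the number of words in active use, I_L as the mutual information between colors and words, and D_L as the average shift of production probabilities away from the starting lexicon.

```python
# Assumed toy metrics over a production table p_w_given_c of shape
# (n_colors, n_words); not the paper's exact definitions.
import numpy as np

def lexicon_size(p_w_given_c, min_use=0.01):
    # |W|: words whose overall usage exceeds a small threshold.
    return int((p_w_given_c.mean(axis=0) > min_use).sum())

def informativeness(p_w_given_c):
    # I_L ~ I(C; W) under a uniform prior over colors.
    n = p_w_given_c.shape[0]
    p_c = np.full(n, 1.0 / n)
    p_w = p_c @ p_w_given_c                  # marginal word distribution
    joint = p_c[:, None] * p_w_given_c
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_c[:, None] * p_w[None, :]), 1.0)
    return float((joint * np.log2(ratio)).sum())

def drift(p_after, p_before):
    # D_L: mean total-variation distance between old and new word choices.
    return float(0.5 * np.abs(p_after - p_before).sum(axis=1).mean())

p0 = np.eye(4)[[0, 0, 1, 1, 2, 2]]   # 6 colors, 4 words, 3 in active use
print(lexicon_size(p0), informativeness(p0), drift(p0, p0))
```
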
We then expose agents to varying communicative needs.

Agents facing more challenging contexts (AllClose) develop stronger context sensitivity and reshape the meanings of words like "mint" to denote narrower color regions.
November 6, 2025 at 3:19 PM
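
One way such context conditions could be constructed (an assumption, not a detail from the thread): rank candidate color chips by perceptual distance to the target, taking the nearest chips as AllClose distractors and the farthest as Far ones. The chip grid and Euclidean distance below are toy stand-ins for a CIELAB color space.

```python
# Assumed construction of AllClose vs. Far contexts via perceptual distance.
import numpy as np

rng = np.random.default_rng(0)
chips = rng.uniform([0, -80, -80], [100, 80, 80], size=(330, 3))  # toy L*a*b* chips

def sample_context(target_idx, n_distractors=3, condition="AllClose"):
    d = np.linalg.norm(chips - chips[target_idx], axis=1)
    order = np.argsort(d)[1:]           # nearest first; drop the target itself
    if condition == "AllClose":
        return order[:n_distractors]    # hard: perceptually nearest chips
    return order[-n_distractors:]       # easy: clearly distinct chips

print(sample_context(0, condition="AllClose"), sample_context(0, condition="Far"))
```
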
Moreover, communicating in harder contexts (AllClose) allows agents to develop more efficient lexicons, achieving high accuracy with a smaller vocabulary.
November 6, 2025 at 3:18 PM