Sebastián Vallejo Vera
@svallejovera.bsky.social
Assistant Professor at Western University | Legislative politics, gender and politics, racism and politics, and NLP (and politics). http://svallejovera.com
So we find the same direction of the effect, but a greater magnitude. In any case, keep this in mind when using LLMs as annotators, and check the paper for some additional ‘best practices’ when using LLMs. Topes!
September 29, 2025 at 9:04 PM
This is not entirely surprising: if you train a model on huge amounts of biased, human-created data, it will probably replicate that behaviour. The main takeaway here is that LLMs are biased in a very LLM way. They cling to these cues and do not self-correct (as human coders do).
September 29, 2025 at 9:04 PM
We tested how LLMs annotate text on immigration by changing only the party names in the text. LLMs were more likely to label statements as negative when given a right-leaning party cue, and positive when given a left-leaning party cue.
September 29, 2025 at 9:04 PM
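The cue-swap setup is easy to reproduce. Below is a minimal sketch of the idea, not the paper's code: it holds an immigration statement fixed, swaps in a right- or left-leaning party name, and asks an LLM for a one-word tone label. The OpenAI client, model name, prompt wording, and party placeholders are all assumptions for illustration.

```python
# Minimal sketch of the party-cue swap described above (illustrative, not the authors' code).
# Assumes the OpenAI Python client; model name, prompt, and party labels are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENT = "According to {party}, current immigration policy needs to change."
CUES = {"right-leaning cue": "Party A (right)", "left-leaning cue": "Party B (left)"}

def annotate(text: str) -> str:
    """Ask the model to label the statement's tone toward immigration."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Label the statement's tone toward immigration as "
                        "'positive', 'negative', or 'neutral'. Reply with one word."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Same statement, only the party name changes; systematically different labels
# across the two cues would point to the kind of bias described in the thread.
for cue, party in CUES.items():
    print(cue, "->", annotate(STATEMENT.format(party=party)))
```

In practice you would run this over many statements and repeated calls, since a single annotation per cue says nothing about a systematic difference.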