Eddie Yang
@eddieyang.bsky.social
New paper: LLMs are increasingly used to label data in political science. But how reliable are these annotations, and what are the consequences for scientific findings? What are best practices? Some new findings from a large empirical evaluation.
Paper: eddieyang.net/research/llm_annotation.pdf
October 20, 2025 at 1:57 PM
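A minimal sketch (not from the paper) of one reliability check the post alludes to: comparing LLM annotations against a small human-coded gold set with a chance-corrected agreement statistic. The labels, label scheme, and use of scikit-learn here are illustrative assumptions, not the paper's procedure.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical gold labels from human coders vs. LLM annotations
# for the same documents (toy sentiment scheme).
human = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "neu"]
llm   = ["pos", "neg", "pos", "pos", "neu", "pos", "neg", "neg"]

# Cohen's kappa corrects raw percent agreement for chance agreement;
# values near 1 indicate reliable annotation, near 0 chance-level.
kappa = cohen_kappa_score(human, llm)
print(f"Cohen's kappa: {kappa:.2f}")
```

In practice one would also want to re-query the LLM on the same items to check stability across runs, since raw accuracy on a single pass can mask inconsistency.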
Great analogy to connect AI to many canonical political science questions. Political behavior has led the way in studying AI. Excited to see institutions catch up😀
My "AI as Governance" piece is now out at @annualreviews.bsky.social of Political Science. It should be free access to everyone and I'm very happy with how it worked out (the second half is an extended spin of Applied Gopnikism to political science @alisongopnik.bsky.social @cshalizi.bsky.social
AI as Governance | Annual Reviews
Political scientists have had remarkably little to say about artificial intelligence (AI), perhaps because they are dissuaded by its technical complexity and by current debates about whether AI might ...
doi.org
June 19, 2025 at 8:39 PM
If no resource constraint, what open-weight LLM would you use in your research (for data labeling, coding etc.)?
May 7, 2025 at 11:51 PM
Awesome work! Love to see different approaches to this problem.
1/9
We are excited to share our new working paper:
arxiv.org/abs/2502.12323
If you use ML predictions (like remote-sensed data) as outcomes, the resulting regression coefficients can be biased by measurement error. With @megan-ayers.bsky.social @mdgordo.bsky.social @eliana-stone.bsky.social
Adversarial Debiasing for Unbiased Parameter Recovery
Advances in machine learning and the increasing availability of high-dimensional data have led to the proliferation of social science research that uses the predictions of machine learning models as p...
arxiv.org
March 18, 2025 at 6:20 PM
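A minimal simulation (not from the paper, which proposes adversarial debiasing) illustrating the bias the quoted post describes: when ML predictions used as outcomes carry error correlated with the regressors, OLS coefficients are biased. The shrinkage factor and noise scale below are made-up values for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)           # true outcome

# Hypothetical ML predictor that shrinks toward the mean: its error
# is correlated with y (and hence with x), i.e. non-classical.
y_hat = 0.6 * y + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(ols_slope(x, y))      # ~2.0: unbiased with the true outcome
print(ols_slope(x, y_hat))  # ~1.2: attenuated with predicted outcome
```

Note the contrast with classical measurement error: purely random noise in the outcome would leave the slope unbiased and only inflate its variance, which is why the correlated-error case is the dangerous one.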
Really interesting read. Refreshing perspective.
1. @alisongopnik.bsky.social, Cosma Shalizi, James Evans and myself have a new piece in Science on "AI" Large Models, pushing back against much of the collective wisdom about what they can and can't do. Official below, unpaywalled at henryfarrell.net/large-ai-mod... . So why this now?
Large AI models are cultural and social technologies
Implications draw on the history of transformative information systems from the past
www.science.org
March 18, 2025 at 6:18 PM