Myra Cheng
@myra.bsky.social
PhD candidate @ Stanford NLP
https://myracheng.github.io/
Reposted by Myra Cheng
No better time to start learning about that #AI thing everyone's talking about...

📢 I'm recruiting PhD students in Computer Science or Information Science @cornellbowers.bsky.social!

If you're interested, apply to either department (yes, either program!) and list me as a potential advisor!
November 6, 2025 at 4:19 PM
Reposted by Myra Cheng
As of June 2025, 66% of Americans have never used ChatGPT.

Our new position paper, Attention to Non-Adopters, explores why this matters: AI research is being shaped around adopters—leaving non-adopters’ needs, and key LLM research opportunities, behind.

arxiv.org/abs/2510.15951
October 21, 2025 at 5:12 PM
Reposted by Myra Cheng
I'll be at COLM next week! Let me know if you want to chat! @colmweb.org

@neilrathi.bsky.social will be presenting our work on multilingual overconfidence in language models and its effects on human overreliance!

arxiv.org/pdf/2507.06306
October 3, 2025 at 5:33 PM
Reposted by Myra Cheng
🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧵
October 1, 2025 at 3:16 PM
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
October 3, 2025 at 10:53 PM
Thoughtful NPR piece about ChatGPT relationship advice! Thanks for mentioning our research :)
August 5, 2025 at 2:38 PM
Reposted by Myra Cheng
#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote). Find @myra.bsky.social and me if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper.
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few preprints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems.
July 29, 2025 at 7:45 AM
Reposted by Myra Cheng
New paper hot off the press www.nature.com/articles/s41...

We analysed over 40,000 computer vision papers from CVPR (the longest-standing CV conference) & associated patents, tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance.

1/
Computer-vision research powers surveillance technology - Nature
An analysis of research papers and citing patents indicates the extensive ties between computer-vision research and surveillance.
June 25, 2025 at 5:29 PM
Do people actually like human-like LLMs? In our #ACL2025 paper HumT DumT, we find a kind of uncanny valley effect: users dislike LLM outputs that are *too human-like*. We thus develop methods to reduce human-likeness without sacrificing performance.
June 12, 2025 at 12:07 AM
Dear ChatGPT, Am I the Asshole?
While Reddit users might say yes, your favorite LLM probably won’t.
We present Social Sycophancy: a new way to understand and measure sycophancy as the degree to which LLMs overly preserve users' self-image.
May 21, 2025 at 4:51 PM
How does the public conceptualize AI? Rather than relying on self-reported measures, we use metaphors to understand the nuance and complexity of people's mental models. In our #FAccT2025 paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.
May 2, 2025 at 1:19 AM
New ICLR blogpost! 🎉 We argue that understanding the impact of anthropomorphic AI is critical to understanding the impact of AI.
April 27, 2025 at 9:55 PM
Reposted by Myra Cheng
How can we better think and talk about human-like qualities attributed to language technologies like LLMs? In our #CHI2025 paper, we taxonomize how text outputs from user interactions with language technologies can contribute to anthropomorphism. arxiv.org/abs/2502.09870 1/n
March 6, 2025 at 3:43 AM
Check out our recent work studying anthropomorphic AI!
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few preprints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems.
March 5, 2025 at 10:04 PM
Reposted by Myra Cheng
New preprint!
Metaphors shape how people understand politics, but measuring them (& their real-world effects) is hard.

We develop a new method to measure metaphor & use it to study dehumanizing metaphor in 400K immigration tweets. Link: bit.ly/4i3PGm3

#NLP #NLProc #polisky #polcom #compsocialsci
🐦🐦
February 20, 2025 at 7:59 PM
Love this!!
There's been a lot of work on "culture" in NLP, but not much agreement on what it is.

A position paper by me, @dbamman.bsky.social, and @ibleaman.bsky.social on cultural NLP: what we want, what we have, and how sociocultural linguistics can clarify things.

Website: naitian.org/culture-not-...

1/n
February 28, 2025 at 3:34 AM