Alicia DeVrio
uhleeeeeeeshuh.bsky.social
HCI PhD @ CMU studying power of everyday people to resist harmful AI
also enjoys weaving, musicals, grammar, ice cream, libraries
--> all the other whatever at uhleeeeeeeshuh.com
a bit tangential but I've been pretty interested in "misuse" as a category as well, especially given how LLMs have been marketed as "relevant in any context"/"general use"
September 8, 2025 at 3:11 PM
+ some related work from the team at #ICLR2025 !!
New ICLR blogpost! 🎉 We argue that understanding the impact of anthropomorphic AI is critical to understanding the impact of AI.
April 27, 2025 at 10:55 PM
Yes, I’ll be there — would love to chat and hear more about your work!!
March 9, 2025 at 3:02 PM
& Check out more of our related work from this summer in this great bsky thread: n/n
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few pre-prints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems.
March 6, 2025 at 4:00 AM
This paper comes out of a great summer at MSR FATE. Thanks to my coauthors @myra.bsky.social @lisaegede.bsky.social @aolteanu.bsky.social Su Lin Blodgett and our reviewers. Check out the whole paper here: arxiv.org/abs/2502.09870 4/n
A Taxonomy of Linguistic Expressions That Contribute To Anthropomorphism of Language Technologies
Recent attention to anthropomorphism -- the attribution of human-like qualities to non-human objects or entities -- of language technologies like LLMs has sparked renewed discussions about potential n...
arxiv.org
March 6, 2025 at 4:00 AM
Especially important are challenges around the nature of language & tensions involved in shifting conceptions of the human-likeness of technology. Check out Section 5.2 of the paper for more on how this relates to standard language ideology & the risks of dehumanizing humans. arxiv.org/abs/2502.09870 3/n
March 6, 2025 at 3:44 AM
Recent discussions have considered when anthropomorphism might be inappropriate. We encourage using our taxonomy for more targeted identification and mitigation of harmful impacts stemming from anthropomorphism of language technologies. arxiv.org/abs/2502.09870 2/n
March 6, 2025 at 3:44 AM
(in other news pls add me to starter packs 🥺👉👈 I do HCI research on harmful algorithmic systems & the ways that everyday people act to resist them)
November 20, 2024 at 3:25 PM