Alexandra Olteanu
@aolteanu.bsky.social
Ethical/Responsible AI. Rigor in AI. Opinions my own. Principal Researcher @ Microsoft Research. Grumpy Eastern European in North America. Lovingly nitpicky.
Pinned
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor not only lead to one-off undesirable outcomes but can also have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
Unexpected (amount of) snow day
November 11, 2025 at 4:42 AM
Reposted by Alexandra Olteanu
Our forthcoming NeurIPS position paper, led by @aolteanu.bsky.social, makes this argument (along with several related ones) in more depth. Rigorous AI/ML work should flow from explicit and rigorous premises, not just have a final evaluation that checks some rigor boxes. arxiv.org/abs/2506.14652
Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor
In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue th...
arxiv.org
November 7, 2025 at 12:19 PM
I wish folks would use more precise terminology than "AI sycophancy." Not all validating behaviours/interactions are sycophantic. By definition, for them to be sycophantic there needs to be an underlying intention to, e.g., gain advantage or favour. Intention is something AI systems do not have.
October 18, 2025 at 1:44 AM
Love this analogy
People who coax chatbots into sensible answers are basically opening and closing the fridge until it contains something you wanna eat. Yes, eventually you get hungrier & eat the stuff in there. But what changed was your cognition. The fridge stayed the same. You changed your mind about the contents.
October 1, 2025 at 10:23 PM
This was accepted to #NeurIPS 🎉🎊

TL;DR Impoverished notions of rigor can have a formative impact on AI work. We argue for a broader conception of what rigorous work should entail, one that goes beyond methodological issues to include epistemic, normative, conceptual, reporting & interpretative considerations
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor not only lead to one-off undesirable outcomes but can also have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
September 29, 2025 at 11:13 PM
Reposted by Alexandra Olteanu
"Epistemic rigor, however, does not necessarily require specific epistemological commitments or choices but rather that those commitments and choices be made explicit."

www.arxiv.org/abs/2506.14652
Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor
In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue th...
www.arxiv.org
September 12, 2025 at 2:09 PM
Listening to a workshop panel at #acl2025, I am realizing that we have been saying more or less the same things and having more or less the same conversations for so many years
July 31, 2025 at 11:52 AM
#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find @myra.bsky.social and me if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few pre-prints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems
July 29, 2025 at 7:45 AM
Reposted by Alexandra Olteanu
In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.
July 21, 2025 at 1:48 AM
Who is attending @aclmeeting.bsky.social in Vienna? Reach out or find me there if you want to chat! #acl2025nlp
July 19, 2025 at 4:10 PM
Reposted by Alexandra Olteanu
My university has announced a fund to essentially poach doctoral students from US institutions. DM me if you do work on the history/social impacts of AI and are interested in being poached 😂
July 17, 2025 at 8:17 PM
Not sure who needs to hear this but what people want AI systems to do, what AI systems do, and what people believe AI systems do are not the same thing. Just because one wants or believes AI systems do or can do certain things doesn't mean they actually do those things.
July 16, 2025 at 10:14 PM
Reposted by Alexandra Olteanu
If you're at @icmlconf.bsky.social this week, come check out our poster on "Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge" presented by the amazing @afedercooper.bsky.social from 11:30am--1:30pm PDT on Weds!!! icml.cc/virtual/2025...
ICML Poster: Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge (ICML 2025)
icml.cc
July 15, 2025 at 6:35 PM
Reposted by Alexandra Olteanu
Do you have strong programming skills but need research experience doing meaningful & exciting CSS projects before heading off to a top graduate school for a computational social science PhD? Apply now to predoc with me,
@dggoldst.bsky.social @jakehofman.bsky.social www.microsoft.com/en-us/resear...
Predoctoral Research Assistant (Contract) – Computational Social Science - Microsoft Research
Are you a recent college graduate wishing to gain research experience prior to pursuing a Ph.D. in fields related to computational social science (CSS)? Do you have a deep love of “playing with data”—...
www.microsoft.com
July 10, 2025 at 3:48 PM
Reposted by Alexandra Olteanu
Someone asked me today how to get better at scientific writing. I'm not the best person to ask because I find my own writing very inadequate! But the tips I thought of were:

1. Practice, and practice with co-authors who are better writers than you. Observe how they make edits and copy them.

(1/n)
July 4, 2025 at 10:46 AM
FAccT is such a special community & many of us have invested a lot of service time/effort to support it over the years. I do believe engaging with uncomfortable questions & dialogue is important even when there is criticism (which can be hard to hear, can feel unfair/demotivating & sucks) #facct2025
June 27, 2025 at 9:39 AM
Reposted by Alexandra Olteanu
Flattered and shocked that our paper received the #facct2025 best paper award.
🏆 Announcing the #FAccT2025 best paper awards! 🏆

Congratulations to all the authors of the three best papers and three honorable mention papers.

Be sure to check out their presentations at the conference next week!

facct-blog.github.io/2025-06-20/b...
Announcing Best Paper Awards
The Best Paper Award Committee was chaired this year by Alex Chouldechova and included six Area Chairs. The committee selected three papers for the Best Paper Award and recognized three additional pap...
facct-blog.github.io
June 21, 2025 at 1:16 AM
Two years after the craft session on theories of change in responsible AI, I am glad to see this discussion taking center stage as a keynote panel #facct2025
June 25, 2025 at 11:16 AM
There is a lot of talk and effort to figure out how genAI is different (I am also guilty of this!) -- the reality is that genAI is not that different, and it is not that new either; it was hard to evaluate in the past, and it is still just as hard to evaluate now #facct2025
June 23, 2025 at 7:17 AM
Reposted by Alexandra Olteanu
Your #FAccT2025 General Chairs @sciorestis.bsky.social, @metaxa.net, and I, reporting from the venue.

We're looking forward to welcoming you to the Athens Conservatoire or online!
June 22, 2025 at 7:09 PM
Who is going to @facct.bsky.social? I will be arriving in Athens late tomorrow morning and am looking forward to catching up with old and new friends at FAccT ☀️
June 21, 2025 at 9:52 AM
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor not only lead to one-off undesirable outcomes but can also have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
June 18, 2025 at 11:48 AM
Reposted by Alexandra Olteanu
Alright, people, let's be honest: GenAI systems are everywhere, and figuring out whether they're any good is a total mess. Should we use them? Where? How? Do they need a total overhaul?

(1/6)
June 15, 2025 at 12:20 AM
Reposted by Alexandra Olteanu
I'm so excited this paper is finally online!!! 🎉 We had so much fun working on this with @emmharv.bsky.social!!! Thread below summarizing our contributions...
📣 "Understanding and Meeting Practitioner Needs When Measuring Representational Harms Caused by LLM-Based Systems" is forthcoming at #ACL2025NLP - and you can read it now on arXiv!

🔗: arxiv.org/pdf/2506.04482
🧵: ⬇️
June 10, 2025 at 7:12 PM