Abhilasha Ravichander
@lasha.bsky.social
Incoming faculty at the Max Planck Institute for Software Systems
Postdoc at UW, working on Natural Language Processing
Recruiting PhD students!

🌐 https://lasharavichander.github.io/
Pinned
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
Reposted by Abhilasha Ravichander
𝙒𝙚'𝙧𝙚 𝙝𝙞𝙧𝙞𝙣𝙜 𝙣𝙚𝙬 𝙛𝙖𝙘𝙪𝙡𝙩𝙮 𝙢𝙚𝙢𝙗𝙚𝙧𝙨!

KSoC: utah.peopleadmin.com/postings/190... (AI broadly)

Education + AI:
- utah.peopleadmin.com/postings/189...
- utah.peopleadmin.com/postings/190...

Computer Vision:
- utah.peopleadmin.com/postings/183...
November 7, 2025 at 11:35 PM
Reposted by Abhilasha Ravichander
Which, whose, and how much knowledge do LLMs represent?

I'm excited to share our preprint answering these questions:

"Epistemic Diversity and Knowledge Collapse in Large Language Models"

📄Paper: arxiv.org/pdf/2510.04226
💻Code: github.com/dwright37/ll...

1/10
October 13, 2025 at 11:25 AM
Reposted by Abhilasha Ravichander
Go check Alex's poster today (Wed) in Suzhou! #EMNLP2025

I'm still so proud of our work (led by @lasha.bsky.social) on CondaQA, so we had to ask what would happen if we tried to create high-quality reasoning-over-text benchmarks now that LLMs are available. Turns out, we'd make an easier benchmark!
I'll be in Suzhou 🇨🇳 at #EMNLP this week presenting "What has been Lost with Synthetic Evaluation?" done with @anamarasovic.bsky.social & @lasha.bsky.social! 🎉

📍Findings Session 1 - Hall C
📅 Wed, November 5, 13:00 - 14:00

arxiv.org/abs/2505.22830
November 4, 2025 at 10:44 PM
Reposted by Abhilasha Ravichander
I'll be in Suzhou 🇨🇳 at #EMNLP this week presenting "What has been Lost with Synthetic Evaluation?" done with @anamarasovic.bsky.social & @lasha.bsky.social! 🎉

📍Findings Session 1 - Hall C
📅 Wed, November 5, 13:00 - 14:00

arxiv.org/abs/2505.22830
November 3, 2025 at 11:03 AM
Reposted by Abhilasha Ravichander
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
October 3, 2025 at 10:53 PM
Reposted by Abhilasha Ravichander
Interested in language models, brains, and concepts? Check out our COLM 2025 🔦 Spotlight paper!

(And if you’re at COLM, come hear about it on Tuesday – sessions Spotlight 2 & Poster 2)!
October 4, 2025 at 2:15 AM
Reposted by Abhilasha Ravichander
LLMs are trained to mimic a "true" distribution, and their decreasing cross-entropy confirms they get closer to this target during training. But do similar models approach this target distribution in similar ways? 🤔 Not really! Our new paper studies this, finding 4 convergence phases in training 🧵
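For context, this framing rests on a standard identity (my gloss, not the paper's notation): cross-entropy decomposes into the target's entropy plus a KL term, so falling cross-entropy does mean the model is approaching the target distribution.

```latex
% Cross-entropy of model $q_\theta$ against the true distribution $p$:
H(p, q_\theta) = -\sum_x p(x)\log q_\theta(x) = H(p) + \mathrm{KL}\!\left(p \,\|\, q_\theta\right)
% $H(p)$ is a constant of the data, so a falling $H(p, q_\theta)$ means a
% falling $\mathrm{KL}(p \,\|\, q_\theta)$, i.e., $q_\theta$ approaching $p$.
```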
October 1, 2025 at 6:08 PM
It is PhD application season again 🍂 For those looking to do a PhD in AI, these are some useful resources 🤖:

1. Examples of statements of purpose (SOPs) for computer science PhD programs: cs-sop.org [1/4]
CS PhD Statements of Purpose
cs-sop.org is a platform intended to help CS PhD applicants. It hosts a database of example statements of purpose (SoP) shared by previous applicants to Computer Science PhD programs.
October 1, 2025 at 8:37 PM
Reposted by Abhilasha Ravichander
Happy to see this work accepted to #EMNLP2025! 🎉🎉🎉
August 20, 2025 at 8:49 PM
I am recruiting emergency reviewers for *SEM 2025 (The 14th Joint Conference on Lexical and Computational Semantics). Please DM me if you might be able to contribute a review within the next few days 🙏
August 12, 2025 at 12:44 AM
Reposted by Abhilasha Ravichander
We’re thrilled to congratulate Dr. Abhilasha Ravichander (@lasha.bsky.social) and her team for receiving the Outstanding Paper Award at #acl2025 for their work titled "HALoGEN: Fantastic LLM Hallucinations and Where to Find Them"! 🏆✨

#ACL #LLMs #Hallucination #WiAIR #WomenInAI
August 8, 2025 at 4:49 PM
Reposted by Abhilasha Ravichander
Status Update: I'm in the middle of my move from Denmark to Colorado! If I seem to be missing for the next 1-2 weeks, that is the main reason. Picture me lost amidst suitcases, boxes, moving pods, and far too many books.

Copenhagen friends, I'm here for a couple more days! Please stop by P1 to say bye 🥺
August 6, 2025 at 4:11 PM
Reposted by Abhilasha Ravichander
🎙️ New Women in AI Research #WiAIR episode coming Aug 6!

We talk to @lasha.bsky.social about LLM Hallucination, her award-winning HALoGEN benchmark, and how we can better evaluate hallucinations in language models.
👇 What’s inside:
1/
August 1, 2025 at 2:35 PM
Reposted by Abhilasha Ravichander
Director at Max Planck - a unique position! The Open Call for Expressions of Interest in Max Planck Directorships is now open; expressions of interest can be submitted until the 31st of October 2025. ➡️ mpg.de/directors - Please share the Open Call among potential candidates.
August 1, 2025 at 9:06 AM
Super super thrilled that HALoGEN, our study of LLM hallucinations and their potential origins in training data, received an ✨Outstanding Paper Award✨ at ACL!

Joint work w/ Shrusti Ghela*, David Wadden, and Yejin Choi

bsky.app/profile/lash...
We are launching HALoGEN💡, a way to systematically study *when* and *why* LLMs still hallucinate.

New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi 💫

📝 Paper: arxiv.org/abs/2501.08292
🚀 Code/Data: github.com/AbhilashaRav...
🌐 Website: halogen-hallucinations.github.io 🧵 [1/n]
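The systematic study works by a decompose-then-verify recipe: generations are broken into atomic units, and each unit is checked against a high-quality knowledge source. A minimal sketch of that scoring loop, with hypothetical helpers (the real domain-specific verifiers live in the repository above):

```python
# Hypothetical sketch of decompose-then-verify hallucination scoring;
# HALoGEN's actual verifiers are domain-specific and in the linked repo.

def decompose(generation: str) -> list[str]:
    # Placeholder decomposition: naive sentence splitting stands in for
    # breaking a generation into atomic claims.
    return [s.strip() for s in generation.split(".") if s.strip()]

def is_supported(claim: str, knowledge_source: set[str]) -> bool:
    # Placeholder lookup against a trusted knowledge source.
    return claim in knowledge_source

def hallucination_score(generation: str, knowledge_source: set[str]) -> float:
    # Score = fraction of atomic units not supported by the source.
    claims = decompose(generation)
    if not claims:
        return 0.0
    unsupported = sum(not is_supported(c, knowledge_source) for c in claims)
    return unsupported / len(claims)
```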
July 30, 2025 at 7:53 PM
Reposted by Abhilasha Ravichander
Join Abhilasha's lab; she is an awesome researcher and mentor! I can attest: being her collaborator was great fun 🤩
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
July 24, 2025 at 1:25 PM
Reposted by Abhilasha Ravichander
People applying for NLP PhDs, work with Abhilasha -- she is awesome!!
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
July 23, 2025 at 7:58 AM
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
July 22, 2025 at 4:12 AM
Reposted by Abhilasha Ravichander
I'm sadly not at #IC2S2 😭, but I will be at #ACL2025 in Vienna ☕️ next week!!

Please spread the word that I'm recruiting prospective PhD students: lucy3.notion.site/for-prospect...
For Prospective PhD Students
I’m recruiting PhD students who will begin their degree in Fall 2026! I am an incoming assistant professor at Wisconsin-Madison’s Computer Sciences department, and my research focuses on natural langu...
July 22, 2025 at 1:09 AM
Reposted by Abhilasha Ravichander
💡Beyond math/code, instruction following with verifiable constraints is well suited to learning with RLVR.
But the existing set of constraints and verifier functions is limited, and most models overfit to IFEval.
We introduce IFBench to measure how well models generalize to unseen constraints.
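To make "verifiable constraint" concrete: it is a constraint whose satisfaction a deterministic program can check, which then supplies the RLVR reward. A minimal sketch, with hypothetical constraints and function names (not IFBench's actual verifiers):

```python
# Hypothetical verifiable instruction-following constraints: deterministic
# checks over model output, so an RLVR reward needs no learned judge.

def verify_word_limit(response: str, max_words: int = 50) -> bool:
    """Constraint: 'Answer in at most 50 words.'"""
    return len(response.split()) <= max_words

def verify_keyword(response: str, keyword: str = "Suzhou") -> bool:
    """Constraint: 'Mention the word Suzhou.'"""
    return keyword.lower() in response.lower()

def reward(response: str) -> float:
    # Binary RLVR reward: 1.0 only if every constraint is satisfied.
    checks = [verify_word_limit(response), verify_keyword(response)]
    return float(all(checks))

print(reward("The venue is in Suzhou this year."))  # 1.0
```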
July 3, 2025 at 9:06 PM
Reposted by Abhilasha Ravichander
𝐖𝐡𝐚𝐭 𝐇𝐚𝐬 𝐁𝐞𝐞𝐧 𝐋𝐨𝐬𝐭 𝐖𝐢𝐭𝐡 𝐒𝐲𝐧𝐭𝐡𝐞𝐭𝐢𝐜 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧?

(arxiv.org/abs/2505.22830)

I'm happy to announce that the preprint of my first project is now online! Developed with the amazing support of @lasha.bsky.social & @anamarasovic.bsky.social
What Has Been Lost with Synthetic Evaluation?
Large language models (LLMs) are increasingly used for data generation. However, creating evaluation benchmarks raises the bar for this emerging paradigm. Benchmarks must target specific phenomena, pe...
June 4, 2025 at 10:24 PM
Reposted by Abhilasha Ravichander
🚨 Preprint alert 🚨

𝐂𝐚𝐧 𝐂𝐨𝐦𝐦𝐮𝐧𝐢𝐭𝐲 𝐍𝐨𝐭𝐞𝐬 𝐑𝐞𝐩𝐥𝐚𝐜𝐞 𝐏𝐫𝐨𝐟𝐞𝐬𝐬𝐢𝐨𝐧𝐚𝐥 𝐅𝐚𝐜𝐭-𝐂𝐡𝐞𝐜𝐤𝐞𝐫𝐬?

(arxiv.org/abs/2502.14132)

Fact-checking agencies have come under intense scrutiny in recent months regarding their role in combating misinformation on social media.
February 21, 2025 at 10:30 AM
Reposted by Abhilasha Ravichander
*SEM 2025: The direct submission deadline is now EXTENDED to June 13, 2025! *SEM will be co-located with EMNLP in Suzhou, China. It will be a hybrid event, with virtual attendance possible (more info will be posted later). Full CFP: starsem2025.github.io/cfp

#NLP #NLProc
Call for Papers
Official website of the 14th Joint Conference on Lexical and Computational Semantics
May 28, 2025 at 3:38 AM
HALoGEN will appear at #ACL2025, see you in Vienna!
We are launching HALoGEN💡, a way to systematically study *when* and *why* LLMs still hallucinate.

New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi 💫

📝 Paper: arxiv.org/abs/2501.08292
🚀 Code/Data: github.com/AbhilashaRav...
🌐 Website: halogen-hallucinations.github.io 🧵 [1/n]
May 16, 2025 at 12:00 PM
This is an awesome awesome-llm-unlearning resource, maintained by Chris Liu: github.com/chrisliu298/...
GitHub - chrisliu298/awesome-llm-unlearning: A resource repository for machine unlearning in large language models
A resource repository for machine unlearning in large language models - chrisliu298/awesome-llm-unlearning
May 13, 2025 at 2:53 PM