Abhilasha Ravichander
@lasha.bsky.social
Incoming faculty at the Max Planck Institute for Software Systems
Postdoc at UW, working on Natural Language Processing
Recruiting PhD students!

🌐 https://lasharavichander.github.io/
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
July 22, 2025 at 4:12 AM
This seems to be known to o1!
March 24, 2025 at 10:06 PM
Intuition: In text, some tokens carry higher “information” than others (are more surprising based on context). We find and mask such tokens, and study if models recover them.

If a model recovers a token that is unpredictable from its context, the only remaining mechanism is memorization. (2/5)
March 21, 2025 at 7:08 PM
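The post above describes the masking-and-recovery probe in words; a minimal sketch of the general idea follows. This is not the authors' code: it assumes GPT-2 (via Hugging Face transformers) as a stand-in scoring model, and the top-k cutoff, the `[MASK]` placeholder, and `mask_high_information_tokens` are illustrative choices.

```python
# Minimal sketch of the masking idea (not the authors' code): score per-token
# surprisal with a small causal LM, then mask the most surprising tokens so a
# separate model can be tested on recovering them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative stand-in for a scoring model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_surprisals(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal in nats) pairs; higher = more 'information'."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Surprisal of token t is -log p(token_t | tokens_<t); the first token
    # has no left context, so we skip it.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    surprisals = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    tokens = tokenizer.convert_ids_to_tokens(targets[0])
    return list(zip(tokens, surprisals[0].tolist()))

def mask_high_information_tokens(text: str, top_k: int = 3) -> list[str]:
    """Replace the top_k most surprising tokens with a placeholder."""
    scored = token_surprisals(text)
    cutoff = sorted(s for _, s in scored)[-top_k]
    return ["[MASK]" if s >= cutoff else tok for tok, s in scored]

print(mask_high_information_tokens("Barack Obama was born in Honolulu, Hawaii."))
```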
For a factual recall task about senators' educational affiliations, we find that the model often has access to the correct information but still hallucinates (Type A errors) [7/n]
January 31, 2025 at 6:27 PM
We are launching HALoGEN💡, a way to systematically study *when* and *why* LLMs still hallucinate.

New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi 💫

📝 Paper: arxiv.org/abs/2501.08292
🚀 Code/Data: github.com/AbhilashaRav...
🌐 Website: halogen-hallucinations.github.io 🧵 [1/n]
January 31, 2025 at 6:27 PM
HALoGEN is a large-scale dataset for analyzing LLM hallucinations, featuring

✍️ 10,923 prompts across 9 different long-form tasks
🧐 Automatic verifiers that break down AI model responses into individual facts, and check each fact against a reliable knowledge source [2/n]
January 31, 2025 at 2:25 PM
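For readers who want a concrete picture of the "decompose, then verify" loop described in the post above, here is a rough sketch. It is not the HALoGEN implementation: `decompose_into_facts`, `KnowledgeSource`, and the naive sentence-level splitting are hypothetical placeholders for the task-specific decomposers and verifiers the paper describes.

```python
# Rough sketch of the verification loop described above (not the HALoGEN code).
from dataclasses import dataclass

@dataclass
class Verdict:
    fact: str
    supported: bool

class KnowledgeSource:
    """Stand-in for a reliable knowledge source (e.g., a curated database)."""
    def __init__(self, known_facts: set[str]):
        self.known_facts = known_facts

    def supports(self, fact: str) -> bool:
        return fact in self.known_facts

def decompose_into_facts(response: str) -> list[str]:
    # Placeholder: naively treat each sentence as one atomic fact.
    return [s.strip() for s in response.split(".") if s.strip()]

def verify_response(response: str, source: KnowledgeSource) -> list[Verdict]:
    facts = decompose_into_facts(response)
    return [Verdict(fact=f, supported=source.supports(f)) for f in facts]

def hallucination_rate(verdicts: list[Verdict]) -> float:
    if not verdicts:
        return 0.0
    return sum(not v.supported for v in verdicts) / len(verdicts)

# Toy usage: one supported fact, one unsupported fact -> rate of 0.5
source = KnowledgeSource({"Senator X attended University Y"})
verdicts = verify_response(
    "Senator X attended University Y. Senator X attended University Z", source
)
print(hallucination_rate(verdicts))  # 0.5
```

The actual verifiers are task-specific and more careful than this sentence split, but the overall shape, decomposing a response into atomic facts and checking each against a knowledge source, matches what the thread describes.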