Niranjan
niranjanb.bsky.social
NLP Researcher
Language Understanding and Reasoning Lab
Stony Brook University
Reposted by Niranjan
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
May 20, 2025 at 5:58 PM
Reposted by Niranjan
Our paper "Misattribution Matters: Quantifying Unfairness in Authorship Attribution" got accepted to #ACL2025!
@niranjanb.bsky.social @ajayp95.bsky.social

Arxiv link coming hopefully soon!
May 16, 2025 at 3:01 AM
Reposted by Niranjan
🔥 BIG CONGRATS to Elias (and UT Austin)! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes 🤗

Make sure to apply for your PhD with him -- he is an amazing advisor and person! 💙
Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉
May 5, 2025 at 10:00 PM
Reposted by Niranjan
If you are at #AISTATS2025 and are interested in concept erasure, talk to @somnathbrc.bsky.social at Poster Session 1 on Saturday May 3.
May 3, 2025 at 12:47 AM
Reposted by Niranjan
I’ll be presenting Meta-Reasoning Improves Tool Use in Large Language Models at #NAACL25 tomorrow, Thursday May 1st, from 2 to 3:30pm in Hall 3! Come check it out and have a friendly chat if you’re interested in LLM reasoning and tools 🙂 #NAACL
April 30, 2025 at 8:58 PM
Reposted by Niranjan
Thrilled that our paper won 🏆 Best Paper Runner-Up 🏆 at #NAACL25!!

Our work (REL-A.I.) introduces an evaluation framework that measures human reliance on LLMs and reveals how contextual features like anthropomorphism, subject, and user history can significantly influence user reliance behaviors.
April 29, 2025 at 10:43 PM
Reposted by Niranjan
🚀 Excited to share a new interp+agents paper: 🐭🐱 MICE for CATs: Model-Internal Confidence Estimation for Calibrating Agents with Tools appearing at #NAACL2025

This was work done @msftresearch.bsky.social last summer with Jason Eisner, Justin Svegliato, Ben Van Durme, Yu Su, and Sam Thomson

1/🧵
April 29, 2025 at 1:41 PM
I'll do the advisory advertising: @ykl7.bsky.social is a fantastic researcher and is passionate about being in academia. He has this amazing ability to simply get things done! Happy to say more in a letter or over a chat, but if you are going to @naaclmeeting.bsky.social (#NAACL2025), ping him.
I'm headed to #NAACL2025 ✈️ in Albuquerque 🏜️ Looking for postdoc positions in the US, so if you're hiring (or know someone who is), let's chat at the conference! Also organizing #WNU2025, so make sure to swing by the workshop on May 4!
April 29, 2025 at 6:40 PM
Reposted by Niranjan
We are launching HALoGEN💡, a way to systematically study *when* and *why* LLMs still hallucinate.

New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi 💫

📝 Paper: arxiv.org/abs/2501.08292
🚀 Code/Data: github.com/AbhilashaRav...
🌐 Website: halogen-hallucinations.github.io

🧵 [1/n]
January 31, 2025 at 6:27 PM
Reposted by Niranjan
🟢 Announcing the #NAACL2025 Award Winners!

The Best Paper and Best Theme Paper winners will present at our closing session

2025.naacl.org/blog/best-pa...
April 25, 2025 at 4:04 PM
Reposted by Niranjan
🚨Real-world retrieval is messy: queries are ambiguous or docs conflict & have incorrect/irrelevant info. How can we jointly address these problems?

➡️RAMDocs: challenging dataset w/ ambiguity, misinformation & noise
➡️MADAM-RAG: multi-agent framework, debates & aggregates evidence across sources

🧵⬇️
April 18, 2025 at 5:06 PM
Reposted by Niranjan
Check out @juand-r.bsky.social and @wenxuand.bsky.social 's work on improving generator-validator gaps in LLMs! I really like the formulation of the G-V gap we present, and I was pleasantly surprised by how well the ranking-based training closed the gap. Looking forward to following up in this area!
One of the ways that LLMs can be inconsistent is the "generator-validator gap," where LLMs deem their own answers incorrect.

🎯 We demonstrate that ranking-based discriminator training can significantly reduce this gap, and improvements on one task often generalize to others!

🧵👇
April 16, 2025 at 6:18 PM
Reposted by Niranjan
For years it’s been an open question — how much is a language model learning and synthesizing information, and how much is it just memorizing and reciting?

Introducing OLMoTrace, a new feature in the Ai2 Playground that begins to shed some light. 🔦
April 9, 2025 at 1:16 PM
Reposted by Niranjan
Please share it within your circles! edin.ac/3DDQK1o
March 13, 2025 at 11:59 AM
Reposted by Niranjan
Excited to announce the COLM 2025 keynote speakers: Shirley Ho, Nicholas Carlini, @lukezettlemoyer.bsky.social, and Tom Griffiths!

See you in October in Montreal!
March 10, 2025 at 2:34 PM
Thanks @mohitbansal.bsky.social for the wonderful Distinguished Lecture on agents and multimodal generation. This got so many of us here at Stony Brook excited for the potential in these areas. Also, thanks for spending time with our students & sharing your wisdom. It was a pleasure hosting you!
Excited to host the wonderful @mohitbansal.bsky.social as part of Stony Brook CS Distinguished Lecture Series on Dec 6th. Looking forward to hearing about his team's fantastic work on Planning Agents for Collaborative Reasoning and Multimodal Generation. More here: tinyurl.com/jkmex3e9
December 9, 2024 at 12:49 PM
Excited to host the wonderful @mohitbansal.bsky.social as part of Stony Brook CS Distinguished Lecture Series on Dec 6th. Looking forward to hearing about his team's fantastic work on Planning Agents for Collaborative Reasoning and Multimodal Generation. More here: tinyurl.com/jkmex3e9
December 3, 2024 at 3:07 PM
Reposted by Niranjan
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux

Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
November 23, 2024 at 7:54 PM
Reposted by Niranjan
✨New pre-print!✨ Successful language technologies should work for a wide variety of languages. But some languages have systematically worse performance than others. In this paper we ask whether performance differences are due to morphological typology. Spoiler: I don’t think so! #NLP #linguistics
November 22, 2024 at 3:04 PM
Reposted by Niranjan
Using LLMs for query or document expansion in retrieval (e.g. HyDE and Doc2Query) has scores going 📈

But do these approaches work for all IR models and for different types of distribution shifts? Turns out it's actually more 📉 🚨

📝 (arxiv soon): orionweller.github.io/assets/pdf/L...
November 18, 2024 at 10:30 AM
Reposted by Niranjan
🚨 We are refreshing the 🌎 AppWorld (appworld.dev) leaderboard with all the new coding and/or tool-use LMs.

❓ What would you like to be included?

🔌 Self-plugs are welcome!!

x.com/harsh3vedi/s...
November 21, 2024 at 2:11 PM