Mohit Chandra
@mohit30.bsky.social
PhDing @GeorgiaTech | Previously: @msftresearch.bsky.social, @Microsoft @iiithyderabad | Research: NLP and Social Computing for Healthcare | Opinions are personal

Homepage: https://mohit3011.github.io/

#ResponsibleAI #Human-CenteredAI #NLPforMentalHealth
Hello Athens! 👋☀️

Excited to be attending #FAccT 2025 and presenting our paper “From Lived Experience to Insight” on 24th June at 10:45 AM (New Stage C)

dl.acm.org/doi/10.1145/...

Would love to catch up with old friends, make new ones, and talk about AI and mental health 😄
June 23, 2025 at 9:01 PM
Finding #6: We examined the actionability of mitigation advice. Expert responses scored the highest on overall actionability compared to all the LLMs.

While LLMs provide less practical and relevant advice, their advice is clearer and more specific.

10/11
January 7, 2025 at 9:38 PM
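A minimal sketch of how an overall actionability score might aggregate the sub-dimensions named above (practical, relevant, clear, specific); the equal weighting and 1-5 scale are illustrative assumptions, not the paper's exact rubric:

```python
# Sketch: aggregating per-dimension actionability ratings into one
# overall score. Dimensions come from the finding above; the equal
# weighting and 1-5 scale are illustrative assumptions.
DIMENSIONS = ["practical", "relevant", "clear", "specific"]

def overall_actionability(scores: dict) -> float:
    """Average the per-dimension ratings into a single score."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical ratings for one response.
print(overall_actionability(
    {"practical": 3.0, "relevant": 3.5, "clear": 4.5, "specific": 4.0}
))
```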
Finding #5: LLMs struggle to provide expert-aligned harm reduction strategies, with larger models producing less expert-aligned strategies than smaller ones.

The best medical model aligned with experts ~71% of the time (GPT-4o score).

9/11
January 7, 2025 at 9:38 PM
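A rough sketch of the LLM-as-judge pattern behind a "(GPT-4o score)"; the prompt and the yes/no rubric here are invented for illustration, not the paper's actual setup:

```python
# Sketch of an LLM-as-judge alignment check; the prompt and binary
# rubric are invented for illustration, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def strategies_aligned(expert_reply: str, model_reply: str) -> bool:
    prompt = (
        "Do these two responses recommend the same harm reduction "
        "strategy? Answer with exactly 'yes' or 'no'.\n\n"
        f"Expert: {expert_reply}\n\nModel: {model_reply}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower() == "yes"

# The alignment rate over a dataset is then the fraction of
# examples where strategies_aligned(...) returns True.
```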
Using the ADRA framework, we evaluate LLM alignment with experts across expressed emotion, readability, harm reduction strategies, & actionable advice.

Finding #4: We find that LLMs express similar emotions and tones but produce responses that are significantly harder to read.

8/11
January 7, 2025 at 9:38 PM
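A minimal sketch of what a readability comparison like Finding #4 could look like, assuming textstat's Flesch-Kincaid grade as the metric (the paper's exact readability metric may differ):

```python
# Sketch: comparing readability of expert vs. LLM responses using
# textstat's Flesch-Kincaid grade (higher = harder to read); the
# paper's exact readability metric may differ.
import textstat  # pip install textstat

expert_response = "Talk to your prescriber before changing your dose."
llm_response = (
    "Pharmacovigilance literature suggests consulting a qualified "
    "clinician prior to initiating any dosage modification regimen."
)

for name, text in [("expert", expert_response), ("LLM", llm_response)]:
    print(f"{name}: grade {textstat.flesch_kincaid_grade(text):.1f}")
```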
Finding #1: Larger models perform better on ADR detection (Claude 3 Opus led with 77.41% accuracy), but this trend does not hold for multiclass ADR classification. Distinguishing between ADR types remains a significant challenge for all models.

5/11
January 7, 2025 at 9:38 PM
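For reference, both task setups reduce to standard classification metrics; a minimal sketch with scikit-learn, using illustrative placeholder labels rather than the benchmark's actual label set:

```python
# Sketch: scoring ADR detection (binary) and ADR type
# classification (multiclass); all labels below are placeholders.
from sklearn.metrics import accuracy_score, f1_score

# ADR detection: does the post express an ADR?
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
print("detection accuracy:", accuracy_score(gold, pred))

# Multiclass ADR type: distinguishing types is the harder task.
gold_type = ["sedation", "tremor", "nausea", "sedation"]
pred_type = ["sedation", "sedation", "nausea", "tremor"]
print("type macro-F1:", f1_score(gold_type, pred_type, average="macro"))
```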
We introduce Psych-ADR, a benchmark of Reddit posts annotated for ADR presence/type and paired with expert-written responses, together with the ADRA framework for systematically evaluating long-form generations on detecting ADR expressions and delivering mitigation strategies.

4/11
January 7, 2025 at 9:38 PM
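A hypothetical sketch of what a single Psych-ADR record might look like; the field names are assumptions for illustration, not the released schema:

```python
# Sketch: one plausible shape for a Psych-ADR benchmark record.
# Field names are hypothetical, not the released schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PsychADRExample:
    post_text: str            # Reddit post
    has_adr: bool             # ADR presence label
    adr_type: Optional[str]   # ADR type label, if present
    expert_response: str      # expert-written response

example = PsychADRExample(
    post_text="Started a new SSRI last week and my hands won't stop shaking...",
    has_adr=True,
    adr_type="tremor",
    expert_response=(
        "Tremor can be a side effect of some SSRIs; please mention it "
        "to your prescriber rather than stopping the medication abruptly."
    ),
)
```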
Adverse Drug Reactions (ADRs) are among the leading causes of hospitalizations for mental health issues. Despite their limitations, LLMs have the potential to detect ADRs and provide mitigation strategies.

But do LLMs align with experts? 🤔 We explore this in our work 👇🏼🧵

shorturl.at/bldCb
1/11
January 7, 2025 at 9:38 PM