Neil Sehgal
nsehgal.bsky.social
CS PhD @upenn.bsky.social
Computational Social Science @WorldBank
Harvard, Brown alum
http://sehgal-neil.github.io/
We hope these findings help health systems design more effective & scalable outreach to close preventive care gaps.

Thoughts welcome!
w/
@manueltonneau.bsky.social, @alison-buttenheim.bsky.social, @sharathg.bsky.social + team
July 14, 2025 at 2:07 PM
💡 Bottom line:
🔹 LLMs can generate short, tailored, clinically appropriate messages that move intent, particularly for lower-barrier behaviors.
🔹 These messages can fit into portals, texts, or mailed materials.
🔹 They’re low-cost & scalable.

Read more: arxiv.org/abs/2507.08211
Effect of Static vs. Conversational AI-Generated Messages on Colorectal Cancer Screening Intent: a Randomized Controlled Trial
Large language model (LLM) chatbots show increasing promise in persuasive communication. Yet their real-world utility remains uncertain, particularly in clinical settings where sustained conversations...
July 14, 2025 at 2:07 PM
📈 Results:
✅ Both AI formats significantly boosted stool-test intent (+13 pts) over expert material.
🩺 For colonoscopy, no AI advantage over expert material.

Surprisingly: a single AI message ≈ the chatbot – despite participants choosing to spend 3.5 minutes longer with the chatbot!
July 14, 2025 at 2:06 PM
🧪 In a randomized trial (n=915), we compared:
1️⃣ No intervention
2️⃣ Expert-written patient materials
3️⃣ Single AI message
4️⃣ AI chatbot using motivational interviewing techniques

Outcome: intent to screen (stool test & colonoscopy) over 12 months.
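For readers curious what an arm-vs-arm comparison on a binary intent outcome looks like, here is a minimal sketch. This is not the paper's actual analysis, and the counts are made up purely for illustration; only the four-arm design and total n=915 come from the thread.

```python
import math

# Hypothetical arm-level counts (illustrative only; not the paper's data).
# "intent" = number of participants reporting intent to screen.
arms = {
    "control":         {"n": 229, "intent": 110},
    "expert_material": {"n": 229, "intent": 120},
    "ai_message":      {"n": 229, "intent": 150},
    "ai_chatbot":      {"n": 228, "intent": 149},
}

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test: difference in intent rates between two arms."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1 - p2, (p1 - p2) / se  # (difference in rates, z statistic)

diff, z = two_prop_z(arms["ai_message"]["intent"], arms["ai_message"]["n"],
                     arms["expert_material"]["intent"], arms["expert_material"]["n"])
print(f"AI message vs expert material: {diff:+.1%} intent, z = {z:.2f}")
```

A pre-registered analysis would typically use regression with covariate adjustment rather than raw proportion tests, but the comparison above conveys what "+13 pts over expert material" means at the arm level.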
July 14, 2025 at 2:06 PM
🩺 Why it matters:
Colorectal cancer is the 2nd leading cause of cancer death in the US – but ~1/3 of eligible adults aren’t screened.

We need scalable, persuasive tools to close this gap. Can AI help?
July 14, 2025 at 2:06 PM
Shout-outs to inspiring work: @gordpennycook.bsky.social @dgrand.bsky.social @tomcostello.bsky.social, @jeffhancock.bsky.social @kobihackenburg.bsky.social on AI persuasion & others pushing this field forward 🙌
April 30, 2025 at 9:40 AM
Take-home: chatbots can nudge short-term intent, but add little over high-quality public-health materials. AI looks best as an add-on, not a replacement, in vaccine communication.
Link here: arxiv.org/abs/2504.20519
Conversations with AI Chatbots Increase Short-Term Vaccine Intentions But Do Not Outperform Standard Public Health Messaging
Large language model (LLM) based chatbots show promise in persuasive communication, but existing studies often rely on weak controls or focus on belief change rather than behavioral intentions or outc...
April 30, 2025 at 9:40 AM
In a 15-day follow-up, gains from the reading arm persisted (+7 pts) while chatbot effects faded to ≈0. We also found no spill-over to flu/COVID or general vaccine hesitancy.
April 30, 2025 at 9:40 AM
In an RCT with 930 parents (US/CA/UK, with kids old enough for the HPV vaccine): chatbots raised vaccine intent vs. no intervention—but neither variant beat simply reading official public-health materials, with the conversational chatbot doing significantly worse.
April 30, 2025 at 9:40 AM
Thanks for compiling this! Would you be able to add our paper on designing mental health chatbots for Indian adolescents? arxiv.org/abs/2503.08562
Exploring Socio-Cultural Challenges and Opportunities in Designing Mental Health Chatbots for Adolescents in India
Mental health challenges among Indian adolescents are shaped by unique cultural and systemic barriers, including high social stigma and limited professional support. Through a mixed-methods study invo...
March 12, 2025 at 4:04 PM