Indira Sen
@indiiigo.bsky.social
Junior Faculty at the University of Mannheim || Computational Social Science ∩ Natural Language Processing || Formerly at: RWTH, GESIS || she/her
indiiigo.github.io/
Pinned
Do LLMs represent the people they're supposed to simulate or provide personalized assistance to?

We review the current literature in our #ACL2025 Findings paper, investigating what researchers conclude about the demographic representativeness of LLMs:
osf.io/preprints/so...

1/
Reposted by Indira Sen
We're excited about the next edition of our Summer School for Women* in Political Methodology, this time organized by the 💫 local team in Mannheim!
🚨 Join us for the next edition of the Summer School for Women* in Political Methodology in Mannheim 🚨

7 days of hands-on advanced methods + networking for PhDs, postdocs & early-career researchers.
Free of charge (limited travel support).

Deadline 1 March 2026: summerschoolwpm.org
#methodsky #polisky
February 2, 2026 at 3:19 PM
Reposted by Indira Sen
#cometoGESIS #workwithus #Gastforschungsaufenthalt #researchvisit

We invite Ph.D. students and early-career postdocs to come to GESIS. Visiting researchers in the Junior Research Program are involved in our research, publish with GESIS staff, and develop research ideas and joint projects.
February 2, 2026 at 9:00 AM
Reposted by Indira Sen
Paper accepted to #EACL2026 main conference 🎉
@taniseceron.bsky.social, Sebastian Padó and I test multilingual LLMs before and after English-only fine-tuning and find strong cross-lingual political opinion transfer across five Western languages.

www.arxiv.org/abs/2508.05553
January 29, 2026 at 9:09 AM
Reposted by Indira Sen
Demographic cues (e.g., names, dialect) are widely used to study how LLM behavior may change depending on user demographics. Such cues are often assumed to be interchangeable.

🚨 We show they are not: different cues yield different model behavior for the same group and different conclusions on LLM bias. 🧵👇
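To make the point concrete, here is a minimal sketch of a cue-based audit, with hypothetical cue wordings and a stubbed-out `query_model` helper (none of this is the paper's actual material):

```python
def query_model(prompt: str) -> str:
    # Stub for whatever LLM API is being audited
    # (e.g., a chat-completion call).
    return "<model response>"

# Three common operationalizations of the *same* demographic group.
# If cues were interchangeable, an audit should reach the same
# conclusion about bias regardless of which one it uses.
CUES = {
    "name":     "Hi, I'm DeShawn. {q}",
    "dialect":  "I ain't heard nothin' back yet. {q}",
    "explicit": "As a Black American, {q}",
}

question = "do you think my loan application will be approved?"

for cue_type, template in CUES.items():
    response = query_model(template.format(q=question))
    print(f"{cue_type:>8}: {response}")
```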
January 27, 2026 at 1:07 PM
Reposted by Indira Sen
I had the absolute pleasure to visit @craicexeter.bsky.social, where I laid out an argument for how critical & computational scholars should lead the conversation on AI. We need to expand research on harms, interrogate corporate hype, and support people's critical understanding of these technologies.
January 22, 2026 at 4:32 PM
Reposted by Indira Sen
I missed this last month from @tilmanbayer.bsky.social: "AI finds errors in 90% of October's Featured Articles". Great example of human-in-the-loop LLM use for verifying Wikipedia articles. en.wikipedia.org/wiki/Wikiped...
Wikipedia:Wikipedia Signpost/2025-12-01/Opinion - Wikipedia
January 10, 2026 at 6:25 PM
Reposted by Indira Sen
Many think LLM-simulated participants can transform behavioral science. But there's been a lack of accessible discussion of what it means to validate LLMs for behavioral scientists. Under what conditions can we trust LLMs to learn about human parameters? Our paper maps the validation landscape.
1/
December 18, 2025 at 5:53 PM
Reposted by Indira Sen
Most LLM evals use API calls or offline inference, testing models in a memory-less silo. Our new Patterns paper shows this misses how LLMs actually behave in real user interfaces, where personalization and interaction history shape responses: arxiv.org/abs/2509.19364
December 12, 2025 at 8:42 PM
Reposted by Indira Sen
It's out!!

www.science.org/doi/10.1126/...

Big thank you to my coauthors @small-schulz.bsky.social and @lorenzspreen.bsky.social, and to all participants who discussed 20 political issues over 4 weeks, in 6 subreddits across 3 experimental conditions, and let us observe.
December 11, 2025 at 10:34 AM
Reposted by Indira Sen
The Center for Information Technology Policy at Princeton invites applications for a Postdoctoral Fellow to work with Andy Guess (Politics/SPIA), Brandon Stewart (Sociology), and me (CS).

puwebp.princeton.edu/AcadHire/app...

Please apply before Sunday, the 13th of December!
December 9, 2025 at 8:51 PM
Reposted by Indira Sen
New paper in Science:

In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.

🧵
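As a toy illustration of the intervention (the scoring field and penalty scheme are my assumptions, not the study's pipeline): each feed item gets a classifier score for antidemocratic attitudes and partisan animosity, and high-scoring items are pushed down before the feed is served.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    text: str
    base_rank: float   # the platform's original ranking score
    aapa_score: float  # classifier score in [0, 1] for antidemocratic
                       # attitudes & partisan animosity

def rerank(feed: list[FeedItem], penalty: float = 1.0) -> list[FeedItem]:
    # Downrank items in proportion to their AAPA score;
    # penalty = 0 recovers the original feed (a natural control arm).
    return sorted(feed,
                  key=lambda item: item.base_rank - penalty * item.aapa_score,
                  reverse=True)

feed = [FeedItem("post A", 0.9, 0.8), FeedItem("post B", 0.7, 0.1)]
print([item.text for item in rerank(feed)])  # ['post B', 'post A']
```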
December 1, 2025 at 7:59 AM
Reposted by Indira Sen
Preprint alert! 🥳

How are social bias and CSS interconnected? 🤔

@aytalina.bsky.social, @janabernhard.bsky.social, @valeriehase.bsky.social, and I argue that social bias shapes CSS as a field and as a methodology. Progress in CSS depends on engaging with both dimensions! osf.io/preprints/so...
November 28, 2025 at 9:08 AM
Reposted by Indira Sen
🌍 Are you new to @icwsm.bsky.social or JQD:DM and based in a low- or middle-income country?

💡 Submit a project idea, get matched with a mentor, present virtually at ICWSM'26, and prepare a submission for Sept 2026!
📢 Call icwsm.org/2026/submit....
🚀 Apply by Jan 15 forms.gle/A9GkJboP7qi3...
November 20, 2025 at 7:30 PM
Reposted by Indira Sen
🚨 I'm recruiting PhD students in Computer Science at Johns Hopkins University for Fall 2026. If you're interested in AI, HCI, and designing better online platforms and experiences, apply to work with me!
More info: piccardi.me
November 13, 2025 at 3:52 PM
Reposted by Indira Sen
📢 New insights on #GenAI interviewing agents asking sensitive open questions compared to a text-based web survey.

Answers to the male agent include more topics, but we find no evidence of social desirability bias.

👉 New #OpenAccess paper with @jkhoehne.bsky.social #cneuert in #IJMR.

🌐 doi.org/10.1177/1470...
November 13, 2025 at 8:01 AM
Reposted by Indira Sen
⏳ Only 5 days left to apply!

Please note the updated application link (due to a recent university webpage update):

👉 PhD Candidate in Emotionally and Socially Aware Natural Language Processing
careers.universiteitleiden.nl/job/PhD-Cand...
November 12, 2025 at 10:05 AM
Reposted by Indira Sen
It's the season for PhD apps!! 🥧 🦃 ☃️ ❄️

Apply to Wisconsin CS to research
- Societal impact of AI
- NLP ←→ CSS and cultural analytics
- Computational sociolinguistics
- Human-AI interaction
- Culturally competent and inclusive NLP
with me!

lucy3.github.io/prospective-...
November 11, 2025 at 10:32 PM
Reposted by Indira Sen
Join our CSS department @gesis.org! Postdoc/senior researcher position, tenure track! All info at: www.gesis.org/institut/kar...
November 11, 2025 at 2:48 PM
Reposted by Indira Sen
Misinformation research has a causality problem: lab experiments are limited; observational studies are confounded.

We used causal inference on 9.9M tweets, quantifying effects in the wild while blocking backdoor paths.

Does misinfo get higher engagement? Are the discussions that follow more emotional? 🧵
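For readers unfamiliar with the jargon: "blocking backdoor paths" refers to the standard backdoor adjustment, stated minimally below with $X$ as misinformation status, $Y$ as engagement, and $Z$ as a set of observed confounders (the roles assigned to $X$, $Y$, $Z$ here are generic illustrations, not the paper's exact setup).

```latex
% If Z blocks every backdoor path from X to Y, the causal effect of X
% on Y is identified from observational data via the backdoor adjustment:
P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  = \sum_{z} P(Y \mid X = x,\, Z = z)\, P(Z = z)
```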
November 11, 2025 at 9:59 AM
Reposted by Indira Sen
🚨 I'm recruiting a fully funded EPSRC PhD student (start 2026/27) to work with me and Mohammad Taher Pilehvar on multilingual misinformation and online harms in #NLP. (The position is open to UK and international students.)
Details and contact information 👇:
www.findaphd.com/phds/project...
Designing Reliable NLP Systems for Cross-Lingual Information Environments at Cardiff University on FindAPhD.com
November 9, 2025 at 1:37 PM
Reposted by Indira Sen
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think. 🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
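A crude way to instantiate the underlying idea (a stand-in, not the paper's actual test): train a simple detector on human vs. LLM text; performance far above chance means the LLM output is systematically distinguishable from human writing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpora; a real test would use matched human posts and
# LLM-generated replies to the same prompts, evaluated on held-out data.
human = ["honestly no clue, ask me tomorrow", "lol that's rough, good luck"]
llm = ["That is a great question! There are several factors to consider.",
       "I understand your concern. Here are some helpful suggestions."]

texts, labels = human + llm, [0] * len(human) + [1] * len(llm)

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)

# P(LLM) for unseen text: probabilities far from 0.5 on held-out data
# would indicate the two distributions are easy to tell apart.
probe = ["idk man it happens", "I appreciate you sharing this with me."]
print(clf.predict_proba(vec.transform(probe))[:, 1])
```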
November 7, 2025 at 11:13 AM
Reposted by Indira Sen
🚨 Content alert 🚨

New recording and workshop materials published!

➡️ Large Language Models for Social Research: Potentials and Challenges
👤 @indiiigo.bsky.social (University of Mannheim)

📺 youtu.be/p5wPJHK-74M
🗒️ github.com/SocialScienc...
November 6, 2025 at 2:20 PM
as a reviewer, it should be acceptable to chase ACs and remind them to write their meta-reviews already, because I want to know which reviewer they're siding with, goddammit...
November 5, 2025 at 9:40 AM
Reposted by Indira Sen
👋🏼 I'm at #EMNLP2025 presenting "The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for LLMs"

🕑 Thu. Nov 6, 12:30 - 13:30
📍 Findings Session 2, Hall C3
🚨New paper alert🚨

🤔 Ever wondered how the way you write a persona prompt affects how well an LLM simulates people?

In our #EMNLP2025 paper, we find that using interview-style persona prompts makes LLM social simulations less biased and more aligned with human opinions.
🧵1/7
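To make "interview-style" concrete, here is a hypothetical contrast between two ways of handing the same persona to an LLM (the wordings are illustrative, not the paper's prompt templates):

```python
persona = {"age": 45, "gender": "woman",
           "education": "a high school diploma", "region": "rural Bavaria"}

# (a) Terse attribute listing, the common default.
plain_prompt = (
    f"You are a {persona['age']}-year-old {persona['gender']} with "
    f"{persona['education']}, from {persona['region']}. "
    "Answer the survey question below."
)

# (b) Interview-style: the same attributes surfaced as a Q&A transcript,
# the format the paper finds gives less biased, better-aligned simulations.
interview_prompt = (
    "Interviewer: How old are you?\n"
    f"Respondent: I'm {persona['age']}.\n"
    "Interviewer: How do you describe your gender?\n"
    f"Respondent: I'm a {persona['gender']}.\n"
    "Interviewer: What's your highest level of education?\n"
    f"Respondent: I have {persona['education']}.\n"
    "Interviewer: And where are you from?\n"
    f"Respondent: I'm from {persona['region']}.\n"
    "Interviewer: Thanks. Please answer the survey question below."
)
```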
November 4, 2025 at 4:39 PM