Indira Sen
@indiiigo.bsky.social
Junior Faculty at the University of Mannheim || Computational Social Science ∩ Natural Language Processing || Formerly at: RWTH, GESIS || she/her
indiiigo.github.io/
Pinned
Do LLMs represent the people they're supposed to simulate or provide personalized assistance to?

We review the current literature in our #ACL2025 Findings paper, investigating what researchers conclude about the demographic representativeness of LLMs:
osf.io/preprints/so...

1/
Reposted by Indira Sen
Join our CSS department @gesis.org! Postdoc/senior researcher position, tenure track! All info at: www.gesis.org/institut/kar...
Details
GESIS Leibniz Institut für Sozialwissenschaften
www.gesis.org
November 11, 2025 at 2:48 PM
Reposted by Indira Sen
Misinformation research has a causality problem: lab experiments are limited; observational studies are confounded.

We used causal inference on 9.9M tweets, quantifying effects in the wild while blocking backdoor paths.

Does misinfo get higher engagement? Are the discussions that follow more emotional? 🧵
OSF
osf.io
November 11, 2025 at 9:59 AM
Reposted by Indira Sen
🚨I'm recruiting a fully funded EPSRC PhD student (start 2026/27) to work with me and
Mohammad Taher Pilehvar on multilingual misinformation and online harms in #NLP.
(the position is open to UK and international students.)
Details and contact information 👇:
www.findaphd.com/phds/project...
Designing Reliable NLP Systems for Cross-Lingual Information Environments at Cardiff University on FindAPhD.com
www.findaphd.com
November 9, 2025 at 1:37 PM
Reposted by Indira Sen
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 7, 2025 at 11:13 AM
Reposted by Indira Sen
🚨 Content alert 🚨

New recording and workshop materials published!

➡️ Large Language Models for Social Research: Potentials and Challenges
👤 @indiiigo.bsky.social (University of Mannheim)

📺 youtu.be/p5wPJHK-74M
🗒️ github.com/SocialScienc...
Large Language Models for Social Research: Potentials and Challenges
YouTube video by MZES Methods Bites
youtu.be
November 6, 2025 at 2:20 PM
As a reviewer, it should be acceptable to chase ACs and remind them to write their meta-reviews already, because I want to know which reviewer they're siding with, goddamnit....
November 5, 2025 at 9:40 AM
Reposted by Indira Sen
👋🏼 I'm at #EMNLP2025 presenting "The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for LLMs"

🕑 Thu. Nov 6, 12:30 - 13:30
📍 Findings Session 2, Hall C3
🚨New paper alert🚨

🤔 Ever wondered how the way you write a persona prompt affects how well an LLM simulates people?

In our #EMNLP2025 paper, we find that using interview-style persona prompts makes LLM social simulations less biased and more aligned with human opinions.
🧵1/7
November 4, 2025 at 4:39 PM
Reposted by Indira Sen
New opinion paper out with Esther Ploeger (Aalborg University): We Need to Measure Data Diversity in NLP — Better and Broader at #EMNLP2025 (main) aclanthology.org/2025.emnlp-m...
We Need to Measure Data Diversity in NLP — Better and Broader
Dong Nguyen, Esther Ploeger. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025.
aclanthology.org
November 4, 2025 at 3:43 PM
Reposted by Indira Sen
Honored to be invited! Looking forward to the conference to learn about new directions in complex networks research and to share my work on Network Inequality and Network Fairness. See you in Zaragoza!
📢 Keynotes unlocked: 10/12. Thrilled to have Lisette Espín-Noboa @lespin.bsky.social as a keynote speaker in Complenet’26! Join us to learn more about #socialinequality and #networks!
🌐 Info & registration: complenet.weeblysite.com
🚨 Call for contributions open – submit by Nov 15, 2025!
November 3, 2025 at 8:36 AM
Social simulations with LLMs can have many interesting applications, but LLMs suffer from biases, e.g., stereotypical representations of the groups being simulated.

In our paper led by @marlutz.bsky.social, we show that some of these biases can be mitigated via the structure of persona prompts! 🧵⬇️
🚨New paper alert🚨

🤔 Ever wondered how the way you write a persona prompt affects how well an LLM simulates people?

In our #EMNLP2025 paper, we find that using interview-style persona prompts makes LLM social simulations less biased and more aligned with human opinions.
🧵1/7
November 2, 2025 at 4:36 PM
Reposted by Indira Sen
There’s plenty of evidence for political bias in LLMs, but very few evals reflect realistic LLM use cases — which is where bias actually matters.

IssueBench, our attempt to fix this, is accepted at TACL, and I will be at #EMNLP2025 next week to talk about it!

New results 🧵
Are LLMs biased when they write about political issues?

We just released IssueBench – the largest, most realistic benchmark of its kind – to answer this question more robustly than ever before.

Long 🧵with spicy results 👇
October 29, 2025 at 4:12 PM
Reposted by Indira Sen
*JOB* As part of the @assured.bsky.social project, we are searching for a person to work in our Secure Data Center team at @gesis.org to create safe researcher trainings. www.gesis.org/en/institute...
The position is open until filled, so do not hesitate to reach out directly if you are interested!
Details
GESIS Leibniz Institut für Sozialwissenschaften
www.gesis.org
October 21, 2025 at 9:54 AM
Reposted by Indira Sen
Thrilled to talk about how seemingly small decisions in silicon sampling can have a large impact on simulated survey responses 👀 Join us on Oct 29th! 👈
🚨 Upcoming #CS3Meeting 🚨

@wanlo.bsky.social talks about analytic flexibility in silicon samples on October 29 (3:15 to 4:00 PM CET).

Great opportunity to gain novel insights into how survey responses can be generated with #LLMs.

Sign up now: ww3.unipark.de/uc/cs3_meeti...
October 21, 2025 at 7:42 AM
Reposted by Indira Sen
Wikipedia is seeing a significant decline in human traffic because more people are getting the information that's on Wikipedia via generative AI chatbots trained on its articles and search engines that summarize them, without ever clicking through to the site

www.404media.co/wikipedia-sa...
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
www.404media.co
October 17, 2025 at 12:45 PM
Reposted by Indira Sen
Join us as postdoc at the Inequality Discourse Observatory at the University of Konstanz: stellen.uni-konstanz.de/jobposting/7...
We will do epic research between Linguistics and Computational Social Science at the Cluster of Politics of Inequality. Feel free to DM if you have any questions.
One postdoctoral Research Position
Deadline: November 15th, 2025
stellen.uni-konstanz.de
October 13, 2025 at 3:06 PM
Come join next Wednesday if you want to rant about society's love-hate relationship with LLMs!
🚨 Upcoming: "Large Language Models for Social Research: Potentials and Challenges"

👤 Indira Sen (University of Mannheim)

🗓️ Wed, October 22, 13:45-15:15 CET

📺 Register for the live stream: us02web.zoom.us/meeting/regi...

🔗 socialsciencedatalab.mzes.uni-mannheim.de/page/events/
October 16, 2025 at 9:32 AM
Reposted by Indira Sen
🚨 Are you looking for a PhD in #NLProc dealing with #LLMs?
🎉 Good news: I am hiring! 🎉
The position is part of the “Contested Climate Futures" project. 🌱🌍 You will focus on developing next-generation AI methods🤖 to analyze climate-related concepts in content—including texts, images, and videos.
September 24, 2025 at 7:34 AM
Reposted by Indira Sen
We are hiring multiple PhD and postdocs for two newly funded projects at the intersection of mental health and political polarization at the CS Dept at Aalto, Finland. The PIs are Juhi Kulshrestha, Talayeh Aledavood, and Mikko Kivelä.

Full call text and link to apply: www.aalto.fi/en/open-posi...
September 17, 2025 at 10:22 AM
Reposted by Indira Sen
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
September 12, 2025 at 10:33 AM
Reposted by Indira Sen
How can an imitative model like an LLM outperform the experts it is trained on? Our new COLM paper outlines three types of transcendence and shows that each one relies on a different aspect of data diversity. arxiv.org/abs/2508.17669
August 29, 2025 at 9:46 PM
Reposted by Indira Sen
Come join and organise the workshop with us!
Excited for WOAH’s 10th anniversary? 😍

We're launching an open call for new organisers!
Our goal: diversify the team and bring in fresh perspectives.

🗓️ Apply by September 12
🔗 forms.gle/aiFs35vwDXnt...
August 25, 2025 at 1:17 PM
Reposted by Indira Sen
If you want to nominate yourself to be the organizer of the next Argument Mining workshop @argminingorg.bsky.social‬, fill in this form: docs.google.com/forms/d/e/1F... Deadline: Aug 22nd 13.00 CEST!
ArgMining 2026 Workshop Organising Committee Application
docs.google.com
August 20, 2025 at 12:14 PM
Reposted by Indira Sen
New publication, out in Political Analysis:

There is an increasing array of tools to measure facets of morality in political language. But while they ostensibly measure the same concept, do they actually?

@fhopp.bsky.social and I set out to see what happens.
Moral Foundation Measurements Fail to Converge on Multilingual Party Manifestos | Political Analysis | Cambridge Core
www.cambridge.org
August 19, 2025 at 7:52 AM
Reposted by Indira Sen
The Call for #EMNLP2025 @emnlpmeeting.bsky.social student volunteers is out:
2025.emnlp.org/calls/volunt...
Please fill out the form by 20 Sep 2025: forms.gle/qfTkVGyDitXi...
For questions, you can contact emnlp2025-student-volunteer-chairs [at] googlegroups [dot] com
Call for Volunteers
Official website for the 2025 Conference on Empirical Methods in Natural Language Processing
2025.emnlp.org
August 15, 2025 at 4:40 PM