Esra Dönmez
@esradonmez.bsky.social
PhD student at the IMS (Uni Stuttgart)
Reposted by Esra Dönmez
I'm recruiting multiple PhD students for Fall 2026 in Computer Science at @hopkinsengineer.bsky.social 🍂

Apply to work on AI for social sciences/human behavior, social NLP, and LLMs for real-world applied domains you're passionate about!

Learn more at kristinagligoric.com & help spread the word!
November 5, 2025 at 2:56 PM
Reposted by Esra Dönmez
Does AI argue differently than humans? 🤖
The results of our #EMNLP2025 paper suggest so!

@esradonmez.bsky.social and I will present this work in Poster Session 1, 11:00-12:30 on 5th Nov, Hall C3

🧵👇

aclanthology.org/2025.emnlp-m...
November 3, 2025 at 11:55 PM
Reposted by Esra Dönmez
Attending COLM next week in Montreal? 🇨🇦 Join us on Thursday for a 2-part social! ✨ 5:30-6:30 at the conference venue and 7:00-10:00 offsite! 🌈 Sign up here: forms.gle/oiMK3TLP8ZZc...
October 1, 2025 at 2:40 PM
Reposted by Esra Dönmez
🚨 Are you looking for a PhD in #NLProc dealing with #LLMs?
🎉 Good news: I am hiring! 🎉
The position is part of the “Contested Climate Futures” project. 🌱🌍 You will focus on developing next-generation AI methods 🤖 to analyze climate-related concepts in content, including texts, images, and videos.
September 24, 2025 at 7:34 AM
Reposted by Esra Dönmez
Crowdsourcing datasets is very common, but how much of that data comes from LLMs these days? How can we prevent it? Much of this is unclear. This is why we're conducting a survey (~10 min) to gather community experiences, challenges, and solutions.
Share your thoughts 👉 https://tinyurl.com/39ab55wf
ugent.qualtrics.com
September 5, 2025 at 11:32 AM
Reposted by Esra Dönmez
- Fully funded PhD fellowship on Explainable NLU: apply by 31 October 2025, start in Spring 2026: candidate.hr-manager.net/ApplicationI...

- Open-topic PhD positions: express your interest through ELLIS by 31 October 2025, start in Autumn 2026: ellis.eu/news/ellis-p...

#NLProc #XAI
PhD fellowship in Explainable Natural Language Understanding, Department of Computer Science, Faculty of Science, University of Copenhagen
The Natural Language Processing Section at the Department of Computer Science, Faculty of Science at the University of Copenhagen invites applicants for a PhD f
candidate.hr-manager.net
September 1, 2025 at 2:20 PM
Reposted by Esra Dönmez
Now that school is starting for lots of folks, it's time for a new release of Speech and Language Processing! Jim and I added all sorts of material for the August 2025 release! With slides to match! Check it out here: web.stanford.edu/~jurafsky/sl...
Speech and Language Processing
web.stanford.edu
August 24, 2025 at 7:28 PM
Reposted by Esra Dönmez
Identity-Aware AI workshop at #ECAI2025, in beautiful Bologna! Submit by Aug 22: identity-aware-ai.github.io Organized by:
@pranav-nlp.bsky.social Soda Marem Lo, Neele Falk, @gingerinai.bsky.social @davidjurgens.bsky.social @a-lauscher.bsky.social and myself!
Wondering what makes each of us unique and how AI should handle human diversity? 🤔

We're organizing Identity-Aware AI workshop at #ECAI2025 Bologna on Oct 25.

Deadline: Aug 22
Website: identity-aware-ai.github.io
August 7, 2025 at 3:29 PM
Reposted by Esra Dönmez
👏 Upcoming workshop: Perspectivist Approaches to NLP @EMNLP 2025
Focusing on non-aggregated datasets and multi-perspective modeling, with sessions on labeling, modeling, evaluation, and applications.
nlperspectives.di.unito.it
NLPerspectives – Perspectivist Approaches to NLP
nlperspectives.di.unito.it
July 31, 2025 at 2:50 PM
Reposted by Esra Dönmez
I'm sadly not at #IC2S2 😭, but I will be at #ACL2025 in Vienna ☕️ next week!!

Please spread the word that I'm recruiting prospective PhD students: lucy3.notion.site/for-prospect...
For Prospective PhD Students
I’m recruiting PhD students who will begin their degree in Fall 2026! I am an incoming assistant professor at Wisconsin-Madison’s Computer Sciences department, and my research focuses on natural langu...
lucy3.notion.site
July 22, 2025 at 1:09 AM
Reposted by Esra Dönmez
Pls Repost! 📢 Unsure about NLP Ethics? Hoping to attend #ACL2025? How about applying for a virtual registration subsidy? All 3? We hear you! ✨ Tutorial: Navigating Ethical Challenges in NLP ✨ 🗓️ Sun, Jul 27, 14:00–17:30 CEST 📍 Hall M, Vienna + 🌍 Online (hybrid!) ethics.aclweb.org/tutorials/AC...
ACL 2025 Ethics Tutorial: Navigating Ethical Challenges in NLP - ACL Ethics
ethics.aclweb.org
May 28, 2025 at 9:02 AM
Reposted by Esra Dönmez
🚨🚨 Studying the INTERPLAY of LMs' internals and behavior?

Join our @colmweb.org workshop on comprehensively evaluating LMs.

Deadline: June 23rd
CfP: shorturl.at/sBomu
Page: shorturl.at/FT3fX

We're excited to see your insights and methods!!

See you in Montréal 🇨🇦 #nlproc #interpretability
May 16, 2025 at 9:27 AM
Reposted by Esra Dönmez
#CallforProposals for the 1st funding period in the newly established #PriorityProgramme "Robust Assessment & Safe Applicability of Language Modelling", addressing researchers in the interdisciplinary field of the cognitive and computational language sciences. More details ⬇️
www.dfg.de/en/news/news...
Priority Programme “Robust Assessment & Safe Applicability of Language Modelling: Foundations for a New Field of Language Science & Technology (LaSTing)” (SPP 2556)
www.dfg.de
April 30, 2025 at 8:57 AM
Reposted by Esra Dönmez
How does the public conceptualize AI? Rather than self-reported measures, we use metaphors to understand the nuance and complexity of people’s mental models. In our #FAccT2025 paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.
May 2, 2025 at 1:19 AM
Reposted by Esra Dönmez
👉 👈 Meta announced that they're changing their models to reduce "left-leaning [political] bias"--that means leaning them to the political "right". Lots to unpack about what that might mean. So I ran a quick "shot in the dark" study...and found a *political right* bias in Meta models. Some notes.🧵
April 22, 2025 at 12:39 AM