Heidelberg University NLP Group
@hd-nlp.bsky.social
Welcome to the Natural Language Processing Group at the Department of Computational Linguistics at @uniheidelberg.bsky.social,
led by @anettemfrank.bsky.social. #NLProc #ML
Reposted by Heidelberg University NLP Group
I am honored to receive the 2025 #GSCL Best Thesis Award at #KONVENS in Hildesheim for my Master’s thesis, which investigates multilinguality and develops language models for Ancient Greek and Latin. Thank you to my mentors and collaborators. I look forward to what comes next.
September 14, 2025 at 9:13 AM
Frederick's talk is coming up today! Learn how MLLMs generalize across languages!
Looking at Bruegel's Tower of Babel in Vienna makes you wonder: how can multilingual language models overcome language barriers? Find out tomorrow!
📍 Level 1 (ironic, right?), Room 1.15-1
🕐 2 PM
#ACL2025NLP
July 28, 2025 at 7:39 AM
Reposted by Heidelberg University NLP Group
How and when do multilingual LMs achieve cross-lingual generalization during pre-training? And why do later, supposedly more advanced, checkpoints lose some language identification abilities along the way? Our #ACL2025 paper investigates.
June 7, 2025 at 10:12 AM
Reposted by Heidelberg University NLP Group
What did Aristotle actually write? We think we know, but reality is messy. As ancient Greek texts traveled through 2,500 years of history, they were copied and recopied countless times, accumulating subtle errors with each generation. Our new #NAACL2025 paper tackles this fascinating challenge.
May 1, 2025 at 11:29 AM
Reposted by Heidelberg University NLP Group
Debates aren’t always black and white: opposing sides often share common ground. These partial agreements are key to meaningful compromises.
Presenting “Perspectivized Stance Vectors” (PSVs), an interpretable method for identifying nuanced (dis)agreements.

📜 arxiv.org/abs/2502.09644
🧵 More details below
February 21, 2025 at 4:08 PM
🎉 Exciting news from our team!

The final paper of @aicoffeebreak.bsky.social's PhD journey is accepted at #ICLR2025! 🙌 🖼️📄

Check out her original post below for more details on Vision & Language Models (VLMs), their modality use and their self-consistency 🔥
The last paper of my PhD is accepted at ICLR 2025! 🙌 🎊
We investigate the reliance of modern Vision & Language Models (VLMs) on image🖼️ vs. text📄 inputs when generating answers vs. explanations, revealing fascinating insights into their modality use and self-consistency. Takeaways: 👇
Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?
Vision and language model (VLM) decoders are currently the best-performing architectures on multimodal tasks. Next to answers, they are able to produce natural language explanations, either in post-ho...
arxiv.org
January 27, 2025 at 1:04 PM