susanne förster
sufo.bsky.social
Part of Agentic Media @mediaofcoop | Former Coordinator and Editor @HKW_Berlin | she/her
Going to present on the notion of the task as it led up to the Common Task Framework, the origin of today’s benchmarks!
If you are attending the @shothisttech.bsky.social annual meeting in Luxembourg, please come say hello at the series of panels “Technologies of Pattern Recognition” organized by @vgoujon.bsky.social and me.
October 10, 2025 at 5:16 PM
Reposted by susanne förster
Today seems like a good day for a shameless self-plug of our hallucination article (with @sufo.bsky.social ), where we explain how the term is used in LLM documentation to relieve creators of the responsibility of spreading misinformation, lies, and falsities
link.springer.com/article/10.1...
August 8, 2025 at 8:59 AM
Reposted by susanne förster
🗓️ Join us next Tuesday for a talk by Anna Schjøtt Hansen and Dieuwertje Luitse on "The Politics of Machine Learning Evaluation: From Present to Future"!

A presentation followed by a collective discussion with @assiaw.bsky.social, @theolenoir.bsky.social, @alexcampolo.bsky.social and myself ⤵️
On Tuesday, June 3, discover how the evaluation of AI models is evolving and raising a major political issue.

A presentation by @annaschjoett.bsky.social and @dluitse.bsky.social, followed by a discussion on the launch of the collection “The Politics of ML Evaluation” in Digital Society.

More info 🔽
The Politics of Machine Learning Evaluation: From Present to Future | médialab Sciences Po
The médialab welcomes Anna Schjøtt Hansen and Dieuwertje Luitse for the next seminar on June 3, 2025. They will give an opening presentation on the politics of evaluation in machine lear...
medialab.sciencespo.fr
May 27, 2025 at 4:40 PM
Our article on the metaphor of hallucination and its use by big tech companies in relation to supposedly erroneous outputs from LLMs has finally been published :) @skoopit.bsky.social

link.springer.com/article/10.1...
Between fact and fairy: tracing the hallucination metaphor in AI discourse - AI & SOCIETY
Large and powerful language models such as OpenAI’s GPT model family, Google’s LaMDA and BERT or Meta’s LlaMA are integral to many applications, such as translation, summarization or language generati...
link.springer.com
May 26, 2025 at 1:01 PM
Reposted by susanne förster
⚡Thrilled to see the first papers of our Topical Collection published in Digital Society ✨Thank you @alexcampolo.bsky.social and Allison Jerzak for your fantastic contributions!

👀 For more info and future publications, keep an eye on link.springer.com/collections/...
May 5, 2025 at 8:09 AM
This will soon be out in AI & Society!
Yes!
We have an article coming out soon on the term hallucination in LLM documentation. One of its claims is that the term itself serves to relieve creators of the moral burden and responsibility of spreading lies, because "hallucination" is not morally loaded: it "happens unwillingly"
April 8, 2025 at 4:23 PM