Ferdinand Schlatt
@fschlatt.bsky.social
PhD Student, efficient and effective neural IR models 🧠🔎
Reposted by Ferdinand Schlatt
Happy to share that our paper "The Viability of Crowdsourcing for RAG Evaluation" received the Best Paper Honourable Mention at #SIGIR2025! Very grateful to the community for recognizing our work on improving RAG evaluation.

 📄 webis.de/publications...
July 16, 2025 at 9:04 PM
Want to know how to make bi-encoders more than 3x faster with a new backbone encoder model? Check out our talk on the Token-Independent Text Encoder (TITE) at #SIGIR2025 in the efficiency track. It pools token vectors within the model to improve efficiency: dl.acm.org/doi/10.1145/...
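For the curious, here is a rough sketch of the underlying idea: merge neighboring token vectors between encoder layers so that the upper layers (and the final pooling) operate on far fewer vectors. The layer design and pooling schedule below are illustrative assumptions, not TITE's actual architecture.

```python
import torch
import torch.nn as nn


class PooledEncoderBlock(nn.Module):
    """Transformer layer followed by a pooling step that halves the sequence length.

    Illustrative only: TITE's actual layer design and pooling schedule may differ.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.layer(x)  # (batch, seq_len, dim)
        batch, seq_len, dim = x.shape
        if seq_len % 2:  # pad to an even length before pairwise pooling
            x = torch.cat([x, x.new_zeros(batch, 1, dim)], dim=1)
        # average neighboring pairs of token vectors -> half as many vectors
        return x.reshape(batch, -1, 2, dim).mean(dim=2)


# toy usage: 128 token vectors shrink to 16 over three pooled blocks
encoder = nn.Sequential(*(PooledEncoderBlock(dim=256) for _ in range(3)))
tokens = torch.randn(4, 128, 256)
print(encoder(tokens).shape)  # torch.Size([4, 16, 256])
```

Since self-attention cost grows with the square of the sequence length, shrinking the sequence inside the model is where most of the speed-up would come from.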
July 16, 2025 at 7:28 AM
@mrparryparry.bsky.social presenting our work on reproducing TREC DL 2019 judgements and the implications for evaluating modern ranking models on modern collections. Paper: arxiv.org/abs/2502.20937
Variations in Relevance Judgments and the Shelf Life of Test Collections
The fundamental property of Cranfield-style evaluations, that system rankings are stable even when assessors disagree on individual relevance decisions, was validated on traditional test collections. ...
arxiv.org
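The stability property mentioned above boils down to a simple question: if two assessors produce different qrels, do the resulting system rankings still correlate? A minimal sketch of that check (the systems and scores below are invented purely for illustration; only the method, Kendall's tau over per-system scores, is the standard one):

```python
from scipy.stats import kendalltau

# hypothetical mean nDCG scores for the same systems under two sets of
# relevance judgments (numbers invented purely for illustration)
scores_assessor_a = {"bm25": 0.48, "monot5": 0.71, "colbert": 0.69, "splade": 0.66}
scores_assessor_b = {"bm25": 0.44, "monot5": 0.68, "colbert": 0.70, "splade": 0.63}

systems = sorted(scores_assessor_a)
tau, p_value = kendalltau(
    [scores_assessor_a[s] for s in systems],
    [scores_assessor_b[s] for s in systems],
)
# high tau = the system ranking is stable despite disagreement on individual judgments
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```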
July 14, 2025 at 2:49 PM
Thank you Carlos for the shout-out to Lightning IR in the LSR tutorial at #SIGIR2025

If you want to fine-tune your own LSR models, check out our framework at github.com/webis-de/lig...
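In case learned sparse retrieval (LSR) is new to you: these models score queries and documents as sparse, vocabulary-sized weight vectors. A rough SPLADE-style sketch of that scoring (plain Hugging Face code, not Lightning IR's API; the backbone and texts are placeholders, and a trained LSR checkpoint would replace the base model):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# illustrative backbone; a fine-tuned LSR checkpoint would go here
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)


def sparse_encode(text: str) -> torch.Tensor:
    """SPLADE-style encoding: max over tokens of log(1 + ReLU(MLM logits))."""
    inputs = tokenizer(text, return_tensors="pt")
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)
    weights = torch.log1p(torch.relu(logits))  # sparsify and dampen term weights
    weights = weights * inputs["attention_mask"].unsqueeze(-1)  # ignore padding
    return weights.max(dim=1).values.squeeze(0)  # (vocab_size,) sparse vector


query_vec = sparse_encode("how do learned sparse retrieval models work")
doc_vec = sparse_encode(
    "learned sparse retrieval expands queries and documents into weighted vocabulary terms"
)
print(f"score: {(query_vec @ doc_vec).item():.2f}")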
July 13, 2025 at 2:42 PM
Honored to receive the Best Short Paper Award and a Best Paper Honourable Mention at #ECIR2025. Thank you to all co-authors @maik-froebe.bsky.social, @hscells.bsky.social, Shengyao Zhuang, @bevankoopman.bsky.social, Guido Zuccon, Benno Stein, @martin-potthast.com, @matthias-hagen.bsky.social 🥳
April 9, 2025 at 12:37 PM
Reposted by Ferdinand Schlatt
Now we have @fschlatt.bsky.social on the #ECIR2025 stage presenting the research on the Set-Encoder.

The paper is online at: webis.de/publications...
April 9, 2025 at 8:00 AM
Reposted by Ferdinand Schlatt
Great post that captures the tension between classic ML approaches and modern deep learning while acknowledging the nuances of both.

“Working with LLMs doesn’t feel the same. It’s like fitting pieces into a pre-defined puzzle instead of building the puzzle itself.”

www.reddit.com/r/MachineLea...
December 6, 2024 at 2:13 AM
Reposted by Ferdinand Schlatt
The #TREC2024 conference just started. Turns out that BM25 is turning 30 🥳 #TREC #TREC24
November 18, 2024 at 3:04 PM