Paul Lerner
@lernerp.bsky.social
Postdoc @mlia_isir@sciences.re (Sorbonne Université, CNRS, ISIR)
/ Teacher @ aivancity
/ Teaching Assistant @ Sorbonne Université

https://paullerner.github.io/
here's what one example of the dataset looks like; there are 72,234 just like this one (I miss my multimodal days, when there were pictures in my papers)
October 23, 2025 at 4:09 PM
🤔 ppllm implements windowed PPL, which makes it possible to compute the PPL of arbitrarily long texts.
It aims to be feature-complete for information-theoretic metrics, including perplexity (PPL), surprisal, and bits per character (BPC), as well as their word-level counterparts.
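For the curious, here is a minimal sketch of what sliding-window PPL does, assuming a Hugging Face causal LM; `windowed_ppl`, the model choice, and the default window sizes are my own illustration, not ppllm's actual API:

```python
# Minimal sketch of sliding-window perplexity (NOT ppllm's API).
# Each window re-reads some left context but only counts the loss
# on the tokens that haven't been scored yet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                # assumption: any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def windowed_ppl(text: str, max_len: int = 1024, stride: int = 512) -> float:
    ids = tok(text, return_tensors="pt").input_ids         # shape (1, seq_len)
    seq_len = ids.size(1)
    nll_sum, n_tokens, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_len, seq_len)
        window = ids[:, begin:end]
        targets = window.clone()
        targets[:, : prev_end - begin] = -100              # mask the re-read context
        with torch.no_grad():
            loss = model(window, labels=targets).loss      # mean NLL over unmasked targets
        n_new = (targets[:, 1:] != -100).sum().item()      # tokens scored (causal shift)
        nll_sum += loss.item() * n_new
        n_tokens += n_new
        prev_end = end
        if end == seq_len:
            break
    return float(torch.exp(torch.tensor(nll_sum / n_tokens)))  # PPL = exp(mean NLL)
```

Surprisal is just the per-token negative log-probability before averaging, and BPC divides the total NLL (in bits) by the character count.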
October 15, 2025 at 5:26 PM
Last week, I presented my work on "Assessing the Political Biases of Multilingual LLMs" at the EALM workshop @ TALN 2025! Thanks again to the ANR Diké project for organizing the workshop.
July 7, 2025 at 7:59 AM
"meticulously" is so absent from this list (from aclanthology.org/2025.coling-... )
June 16, 2025 at 8:25 AM
Amazed at what a COLING paper could look like in the '80s
February 20, 2025 at 9:26 AM
Hope you enjoyed our poster at #AISummit! I'm standing next to Pierre-Antoine Lequeu, @salimhafid.bsky.social, and @manonberriche.bsky.social, but there are more people involved! Zoom in to read their names, or learn more about the project here: about.make.org/democratic-c...
February 11, 2025 at 9:54 AM
accurate quote for NLP researchers visiting Louvre Abu Dhabi after COLING 2025
January 25, 2025 at 6:30 AM
Makes me think of this quiz I made with former PhD students at LISN. Can you spot the fake BERT-based models?
January 22, 2025 at 7:34 PM
Really appreciate the feedback on this paper! It was mainly inspired by Valentin Hofmann et al.'s DagoBERT and "Superbizarre" papers.
January 22, 2025 at 7:08 PM
Hope you enjoyed the presentation!
January 22, 2025 at 7:05 PM
best prompt ever
January 8, 2025 at 11:07 AM
My brother did an awesome PhD 🤩
December 16, 2024 at 5:22 PM
2: Unlike "Likely", "Unlike" is Unlikely, again with @yvofr.bsky.social: because BPE distinguishes between tokens at the beginning of words and tokens at the end, LLMs are unable to generate prefixations.
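To see the boundary effect concretely, here is a minimal sketch, assuming GPT-2's byte-level BPE tokenizer (my choice for illustration, not necessarily the paper's models):

```python
# Quick look at the word-boundary asymmetry in byte-level BPE
# (GPT-2 is an assumed example tokenizer).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
for text in ["likely", "unlikely", " likely", " unlikely"]:
    print(repr(text), "->", tok.tokenize(text))
# GPT-2 marks word-initial tokens with "Ġ" (an encoded leading space),
# so the same substring at the start of a word and inside another word
# maps to different vocabulary entries.
```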
December 16, 2024 at 5:20 PM
1: Towards the Machine Translation of Scientific Neologisms, with @yvofr.bsky.social: ever struggled to translate a new term such as "pretraining" or "Reinforcement Learning from Human Feedback"? We aim to leverage the definitions of terms to translate them more accurately.
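A rough sketch of the idea, assuming a prompt-based setup; the prompt below is my own hypothetical illustration, not the paper's actual method:

```python
# Hypothetical definition-augmented translation prompt (illustration only;
# the paper's actual approach may differ).
def definition_prompt(term: str, definition: str, tgt_lang: str = "French") -> str:
    return (
        f'The term "{term}" is defined as: {definition}\n'
        f'Given this definition, translate "{term}" into {tgt_lang}.'
    )

print(definition_prompt(
    "pretraining",
    "training a model on a large generic corpus before adapting it to a downstream task",
))
```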
December 16, 2024 at 5:17 PM