Cesare
@cesare-spinoso.bsky.social
Hello! I'm Cesare (pronounced Chez-array). I'm a PhD student at McGill/Mila working in NLP/computational pragmatics.

@mcgill-nlp.bsky.social
@mila-quebec.bsky.social
https://cesare-spinoso.github.io/
Thanks to collaborators David Austin, Pablo Piantanida and Jackie Cheung. We also received some amazing feedback from the @mila-quebec.bsky.social @mcgill-nlp.bsky.social community! And thanks to Jennifer Hu, Justine Kao and Polina Tsvilodub for sharing their datasets.
June 26, 2025 at 3:57 PM
Other cool findings:
1. We prove that (RSA)^2 is more expressive than QUD-based RSA.
2. Naively applying RSA to LLMs leads to probability 𝘴𝘱𝘳𝘦𝘢𝘥𝘪𝘯𝘨, not 𝘯𝘢𝘳𝘳𝘰𝘸𝘪𝘯𝘨! Are there better ways to use RSA with LLMs? (See the vanilla-RSA sketch after this post.)
3. What if we don't know the rhetorical strategies? We develop a clustering algorithm too!
June 26, 2025 at 3:53 PM
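For intuition, here is a minimal vanilla-RSA recursion sketch in Python. This is the generic textbook formulation, not the paper's; the literal-listener matrix and the rationality parameter are made-up toy values. Whether the pragmatic listener comes out sharper or flatter than the literal one depends entirely on that matrix, which is the issue point 2 raises when the matrix is derived from an LLM.

import numpy as np

# Hypothetical literal listener L0(meaning | utterance):
# rows = utterances, columns = meanings (made-up numbers; in the setting
# point 2 describes, these would come from an LLM's probabilities).
L0 = np.array([
    [0.6, 0.4],
    [0.3, 0.7],
])

alpha = 1.0  # speaker rationality (an assumption for this toy example)

# Pragmatic speaker S1(utterance | meaning) is proportional to L0(meaning | utterance)^alpha:
# normalize each column, i.e. over utterances.
S1 = L0 ** alpha
S1 = S1 / S1.sum(axis=0, keepdims=True)

# Pragmatic listener L1(meaning | utterance) is proportional to S1(utterance | meaning) * P(meaning),
# with a uniform prior over meanings: normalize each row, i.e. over meanings.
L1 = S1 / S1.sum(axis=1, keepdims=True)

print("Literal listener L0:\n", L0)
print("Pragmatic listener L1:\n", L1)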
What about LLMs? We integrate LLMs within (RSA)^2 and test them on a new dataset, PragMega+. We show that LLMs augmented with (RSA)^2 produce probability distributions which are more aligned with human expectations.
June 26, 2025 at 3:53 PM
We test (RSA)^2 on two existing figurative language datasets: hyperbolic number expressions (e.g. “This kettle costs $1000”) and ironic utterances about the weather (e.g. “The weather is amazing” during a Montreal blizzard). We obtain meaning distributions that are compatible with those of humans!
June 26, 2025 at 3:53 PM
We develop (RSA)^2: a 𝘳𝘩𝘦𝘵𝘰𝘳𝘪𝘤𝘢𝘭-𝘴𝘵𝘳𝘢𝘵𝘦𝘨𝘺-𝘢𝘸𝘢𝘳𝘦 probabilistic framework for figurative language. In (RSA)^2, one listener interprets language literally, another interprets it ironically, and so on. These listeners are marginalized to produce a distribution over possible meanings.
June 26, 2025 at 3:52 PM
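For concreteness, here is a minimal Python sketch of that marginalization step. This is not the paper's exact formulation: the two strategies, the per-strategy listener probabilities, and the strategy prior below are hypothetical toy values, reusing the blizzard example from the post above.

import numpy as np

# Toy meanings for the utterance "The weather is amazing" said during a blizzard.
meanings = ["the weather is good", "the weather is terrible"]

# Hypothetical per-strategy listener distributions P(meaning | utterance, strategy).
listener_by_strategy = {
    "literal": np.array([0.95, 0.05]),  # takes the utterance at face value
    "ironic":  np.array([0.05, 0.95]),  # flips the evaluative meaning
}

# Hypothetical prior over rhetorical strategies in this context (a blizzard).
strategy_prior = {"literal": 0.2, "ironic": 0.8}

# Marginalize over strategies: P(m | u) = sum over s of P(s) * P(m | u, s).
meaning_dist = sum(strategy_prior[s] * listener_by_strategy[s] for s in strategy_prior)

for m, p in zip(meanings, meaning_dist):
    print(f"P({m!r}) = {p:.2f}")  # most mass lands on "the weather is terrible"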
Reposted by Cesare
Ada is an undergrad and will soon be looking for PhD positions. Gaurav is a PhD student looking for intellectually stimulating internships/visiting positions. They did most of the work without much of my help. Highly recommend them. Please reach out to them if you have any positions.
Language Models Largely Exhibit Human-like Constituent Ordering Preferences
Though English sentences are typically inflexible vis-à-vis word order, constituents often show far more variability in ordering. One prominent theory presents the notion that constituent ordering is ...
arxiv.org
May 1, 2025 at 3:14 PM