Fabio Marson
fabiomarson.bsky.social
Research fellow, EEG enthusiast, bass player, sometimes footballer, occasionally human.
Post-doc at University of Milano-Bicocca exploring semantics of meaningful meaningless strings of letters
@ercbravenewword
Reposted by Fabio Marson
A fascinating read in @theguardian.com on the psycholinguistics of swearing!

Did you know Germans averaged 53 taboo words, while Brits and Spaniards listed only 16?
Great to see the work of our colleague Simone Sulpizio & Jon Andoni Duñabeitia highlighted! 👏

www.theguardian.com/science/2025...
Italian blasphemy and German ingenuity: how swear words differ around the world
Once swearwords were dismissed as a sign of low intelligence, now researchers argue the ‘power’ of taboo words has been overlooked
www.theguardian.com
October 23, 2025 at 12:27 PM
Reposted by Fabio Marson
Great week at #ESLP2025 in Aix-en-Provence! Huge congrats to our colleagues for their excellent talks on computational models, sound symbolism, and multimodal cognition. Proud of the team and the stimulating discussions!
September 25, 2025 at 10:28 AM
Reposted by Fabio Marson
For those who couldn't attend, the recording of Abhilasha Kumar's seminar on form-meaning interactions in novel word learning and memory search is now available on our YouTube channel!

Watch the full presentation here:
www.youtube.com/watch?v=VJTs...
Abhilasha Kumar, Beyond Arbitrariness: How a Word's Shape Influences Learning and Memory
YouTube video by Mbs Vector Space Lab
www.youtube.com
September 12, 2025 at 11:42 AM
Reposted by Fabio Marson
Great presentation by @fabiomarson.bsky.social last Saturday at #AMLAP2025! He shared his latest research using EEG to study how we integrate novel semantic representations ("linguistic chimeras") from context.

Congratulations on a fascinating talk!
September 9, 2025 at 11:10 AM
Reposted by Fabio Marson
1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...
Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings
Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLM...
doi.org
March 19, 2025 at 2:07 PM
Reposted by Fabio Marson
📢 Upcoming Seminar

Words are weird? On the role of lexical ambiguity in language
🗣 Gemma Boleda (Universitat Pompeu Fabra, Spain)
Why is language so ambiguous? Discover how ambiguity balances cognitive simplicity and communicative complexity through large-scale studies.
📍 UniMiB, Room U6-01C, Milan
March 3, 2025 at 1:41 PM