ERCbravenewword
@ercbravenewword.bsky.social
Exploring how new words convey novel meanings in the ERC Consolidator project #BraveNewWord 🧠 Unveiling language and cognition insights 🔍 Join our research journey!
https://bravenewword.unimib.it/
New study in JML with @Marco_Marelli, @mariakna.bsky.social & @kathyrastle.bsky.social

How does morphological knowledge serve as a powerful heuristic for vocabulary growth and reading efficiency? How do readers navigate noisy text to learn the meanings of affixes?

www.sciencedirect.com/science/arti...
Morphemes in the wild: Modelling affix learning from the noisy landscape of natural text
Morphological knowledge serves as a powerful heuristic for vocabulary growth and contributes significantly to the speed and efficiency of reading. Whi…
www.sciencedirect.com
January 26, 2026 at 9:48 AM
Our seminar archive is available online for those interested in the latest research on language and cognition.

The latest upload features Prof. Giovanni Cassani (Tilburg University), discussing how humans and LLMs interpret novel words in context.

www.youtube.com/channel/UClH...
Mbs Vector Space Lab
www.youtube.com
January 20, 2026 at 8:35 AM
Join us for our next seminar featuring Prof. Giovanni Cassani (Tilburg University). We will explore how humans and LLMs interpret novel words in context.

🗓️ Today, Jan 19, 2:00 PM CET 📍 Bicocca (U6 - Sala Lauree) 🌐 Join online: meet.google.com/suf-ybti-oop
January 12, 2026 at 12:19 PM
How does the brain handle semantic composition?

Our new Cerebral Cortex paper shows the left inferior frontal gyrus (BA45) does it automatically, even when task-irrelevant. We used fMRI + computational models.

Congrats Marco Ciapparelli, Marco Marelli & team!

doi.org/10.1093/cerc...
Compositionality in the semantic network: a model-driven representational similarity analysis
Abstract. Semantic composition allows us to construct complex meanings (e.g., “dog house”, “house dog”) from simpler constituents (“dog”, “house”). Neuroim
academic.oup.com
October 31, 2025 at 6:19 AM
Reposted by ERCbravenewword
🚨 New publication: How to improve conceptual clarity in psychological science?

Thrilled to see this article with @ruimata.bsky.social out. We discuss how LLMs can be leveraged to map, clarify, and generate psychological measures and constructs.

Open access article: doi.org/10.1177/0963...
October 23, 2025 at 7:27 AM
A fascinating read in @theguardian.com on the psycholinguistics of swearing!

Did you know Germans averaged 53 taboo words, while Brits and Spaniards listed only 16?
Great to see the work of our colleague Simone Sulpizio & Jon Andoni Duñabeitia highlighted! 👏

www.theguardian.com/science/2025...
Italian blasphemy and German ingenuity: how swear words differ around the world
Once swearwords were dismissed as a sign of low intelligence, now researchers argue the ‘power’ of taboo words has been overlooked
www.theguardian.com
October 23, 2025 at 12:27 PM
Join us for our next seminar! We're excited to host Hsieh Cheng-Yu (University of London)

He'll discuss "Making sense from the parts: What Chinese compounds tell us about reading," exploring how we process ambiguity & meaning consistency

🗓️ 27th Oct ⏰ 2 PM (CET) 📍 UniMiB 💻 meet.google.com/zvk-owhv-tfw
October 19, 2025 at 7:35 AM
Reposted by ERCbravenewword
I'm sharing a Colab notebook on using large language models for cognitive science! GitHub repo: github.com/MarcoCiappar...

It's geared toward psychologists & linguists and covers extracting embeddings, computing predictability measures, and comparing models across languages & modalities (vision). See examples 🧵
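For a flavour of what such a notebook might cover, here is a minimal sketch (not the repository's actual code): a contextual sentence embedding from an encoder model and word-level surprisal from a causal language model, via Hugging Face transformers. The model names and the example sentence are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the notebook's code): a contextual embedding
# from an encoder model and word-level surprisal from an autoregressive model.
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

sentence = "The linguist coined a brave new word."  # example sentence (assumption)

# Contextual embedding: mean-pool the last hidden states of an encoder model.
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    hidden = encoder(**enc_tok(sentence, return_tensors="pt")).last_hidden_state
sentence_embedding = hidden.mean(dim=1).squeeze(0)  # shape: (768,)

# Predictability: per-token surprisal under a causal language model (GPT-2).
lm_tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
ids = lm_tok(sentence, return_tensors="pt").input_ids
with torch.no_grad():
    logits = lm(ids).logits
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)              # predict next token
surprisal = -log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]  # nats per token
for token, s in zip(lm_tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisal):
    print(f"{token:>12s}  surprisal = {s.item():.2f}")
```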
July 18, 2025 at 1:40 PM
Reposted by ERCbravenewword
New paper! 🚨 I argue that LLMs represent a synthesis between distributed and symbolic approaches to language, because, when exposed to language, they develop highly symbolic representations and processing mechanisms in addition to distributed ones.
arxiv.org/abs/2502.11856
September 30, 2025 at 1:16 PM
Reposted by ERCbravenewword
Important fMRI/RSA study by @marcociapparelli.bsky.social et al. Compositional (multiplicative) representations of compounds/phrases in left IFG (BA45), mSTS, ATL; left AG encodes the constituents, not their composition, weighting the right element more, and vice versa in the IFG 🧠🧩
academic.oup.com/cercor/artic...
Compositionality in the semantic network: a model-driven representational similarity analysis
Abstract. Semantic composition allows us to construct complex meanings (e.g., “dog house”, “house dog”) from simpler constituents (“dog”, “house”). Neuroim
academic.oup.com
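As a rough, hypothetical illustration of the model-driven RSA logic (not the study's code or data): build candidate model RDMs from word embeddings, one using element-wise (multiplicative) composition and two using single constituents, then correlate each with a neural RDM. All vectors below are random placeholders.

```python
# Hypothetical RSA sketch with random placeholder data (not the study's analysis).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, dim = 20, 300
left = rng.normal(size=(n_items, dim))    # embeddings of left constituents
right = rng.normal(size=(n_items, dim))   # embeddings of right constituents

# Candidate representational models for each compound/phrase.
models = {
    "multiplicative": left * right,       # element-wise compositional model
    "left_only": left,                    # constituent-only alternatives
    "right_only": right,
}

# Placeholder "neural" RDM standing in for a brain region's activation patterns.
neural_rdm = pdist(rng.normal(size=(n_items, 50)), metric="correlation")

for name, vectors in models.items():
    model_rdm = pdist(vectors, metric="cosine")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{name:>14s}: Spearman rho = {rho:.3f}")
```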
September 26, 2025 at 9:29 AM
Great week at #ESLP2025 in Aix-en-Provence! Huge congrats to our colleagues for their excellent talks on computational models, sound symbolism, and multimodal cognition. Proud of the team and the stimulating discussions!
September 25, 2025 at 10:28 AM
Reposted by ERCbravenewword
📣 The chapter "Specificity: Metric and Norms"
w/ @mariannabolog.bsky.social is now online & forthcoming in the #ElsevierEncyclopedia of Language & Linguistics
🔍 Theoretical overview, quantification tools, and behavioral evidence on specificity.
👉 Read: urly.it/31c4nm
@abstractionerc.bsky.social
September 18, 2025 at 8:57 AM
Reposted by ERCbravenewword
The dataset includes over 240K fixations and 150K word-level metrics, with saccade, fixation, and (word) interest area reports. Preprint osf.io/preprints/os..., data osf.io/hx2sj/. Work conducted with @davidecrepaldi.bsky.social and Maria Ktori. (2/2)
OSF
osf.io
August 22, 2025 at 6:49 PM
Reposted by ERCbravenewword
How can we reduce conceptual clutter in the psychological sciences?

@ruimata.bsky.social and I propose a solution based on a fine-tuned 🤖 LLM (bit.ly/mpnet-pers) and test it for 🎭 personality psychology.

The paper is finally out in @natrevpsych.bsky.social: go.nature.com/4bEaaja
March 11, 2025 at 10:57 AM
For those who couldn't attend, the recording of Abhilasha Kumar's seminar on form-meaning interactions in novel word learning and memory search is now available on our YouTube channel!

Watch the full presentation here:
www.youtube.com/watch?v=VJTs...
Abhilasha Kumar, Beyond Arbitrariness: How a Word's Shape Influences Learning and Memory
YouTube video by Mbs Vector Space Lab
www.youtube.com
September 12, 2025 at 11:42 AM
Reposted by ERCbravenewword
Happy to share that our work on semantic composition is out now -- open access -- in Cerebral Cortex!

With Marco Marelli (@ercbravenewword.bsky.social), @wwgraves.bsky.social & @carloreve.bsky.social.

doi.org/10.1093/cerc...
September 12, 2025 at 9:15 AM
Great presentation by @fabiomarson.bsky.social last Saturday at #AMLAP2025! He shared his latest research using EEG to study how we integrate novel semantic representations, “linguistic chimeras”, from context.

Congratulations on a fascinating talk!
September 9, 2025 at 11:10 AM
For those who couldn't attend, the recording of Prof. Harald Baayen's seminar on morphological productivity and the Discriminative Lexicon Model is now available on our YouTube channel.

Watch the full presentation here:
www.youtube.com/watch?v=zN7G...
The Computational Approach to Morphological Productivity | Harald Baayen at Bicocca
YouTube video by Mbs Vector Space Lab
www.youtube.com
September 9, 2025 at 10:45 AM
New seminar announcement!

Exploring form-meaning interactions in novel word learning and memory search
Abhilasha Kumar (Assistant Professor, Bowdoin College)

A fantastic opportunity to delve into how we learn new words and retrieve them from memory.

💻 Join remotely: meet.google.com/pay-qcpv-sbf
August 27, 2025 at 11:06 AM
📢 Upcoming Seminar!

A computational approach to morphological productivity using the Discriminative Lexicon Model
Professor Harald Baayen (University of Tübingen, Germany)

🗓️ September 8, 2025
2:00 PM - 3:30 PM
📍 UniMiB, Room U6-01C, Milan
🔗 Join remotely: meet.google.com/dkj-kzmw-vzt
August 25, 2025 at 12:52 PM
Reposted by ERCbravenewword
I’d like to share some slides and code for a “Memory Model 101 workshop” I gave recently, which has some minimal examples to illustrate the Rumelhart network & catastrophic interference :)
slides: shorturl.at/q2iKq
code (with colab support!): github.com/qihongl/demo...
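For readers who want the gist before opening the slides, here is an illustrative toy example of catastrophic interference (not the workshop's code): a small network is trained on one set of random patterns, then on a second set, and its error on the first set shoots back up. The architecture and patterns are arbitrary assumptions.

```python
# Toy demonstration of catastrophic interference (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def random_task(n_patterns=20, n_in=16, n_out=8):
    """Random input patterns paired with random binary targets."""
    return torch.rand(n_patterns, n_in), (torch.rand(n_patterns, n_out) > 0.5).float()

task_a, task_b = random_task(), random_task()

net = nn.Sequential(nn.Linear(16, 32), nn.Sigmoid(), nn.Linear(32, 8), nn.Sigmoid())
opt = torch.optim.SGD(net.parameters(), lr=1.0)
loss_fn = nn.MSELoss()

def train(x, y, epochs=2000):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def error(x, y):
    with torch.no_grad():
        return loss_fn(net(x), y).item()

train(*task_a)
print(f"Task A error after training on A: {error(*task_a):.4f}")  # low
train(*task_b)  # sequential training on B, no interleaving or rehearsal
print(f"Task A error after training on B: {error(*task_a):.4f}")  # rises sharply
print(f"Task B error after training on B: {error(*task_b):.4f}")  # low
```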
May 26, 2025 at 11:56 AM
🎉 We're thrilled to welcome Jing Chen, PhD, to our team!
She investigates how meanings are encoded and evolve, combining linguistic and computational approaches.
Her work spans diachronic modeling of lexical change in Mandarin and semantic transparency in LLMs.
🔗 research.polyu.edu.hk/en/publicati...
ChiWUG: A Graph-based Evaluation Dataset for Chinese Lexical Semantic Change Detection
research.polyu.edu.hk
July 8, 2025 at 10:54 AM
📢 New paper out! We show that auditory iconicity is not marginal in English: word sounds often resemble real-world sounds. Using neural networks and sound similarity measures, we crack the myth of arbitrariness.
Read more: link.springer.com/article/10.3...

@andreadevarda.bsky.social
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English - Psychonomic Bulletin & Review
Auditory iconic words display a phonological profile that imitates their referents’ sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this articl...
link.springer.com
July 4, 2025 at 12:16 PM
Reposted by ERCbravenewword
1/n Happy to share a new paper with Calogero Zarbo & Marco Marelli! How well do LLMs represent the implicit meaning of familiar and novel compounds? How do they compare with simpler distributional semantics models (DSMs; i.e., word embeddings)?
doi.org/10.1111/cogs...
Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings
Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLM...
doi.org
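To make the idea of probing compounds with contextualized embeddings concrete, here is a hypothetical sketch (not the paper's procedure): compare a compound's embedding with embeddings of candidate relational paraphrases. The model, the mean-pooling choice, and the example phrases are assumptions for illustration.

```python
# Hypothetical sketch: which relational paraphrase is closest to the compound?
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled last-layer representation of a piece of text."""
    with torch.no_grad():
        hidden = model(**tok(text, return_tensors="pt")).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

compound = embed("olive oil")
paraphrases = {
    "oil made from olives": embed("oil made from olives"),
    "oil used for olives": embed("oil used for olives"),
}

cos = torch.nn.CosineSimilarity(dim=0)
for paraphrase, vec in paraphrases.items():
    print(f"{paraphrase:>22s}: cosine = {cos(compound, vec).item():.3f}")
```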
March 19, 2025 at 2:07 PM
Reposted by ERCbravenewword
1st post here! Excited to share this work with Marelli & @kathyrastle.bsky.social. We've found that readers "routinely" combine constituent meanings when computing the meaning of Chinese compounds, despite variability in constituent meaning and word structure, even when they're not asked to. See the thread 👇 for more details:
Compositional processing in the recognition of Chinese compounds: Behavioural and computational studies - Psychonomic Bulletin & Review
Recent research has shown that the compositional meaning of a compound is routinely constructed by combining meanings of constituents. However, this body of research has focused primarily on Germanic ...
doi.org
March 10, 2025 at 3:37 PM