Marco Ciapparelli
@marcociapparelli.bsky.social
Postdoc in psychology and cognitive neuroscience, mainly interested in conceptual combination, semantic memory, and computational modeling.
https://marcociapparelli.github.io/
Compare concept representations across modalities in unimodal models, using the AlexNet convolutional neural network to represent images and an LLM to represent their captions
July 18, 2025 at 1:40 PM
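A minimal sketch of this cross-modal setup, assuming torchvision's pretrained AlexNet and bert-base-uncased for captions; the image file and caption below are placeholders, not items from the notebook:

```python
# Sketch: an AlexNet image representation alongside an LLM caption embedding.
# Model choices and the image/caption pair are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image
from transformers import AutoTokenizer, AutoModel

# --- Vision side: penultimate fully connected layer of AlexNet ---
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)  # placeholder image
with torch.no_grad():
    feats = alexnet.avgpool(alexnet.features(img)).flatten(1)
    img_vec = alexnet.classifier[:5](feats)  # activations of the second FC layer

# --- Language side: mean-pooled BERT embedding of the caption ---
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased").eval()
enc = tok("a dog running on the beach", return_tensors="pt")
with torch.no_grad():
    txt_vec = lm(**enc).last_hidden_state.mean(dim=1)

# The two spaces have different dimensionalities, so they are compared
# indirectly: build one RDM per modality over many concepts, then correlate
# the RDMs (see the RSA sketch below).
print(img_vec.shape, txt_vec.shape)
```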
Perform representational similarity analysis to compare how the same concepts are represented across languages (in their corresponding monolingual models) and in different layers of LLMs
July 18, 2025 at 1:40 PM
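The RSA step itself is compact. A sketch, with random arrays standing in for embeddings of the same concept list extracted from two monolingual models:

```python
# Sketch of representational similarity analysis (RSA) between two models.
# Real inputs would be embeddings of the same N concepts from, e.g., an
# English and an Italian monolingual model; random data stands in here.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
emb_english = rng.normal(size=(50, 768))   # 50 concepts x hidden size
emb_italian = rng.normal(size=(50, 1024))  # dimensionalities need not match

# Representational dissimilarity matrices (condensed form): one pairwise
# cosine distance per concept pair, within each model's own space.
rdm_en = pdist(emb_english, metric="cosine")
rdm_it = pdist(emb_italian, metric="cosine")

# Second-order comparison: correlate the two RDMs. A high Spearman rho means
# the two models impose a similar geometry on the concept set.
rho, p = spearmanr(rdm_en, rdm_it)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3g}")
```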
Replace words with sense-appropriate and sense-inappropriate alternatives in the annotated WiC (Word-in-Context) dataset and look at the effects of context-word interaction on embeddings and surprisal
July 18, 2025 at 1:40 PM
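A hedged sketch of the surprisal side using GPT-2; the sentence frame and substitute words are invented for illustration, not items from WiC:

```python
# Sketch: surprisal of a target word in context, via a causal LM (GPT-2 here).
# Sentence and substitutes are invented sense-appropriate/-inappropriate swaps.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(context: str, target: str) -> float:
    """Summed surprisal (in bits) of the target's tokens given the context."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    tgt_ids = tok(" " + target, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    bits = 0.0
    for i in range(tgt_ids.shape[1]):
        pos = ctx_ids.shape[1] + i  # position of the i-th target token
        # logits at pos-1 predict the token at pos; convert nats to bits
        bits += -log_probs[0, pos - 1, ids[0, pos]].item() / math.log(2)
    return bits

# A sense-appropriate substitute should be less surprising than a mismatched one.
print(surprisal("She deposited her savings at the", "bank"))    # original word
print(surprisal("She deposited her savings at the", "lender"))  # sense-appropriate
print(surprisal("She deposited her savings at the", "shore"))   # sense-inappropriate
```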
Extract word embeddings from BERT and inspect how context can modulate their representations. For example, what happens to "fruitless" when we place it in a sentence that points to its typical metaphorical meaning ("vain") as opposed to one where its meaning is literal ("without fruits")?
July 18, 2025 at 1:40 PM
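A minimal sketch of that probe with bert-base-uncased; the sentences are invented for illustration:

```python
# Sketch: contextualized embeddings of "fruitless" in a metaphorical vs. a
# literal context, compared with cosine similarity. Sentences are invented.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of the last-layer vectors for the word's subtokens in the sentence."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    word_ids = tok(word, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    # locate the word's subtoken span within the sentence
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in sentence")

metaphorical = word_embedding("All their efforts proved fruitless.", "fruitless")
literal = word_embedding("After the frost, the trees were fruitless.", "fruitless")
sim = torch.cosine_similarity(metaphorical, literal, dim=0)
print(f"cosine similarity across contexts: {sim:.3f}")
```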
I'm sharing a Colab notebook on using large language models for cognitive science! GitHub repo: github.com/MarcoCiappar...

It's geared toward psychologists & linguists and covers extracting embeddings, predictability measures, and comparing models across languages & modalities (vision). See examples 🧵
July 18, 2025 at 1:40 PM
5/7 We conduct confirmatory RSA in four ROIs for which we have a priori hypotheses of ROI-model correspondence (based on what we know of composition in models and what has been claimed of composition in ROIs), and searchlight RSAs in the general semantic network.
April 28, 2025 at 12:33 PM
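For concreteness, one confirmatory ROI analysis follows the same second-order logic as model-to-model RSA, now between a neural RDM and a model RDM; the arrays below are random stand-ins, not our data:

```python
# Sketch: correlate a model RDM with a neural RDM built from (simulated)
# voxel patterns in one ROI. Condition and voxel counts are arbitrary.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
roi_patterns = rng.normal(size=(40, 200))      # 40 conditions x 200 voxels
model_embeddings = rng.normal(size=(40, 768))  # same conditions, model space

neural_rdm = pdist(roi_patterns, metric="correlation")
model_rdm = pdist(model_embeddings, metric="cosine")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"ROI-model RSA: rho={rho:.3f}, p={p:.3g}")
```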
10/n As expected, LLMs vastly outperform DSMs on familiar compounds. Yet, unlike DSMs, LLM performance drops considerably on novel compounds. In fact, on novel compounds, some DSMs outperform the best layer of BERT and Llama! (image shows model fit; the lower the better).
March 19, 2025 at 2:07 PM
7/n So, we measured the cosine similarity between the contextualized word embeddings (CWEs) of “snowman” in its original and paraphrased forms and took the result as an estimate of the interpretation's plausibility. In this way, changes in cosine similarity can be more directly attributed to target relations.
March 19, 2025 at 2:07 PM
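Sketched below under loose assumptions about the exact sentence frames and pooling (the sentences and the MADE-OF paraphrase are invented examples): the compound's CWE is compared with that of a relational paraphrase placed in a matched sentence.

```python
# Sketch: cosine similarity between the contextualized embedding of a compound
# and that of a relational paraphrase in a matched sentence frame.
# Sentences, frame, and pooling choices are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def span_embedding(sentence: str, span: str) -> torch.Tensor:
    """Mean last-layer vector over the span's subtokens within the sentence."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    span_ids = tok(span, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    for i in range(len(ids) - len(span_ids) + 1):
        if ids[i:i + len(span_ids)] == span_ids:
            return hidden[i:i + len(span_ids)].mean(dim=0)
    raise ValueError(f"{span!r} not found in sentence")

original = span_embedding("The children built a snowman.", "snowman")
# Paraphrase spelling out a MADE-OF relation between the constituents.
paraphrase = span_embedding("The children built a man made of snow.",
                            "man made of snow")

plausibility = torch.cosine_similarity(original, paraphrase, dim=0)
print(f"estimated plausibility: {plausibility:.3f}")
```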