Davide Cortinovis
@davidecortinovis.bsky.social
PhD student in the Object Vision Group at CIMeC, University of Trento. Interested in neuroimaging and object perception. He/him 🏳️‍🌈
https://davidecortinovis-droid.github.io/
Pinned
New preprint out! We propose that action is a key dimension shaping the topographic organization of object categories in lateral occipitotemporal cortex (LOTC)—and test whether standard and topographic neural networks capture this pattern. A thread:

www.biorxiv.org/content/10.1...

🧵 1/n
Investigating action topography in visual cortex and deep artificial neural networks
High-level visual cortex contains category-selective areas embedded within larger-scale topographic maps like animacy and real-world size. Here, we propose action as a key organizing factor shaping vi...
www.biorxiv.org
Reposted by Davide Cortinovis
Our new preprint on the FOODEEG open dataset is out! EEG recordings and behavioural responses on food cognition tasks for 117 participants will be made publicly available 🧠 @danfeuerriegel.bsky.social @tgro.bsky.social

www.biorxiv.org/content/10.1...
FOODEEG: An open dataset of human electroencephalographic and behavioural responses to food images
Investigating the neurocognitive mechanisms underlying food choices has the potential to advance our understanding of eating behaviour and inform health-targeted interventions and policy. Large, publi...
www.biorxiv.org
November 10, 2025 at 11:34 PM
Reposted by Davide Cortinovis
“Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions”
New work led by Andre Bockes and Angelika Lingnau - with some small support from me - on dimensions underlying the mental representation of dynamic human actions.

www.nature.com/articles/s44...
Revealing Key Dimensions Underlying the Recognition of Dynamic Human Actions - Communications Psychology
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions—such as interaction, sport, and craft—offering a framework to quantify and compare human actions.
www.nature.com
October 27, 2025 at 7:23 PM
Reposted by Davide Cortinovis
🚨Preprint: Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe

1/8: How do human neurons encode meaning?
In this work, led by Katharina Karkowski, we recorded hundreds of human MTL neurons to study semantic coding in the human brain:

doi.org/10.1101/2025...
Semantic Tuning of Single Neurons in the Human Medial Temporal Lobe
The Medial Temporal Lobe (MTL) is key to human cognition, supporting memory, emotional processing, navigation, and semantic coding. Rare direct human MTL recordings revealed concept cells, which were ...
doi.org
October 27, 2025 at 3:32 PM
Reposted by Davide Cortinovis
🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.

Paper: arxiv.org/abs/2510.03684

🧵
October 7, 2025 at 3:22 PM
Reposted by Davide Cortinovis
@annabavaresco.bsky.social and @tlmnhut.bsky.social show: supervised pruning of a DNN's feature space better aligns with human category representations, selects distinct subspaces for different categories, and more accurately predicts people's preferences for GenAI images.
doi.org/10.1145/3768...
Modeling Human Concepts with Subspaces in Deep Vision Models | ACM Transactions on Interactive Intelligent Systems
Improving the modeling of human representations of everyday semantic categories, such as animals or food, can lead to better alignment between AI systems and humans. Humans are thought to represent su...
doi.org
September 22, 2025 at 7:28 PM
Reposted by Davide Cortinovis
Functional organization of the human visual system at birth and across late gestation https://www.biorxiv.org/content/10.1101/2025.09.22.677834v1
September 22, 2025 at 11:16 PM
Reposted by Davide Cortinovis
New findings from my lab in Nature Communications suggest that racial stereotypes can lead the brain's perceptual system to temporarily "see" weapons where they don't exist.

Led by: @dongwonoh.bsky.social

Paper: www.nature.com/articles/s41...

(1/6)
September 19, 2025 at 3:19 PM
Reposted by Davide Cortinovis
New in #JNeurosci: People share the same brain responses to different colors, and Bannert and Bartels predicted what color a person is looking at by using the brain activity of others. @unituebingen.bsky.social https://doi.org/10.1523/JNEUROSCI.2717-20.2025
September 8, 2025 at 7:44 PM
Reposted by Davide Cortinovis
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
📊 Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
September 8, 2025 at 6:32 PM
Reposted by Davide Cortinovis
Excited to share my latest work with @jonathanamichaels.bsky.social @diedrichsenjorn.bsky.social & @andpru.bsky.social!
We asked: How does the motor cortex account for arm posture when generating movement?
Paper 👉 www.biorxiv.org/content/10.1...
1/10
Compositional neural dynamics during reaching
The complex mechanics of the arm make the neural control of reaching inherently posture dependent. Because previous reaching studies confound reach direction with final posture, it remains unknown how...
www.biorxiv.org
September 6, 2025 at 1:12 PM
Reposted by Davide Cortinovis
"Assuming that functional specialisation necessarily implies an ‘encapsulated module’ is a widely recognised error even in evolutionary accounts."
Sepehr Razavi, Michael Moutoussis, Peter Dayan, Nichola Raihani, Vaughan Bell & Joseph Barnby, Pseudo-approaches lead to pseudo-explanations: reply to Corlett et al. - PhilPapers
philpapers.org
September 5, 2025 at 5:26 PM
Reposted by Davide Cortinovis
Shared texture-like representations, not global form, underlie deep neural network alignment with human visual processing https://www.biorxiv.org/content/10.1101/2025.08.29.673066v1
September 5, 2025 at 12:15 AM
Reposted by Davide Cortinovis
Can self-supervised learning help us understand how the brain learns to see the world?

Our latest study, led by Josephine Raugel (FAIR, ENS), is now out:

📄 arxiv.org/pdf/2508.18226
🧵 thread below
September 3, 2025 at 5:18 AM
Reposted by Davide Cortinovis
Our target discussion article is out in Cognitive Neuroscience! It will be followed by peer commentary and our responses. If you would like to write a commentary, please reach out to the journal! 1/18 www.tandfonline.com/doi/full/10.... @cibaker.bsky.social @susanwardle.bsky.social
August 29, 2025 at 6:43 PM
Reposted by Davide Cortinovis
Now out in @natneuro.nature.com

What happens to the brain’s body map when a body part is removed?

Scanning patients before and up to 5 yrs after arm amputation, we discovered the brain’s body map is strikingly preserved despite amputation

www.nature.com/articles/s41593-025-02037-7

🧵1/18
August 21, 2025 at 9:20 AM
Reposted by Davide Cortinovis
Congratulations to Flo Martinez-Addiego and @striemamit.bsky.social for the publication of a cool new paper in PNAS showing that high-level actions like tool use generalize between hand and foot, even in individuals born without hands. www.pnas.org/doi/10.1073/...
Action-type mapping principles extend beyond evolutionarily conserved actions, even in people born without hands | PNAS
How are actions represented in the motor system? Although the sensorimotor system is broadly organized somatotopically, higher-level sensorimotor a...
www.pnas.org
August 20, 2025 at 3:30 PM
Reposted by Davide Cortinovis
Our new paper, “A neural compass in the human brain during naturalistic human navigation” is out in @sfnjournals.bsky.social! First-author @zhenganglu.bsky.social led the charge, with Josh Julian and collaborator @gkaguirre.com.

www.jneurosci.org/content/earl...
August 19, 2025 at 8:29 PM
Reposted by Davide Cortinovis
On the left is a rabbit. On the right is an elephant. But guess what: They’re the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images—known as “visual anagrams”—can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ
August 19, 2025 at 4:32 PM
Reposted by Davide Cortinovis
Presented my study a few days ago at a minitalk during the Cognitive Science Arena #CSA in Brixen. It was fun :)
#VisionScience
#ObjectPerception
February 16, 2025 at 6:09 PM
Reposted by Davide Cortinovis
At #EWCN I presented my poster. Preliminary results from 2 #fMRI experiments (2nd ongoing) suggest scene clutter may influence size & animacy along the ventral stream, with animacy staying robust after mid-level controls. Feedback welcome!

#VisionNeuroscience #CognitiveNeuroscience #VisionScience
February 3, 2025 at 2:10 PM
Reposted by Davide Cortinovis
🚨 Preprint alert! Excited to share my second PhD project: “Adopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168
July 8, 2025 at 1:09 PM
Interested in category selectivity and topographic modelling? Come see my poster tomorrow at CCN (A57). We show that encoding models reveal dissociable selective responses to bodies, hands, and tools, and test whether topographic ANNs capture that organization.
See you there!
August 11, 2025 at 3:26 PM
Reposted by Davide Cortinovis
New lab preprint, led by @tlmnhut.bsky.social. We show that certain topographic CNNs offer computational advantages, including greater weight matrix robustness, better handling of OOD noisy data, and higher entropy of unit activation.
arxiv.org/abs/2508.00043
Improved Robustness and Functional Localization in Topographic CNNs Through Weight Similarity
Topographic neural networks are computational models that can simulate the spatial and functional organization of the brain. Topographic constraints in neural networks can be implemented in multiple w...
arxiv.org
August 7, 2025 at 8:21 PM