Yang Chang
@yang1354.bsky.social
PhD student in Cognitive Neuroscience
My research aims to characterize aesthetic experiences 🖼️ and creative processes 🎨 in the human mind and brain.
#Aesthetics #Creativity
Reposted by Yang Chang
Disconnection between brain regions explains why some people don’t enjoy music. http://dlvr.it/TMMDYD

@cp-trendscognisci.bsky.social
@ub.edu Ernest Mas-Herrero, Robert J. Zatorre
@mcgill.ca Josep Marco-Pallarés
August 7, 2025 at 3:06 PM
Reposted by Yang Chang
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social, @alexanderkroner.bsky.social, @carmenamme.bsky.social, @timkietzmann.bsky.social
🧵 1/14
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...
arxiv.org
November 18, 2025 at 12:37 PM
Reposted by Yang Chang
New CNeuroMod-THINGS open-access fMRI dataset: 4 participants · ~4,000 images (720 categories), each shown 3× (12k trials per subject) · individual functional localizers & NSD-inspired QC. Preprint: arxiv.org/abs/2507.09024 Congrats Marie St-Laurent and @martinhebart.bsky.social !!
July 30, 2025 at 1:57 AM
Reposted by Yang Chang
New preprint! 🚨→ Determinants of Visual Ambiguity Resolution. A new work with @ortiztudela.bsky.social @jvoeller.bsky.social @martinhebart.bsky.social and @gonzalezgarcia.bsky.social

We created ~2k images and collected ~100k responses to study visual ambiguity.

www.biorxiv.org/content/10.1...
May 29, 2025 at 12:46 PM
Reposted by Yang Chang
New preprint, w/ @predictivebrain.bsky.social !

we've found that visual cortex, even when just viewing natural scenes, predicts *higher-level* visual features

This aligns with developments in ML, but challenges some assumptions about early sensory cortex

www.biorxiv.org/content/10.1...
Higher-level spatial prediction in natural vision across mouse visual cortex
Theories of predictive processing propose that sensory systems constantly predict incoming signals, based on spatial and temporal context. However, evidence for prediction in sensory cortex largely co...
www.biorxiv.org
May 23, 2025 at 11:39 AM
Reposted by Yang Chang
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
Discovering cognitive strategies with tiny recurrent neural networks - Nature
Modelling biological decision-making with tiny recurrent neural networks enables more accurate predictions of animal choices than classical cognitive models and offers insights into the underlying cog...
doi.org
July 2, 2025 at 7:03 PM
Reposted by Yang Chang
New preprint! In arxiv.org/abs/2502.20349 “Naturalistic Computational Cognitive Science: Towards generalizable models and theories that capture the full range of natural behavior” we synthesize AI & cognitive science works into a perspective on seeking generalizable understanding of cognition. Thread:
Naturalistic Computational Cognitive Science: Towards generalizable models and theories that capture the full range of natural behavior
Artificial Intelligence increasingly pursues large, complex models that perform many tasks within increasingly realistic domains. How, if at all, should these developments in AI influence cognitive sc...
arxiv.org
February 28, 2025 at 5:14 PM
Reposted by Yang Chang
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168
arxiv.org
July 8, 2025 at 1:04 PM
Reposted by Yang Chang
@neocosmliang.bsky.social demonstrating a domain general neural signature that predicts perceived beauty of objects 🌹🥀 in collab with the amazing @martinhebart.bsky.social and @dkaiserlab.bsky.social #THINGS
June 25, 2025 at 12:05 AM
Reposted by Yang Chang
Meet the MIT engineer who invented an AI-powered way to restore art
June 17, 2025 at 12:13 PM
Reposted by Yang Chang
Excited to share this project specifying a research direction I think will be particularly fruitful for theory-driven cognitive science that aims to explain natural behavior!

We're calling this direction "Naturalistic Computational Cognitive Science"
June 16, 2025 at 7:30 PM
Reposted by Yang Chang
1. The evolution of [proto] music. My arguments against the social bonding hypothesis and in favor of the signaling hypothesis 🧪

The Explicandum: The evolution in the human lineage of vocalizations that are:

* Loud
* Synchronized
* Variable

🧵
May 27, 2025 at 10:29 PM
Reposted by Yang Chang
Looking at Van Gogh’s Starry Night, we see not only its content (a French village beneath a night sky) but also its *style*. How does that work? How do we see style?

In @nathumbehav.nature.com, @chazfirestone.bsky.social & I take an experimental approach to style perception! osf.io/preprints/ps...
May 14, 2025 at 4:42 PM
Reposted by Yang Chang
#CAOS2025 starting in Rovereto, with a fantastic talk by Wilma Bainbridge on memorability. Among others, she looked at memorability of paintings in the Art Institute of Chicago, highlighting the role of context, painting size & interestingness on memorability, & that famous pieces are more memorable.
May 8, 2025 at 7:31 AM
Reposted by Yang Chang
People viewing abstract art show more interindividual variability in activity in high-order brain areas than people viewing representational art. According to the authors, abstract art is completed by the viewer, who imbues it with meaning. In PNAS: www.pnas.org/doi/10.1073/...
April 17, 2025 at 6:29 PM
Reposted by Yang Chang
Where a person will look next can be predicted based on how much it costs the brain to move the eyes in that direction.
buff.ly/SXw5LPy
A move you can afford
Where a person will look next can be predicted based on how much it costs the brain to move the eyes in that direction.
buff.ly
April 13, 2025 at 8:01 PM
Reposted by Yang Chang
Creativity as aesthetics in reverse? How does the Mirror Model of Art hold up when seen through the lens of neuroscience? It doesn't.

New paper with Oshin Vartanian, Delaram Farzanfar and Pablo Tinio in Neuropsychologia.
doi.org/10.1016/j.ne...

@uoftpsychology.bsky.social
@uoft.bsky.social
April 11, 2025 at 1:05 AM
Reposted by Yang Chang
Nature research paper: Mastering diverse control tasks through world models

https://go.nature.com/3YigkQB
Mastering diverse control tasks through world models - Nature
A general reinforcement-learning algorithm, called Dreamer, outperforms specialized expert algorithms across diverse tasks by learning a model of the environment and improving its behaviour by imagining future scenarios.
go.nature.com
April 4, 2025 at 12:57 PM
Reposted by Yang Chang
"Our results suggest that art stands as a powerful tool for communicating negative information, that is otherwise costly and unpleasant to engage with... Art may serve as a gateway for staying engaged, and potentially facilitate knowledge, meaningful dialog, and action"

www.pnas.org/doi/10.1073/...
Art promotes exploration of negative content | PNAS
Experiencing negative content through art has a unique power to transform our perceptions and foster engagement. While this idea has been widely di...
www.pnas.org
March 24, 2025 at 7:37 PM
Reposted by Yang Chang
Phenomenal paper here from Quinn and Jones published in Nature Mental Health showing

Group arts activities can have moderate benefits for #anxiety and #depression in older adults.

🎨🖌️🎭👵🏻

www.nature.com/articles/s44...

#creativeageing @naturementalhealth.bsky.social
Group arts interventions for depression and anxiety among older adults: a systematic review and meta-analysis - Nature Mental Health
In this systematic review and meta-analysis of group arts interventions for older adults, the authors found that participation in shared artistic experience was associated with lower levels of depress...
www.nature.com
March 25, 2025 at 7:58 PM
Reposted by Yang Chang
“What an odd thing“ wrote Oliver Sacks “to see an entire species playing with listening to meaningless tonal patterns, preoccupied for much of their time by what they call ‘music’...“. Our new paper, led by ace student @giacomobignardi.bsky.social, unpacks this puzzle from a genetic perspective. 🧪
Twin modelling reveals partly distinct genetic pathways to music enjoyment - Nature Communications
Here, Bignardi et al. report on a study of over 9,000 Swedish twins that indicates the ability to enjoy music is influenced by multiple partly distinct genetic factors.
www.nature.com
March 25, 2025 at 10:07 PM
Reposted by Yang Chang
Happy to share that our paper on the #AestheticsToolbox around the #QIP-Machine has been published #OpenAccess in Behavior Research Methods!

It can be used to easily and transparently compute a wide range of quantitative image properties for digital images 📸
📄 link.springer.com/article/10.3...
link.springer.com
March 17, 2025 at 4:30 PM
Reposted by Yang Chang
Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N
February 21, 2025 at 4:29 PM
Reposted by Yang Chang
1/
Our paper is published in Nature Human Behaviour!
tinyurl.com/p28jy3bx

Driving and suppressing brain activity in the human language network with model (GPT)-selected stimuli

With A. Sathe, S. Srikant, M. Taliaferro, M. Wang, @mschrimpf.bsky.social, K. Kay, @evfedorenko.bsky.social
January 3, 2024 at 11:57 PM
Reposted by Yang Chang
Characterizing internal models of the visual environment: http://osf.io/wucx8_v1/
February 21, 2025 at 10:39 AM