Douglas Guilbeault
@douglasguilbeault.bsky.social
Assistant Prof. in Org. Behavior @StanfordGSB | Computational Culture Lab http://comp-culture.org | Social Networks, Cognition, Cultural Evolution, AI
Very interesting! Thanks for sharing. I'd love to pick up on this thread when our paths cross (hopefully sooner than later!)
October 31, 2025 at 9:39 PM
Thanks Laura! Very interesting comment. I agree that variance is an important and underrepresented angle in this discourse. What are some of the valuable questions and/or insights that you think can come from shifting focus toward variance?
October 30, 2025 at 11:00 PM
The TL;DR of our study is captured in this short 3-minute video produced by Emma Richard:
www.youtube.com/watch?v=4Vdw...
Age and Gender Distortion in Online Media and Large Language Models
October 8, 2025 at 3:34 PM
An interesting direction for future research will be to explore whether (and ideally how) incorporating multimodal representations into LLMs shapes their capacity to emulate human embodied metaphorical reasoning, especially about metaphors relating to vision.
June 12, 2025 at 3:10 AM
This work also connects to our prior work demonstrating systematic differences in color perception between vision classifiers (based on deep neural networks & transformers) and humans. www.sciencedirect.com/science/arti...
Divergences in color perception between deep neural networks and humans
June 12, 2025 at 3:10 AM
This work nicely complements a paper published last week showing that text-based LLMs struggle to recover sensorimotor aspects of human concepts: www.nature.com/articles/s41...
Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts - Nature Human Behaviour
June 12, 2025 at 3:10 AM
Huge kudos to Ethan Nadler for leading this effort, and for our fantastic global team of collaborators: Sofronia Ringold, Tom Williamson, Antoine Pepin, Iulia-Maria Comşa, Karim Jerbi, Srini Narayanan, and Lisa Aziz-Zadeh.
June 12, 2025 at 3:10 AM
This was a massive interdisciplinary effort – including physicists, neuroscientists, social scientists, and AI researchers from @Google DeepMind.
June 12, 2025 at 3:10 AM
This has implications for the ongoing conversation around whether and to what extent LLMs can be treated as meaningful emulators of human cognition for psychological and/or sociological studies.
June 12, 2025 at 3:10 AM
This suggests perceptual experience plays a role in metaphorical reasoning. It further suggests that LLMs are limited in their ability to recover the embodied aspects of metaphorical reasoning from statistical correlations among words alone.
June 12, 2025 at 3:10 AM
We show that LLMs struggle to reason coherently about novel color metaphors and are less likely than humans to reference embodied experience in their interpretations. By contrast, painters exhibit the highest rate of embodied reasoning when interpreting novel color metaphors.
June 12, 2025 at 3:10 AM
Thank you for your interest! We welcome any and all feedback.
April 3, 2025 at 11:38 PM