Shawn Manuel
@shwnmnl.bsky.social
ψ/AI PhD Student @ Université de Montréal || Hacker || Metacognizer || Exploring qualia space computationally
shwnmnl.github.io
Pinned
🚀 Thrilled to share my first first-author paper, published in PCN!

We explore how our unique subjective experiences of the world affect mental health using a combination of psychometrics, NLP and genAI.

🔗 Read it here: doi.org/10.1111/pcn....

🧵👇
Towards a latent space cartography of subjective experience in mental health
Aims: The way that individuals subjectively experience the world greatly influences their own mental well-being. However, it remains a considerable challenge to precisely characterize the breadth and...
doi.org
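For intuition only, here's a toy sketch of what mapping free-text experience reports into a shared latent space can look like. This is not the paper's actual pipeline (which combines psychometrics, NLP and genAI); every report, library choice, and parameter below is a made-up stand-in.

```python
# Purely illustrative sketch of a "latent space cartography" of free-text
# experience reports; NOT the paper's method. The mini-reports are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reports = [  # hypothetical one-line subjective-experience reports
    "everything feels distant and muted lately",
    "my thoughts race and i cannot slow them down",
    "colors seem brighter when i am outside with friends",
    "i feel detached, like watching myself from far away",
]

# embed the reports (TF-IDF here for simplicity) and project to a 2D latent space
X = TfidfVectorizer().fit_transform(reports)
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# reports with similar wording land near each other on the resulting "map"
print(coords.round(2))
print(cosine_similarity(coords)[0].round(2))  # how close report 0 sits to the rest
```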
loved this incredibly lucid conversation with @meganakpeters.bsky.social and @laurennross.bsky.social about the need for synergy between phil/sci, esp wrt consciousness

🧵of my highlights
Why Science and Philosophy Need Each Other | Lauren Ross & Megan Peters
Spotify video
open.spotify.com
October 24, 2025 at 12:18 PM
Reposted by Shawn Manuel
LLMs have convincingly demonstrated that coding is the easiest activity, maths is medium hard, and having taste is the hardest
October 15, 2025 at 6:10 AM
some obvious shortcomings lead many to think that LLMs can't be useful thought companions, but those shortcomings are outweighed by the benefit of having an infinite sounding board

the outputs shouldn’t replace your own thoughts, but help to refine them

open.substack.com/pub/shifting...
September 1, 2025 at 6:33 PM
Reposted by Shawn Manuel
How reliable is what an AI says about itself? The answer depends on whether models can introspect. But, if an LLM says its temperature parameter is high (and it is!)….does that mean it’s introspecting? Surprisingly tricky to pin down. Our paper: arxiv.org/abs/2508.14802 (1/n)
August 26, 2025 at 3:00 PM
Reposted by Shawn Manuel
i do think people don't realize that gen AI systems are not introspecting to explain their own behavior. they're giving you output based on their training data, which certainly includes information about how they work, but not why they took certain specific actions
no it wasn’t and no it didn’t
July 23, 2025 at 6:46 AM
Reposted by Shawn Manuel
"Understanding" is a pretty beautiful compound word...in this case "under" has a somewhat archaic meaning of "among" (also still present in "under these circumstances.")

So understanding is "standing among", as in a mind that figuratively shares space with the concepts in question.
July 23, 2025 at 5:58 PM
“if you study glial cells are you an astroscientist?” – @dariusliutas.bsky.social
#neuroskyence
July 20, 2025 at 9:25 PM
a slept-on aspect of making new friends is retelling your stories, getting to know/weave yourself again
July 20, 2025 at 9:22 PM
“the most important words are the ones you understand” – a friend I met in Greece
July 20, 2025 at 9:22 PM
a distinction i miss in french is between “trust” and “confidence”, both of which get folded into “confiance”
July 20, 2025 at 9:19 PM
Reposted by Shawn Manuel
Check out our take on Chain-of-Thought.
I really like this paper as a survey of the current literature on what CoT is, and, more importantly, on what it's not.
It also serves as a cautionary tale about the (apparently quite common) misuse of CoT as an interpretability method.
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their steps (CoT) aren't necessarily revealing their true reasoning. Spoiler: the transparency can be an illusion. (1/9) 🧵
July 1, 2025 at 5:45 PM
Reposted by Shawn Manuel
We (@smfleming.bsky.social, Marion Rouault, @seowxft.bsky.social, and I) have posted a reply osf.io/preprints/ps... to a preprint that recently raised concerns about the validity of associations between mental health and metacognition from online studies. I hope you can take the time to read it.
OSF
osf.io
July 1, 2025 at 8:52 AM
Reposted by Shawn Manuel
Finally a brain study with a network that makes sense to me.
neuroscience of burrito
June 30, 2025 at 11:21 PM
Reposted by Shawn Manuel
Does linguistic diversity when talking about emotions track wellbeing? In a new pre-print with @eriknook.bsky.social, we link emotion vocabularies to mental health in a large real-world psychotherapy dataset. Highlight: therapist emo vocab may help clients get better over time! shorturl.at/TKPN1
Large natural emotion vocabularies are linked with better mental health in psychotherapeutic conversations
Psychotherapy is the most ubiquitous form of mental health treatment and it unfolds predominantly through language. To better understand how this exchange of words bolsters mental health, we tested ho...
shorturl.at
June 26, 2025 at 5:24 PM
Prediction: we will decode birdsong well enough to “communicate” with them during my lifetime. Maybe whales too.
June 26, 2025 at 2:29 PM
Reposted by Shawn Manuel
I put up an old essay I wrote for a history and #philosophy of #science course on PsyArXiv. It's an overview/exploration of psychosomatic syndromes through the last 150 years or so. I think it still holds up even though I wrote it back in undergrad!

doi.org/10.31234/osf...

#psychology
OSF
doi.org
June 25, 2025 at 3:46 PM
A couple of weeks ago I wrapped up my first real foray into analyzing brain data at Brainhack School 2025 🧠💻

I focused on comparing fMRI analysis approaches (univariate, predictive, and representational) on a single subject, using fear as a case study; a rough sketch of the three approaches is included below.

school-brainhack.github.io/project/many...
The Many Faces of Fear: Univariate, Predictive and Representational …
Brainhack School
school-brainhack.github.io
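Since the project compares three analysis styles (univariate, predictive, representational), here's a rough, purely illustrative sketch of the question each one asks, run on simulated patterns rather than real fMRI; the voxel counts, labels, and effect sizes are invented for demonstration and this is not the project's code.

```python
# Illustrative sketch only (not the Brainhack project code): the three analysis
# styles from the project title, run on simulated single-subject trial patterns.
import numpy as np
from scipy import stats
from scipy.spatial.distance import pdist
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
labels = np.repeat(["fear", "neutral"], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[labels == "fear", :50] += 0.8  # a small set of simulated "fear-responsive" voxels

# 1) Univariate: mass-univariate t-test of fear vs neutral at every voxel
t, p = stats.ttest_ind(X[labels == "fear"], X[labels == "neutral"], axis=0)
print("voxels with p < .001 (uncorrected):", int((p < 0.001).sum()))

# 2) Predictive (decoding): can a classifier tell fear from neutral trials?
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
print("cross-validated decoding accuracy:", round(acc, 2))

# 3) Representational (RSA): compare the observed pattern-dissimilarity structure
#    to a model RDM predicting that fear and neutral trials form two clusters
rdm = pdist(X, metric="correlation")
model_rdm = pdist((labels == "fear").astype(float)[:, None], metric="euclidean")
rho, _ = stats.spearmanr(rdm, model_rdm)
print("RSA model fit (Spearman rho):", round(rho, 2))
```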
June 25, 2025 at 1:55 AM
Reposted by Shawn Manuel
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
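To make the "patterns of words, probabilistically" point concrete, here's a deliberately tiny sketch: a bigram model, nothing like a real LLM's architecture, but the probabilistic logic is the same in spirit. The corpus is invented.

```python
# Toy illustration of "mimicking word patterns, probabilistically": a bigram model
# over a tiny made-up corpus. The frequent (correct) continuation dominates the
# sampling distribution simply because it is written down more often.
import random
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . " * 9
    + "the capital of france is lyon . "
).split()

counts = defaultdict(Counter)  # counts[w1][w2] = how often w2 follows w1
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def sample_next(word):
    options, freqs = zip(*counts[word].items())
    return random.choices(options, weights=freqs)[0]

# ~90% "paris", ~10% "lyon": the "fact" is only a frequent pattern, never a belief
print(Counter(sample_next("is") for _ in range(1000)))
```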
June 19, 2025 at 11:21 AM
PSA: You can't use 'ChatGPT' as a blanket term for all models and expect the same response quality.

This applies to everyday use and even more so to academic or scientific contexts.

As a self-described AI power-user, here's my two cents in a quick (6 min) video demonstration.

youtu.be/JVPonpG6hkM
Not all LLMs are trustworthy thought-companions.
YouTube video by Shawn Manuel
youtu.be
June 19, 2025 at 2:38 PM
Reposted by Shawn Manuel
Terrific podcast relevant to our debates here about “What is an emotion?” But in the case of emotion, it’s turned up to 11 b/c (unlike “representation”), everyone alive has intuition and interest about the answers (including the public).

www.thetransmitter.org/brain-inspir...
What do neuroscientists mean by the term representation?
A group of neuroscientists and philosophers discuss the use and misuse of the term “representation” across the cognitive sciences.
www.thetransmitter.org
June 4, 2025 at 11:33 AM
Reposted by Shawn Manuel
"no no, explain to me even dumber" is a funny dynamic with LLMs but it's nothing new for people writing/reading academic papers.

(looking at a paper): "Ugh, can you maybe give me an 'abstract'? Still too long... maybe just 'highlights'...? Maybe a 'public interest statement'...?"
May 23, 2025 at 7:22 PM
drawing out my “phenomenology first” view

to be explanatory and useful, any claim ultimately has to cash out in terms consistent with our individual and collective subjective experiences

🧵
May 13, 2025 at 9:38 PM
Reposted by Shawn Manuel
I've felt for a while that reverse engineering, a mainstream method in cognitive science & AI, is incompatible w computationalism‼️ So I wrote "Modern Alchemy: Neurocognitive Reverse Engineering" w the wonderful Natalia S. & @irisvanrooij.bsky.social to elaborate: philsci-archive.pitt.edu/25289/
1/n
May 13, 2025 at 6:29 PM
Reposted by Shawn Manuel
#INTELLIGENCEARTIFICIELLE | 🧠 Researchers used artificial neural networks to predict certain symptoms related to mental health.

Research led by @shwnmnl.bsky.social, @vincenttd.bsky.social, Jean Gagnon and Frédéric Gosselin.

#SantéMentale #Psychologie
What if AI became a psychological screening tool?
Researchers used artificial neural networks to predict whether people showed symptoms often observed in depression, anxiety, or schizophrenia.
tr.ee
April 29, 2025 at 6:31 PM