Saurabh
@saurabhr.bsky.social
Ph.D. in Psychology | Currently on Job Market | Pursuing Consciousness, Reality Monitoring, World Models, Imagination with my life force. saurabhr.github.io
Pinned
World models are a highly speculative topic in AI as well as in cognitive science. I’m excited to share my manuscript on investigating internal world models using imagination networks in humans and LLMs! 🧵1/n

arxiv.org/abs/2510.04391
Internal World Models as Imagination Networks in Cognitive Agents
What is the computational objective of imagination? While classical interpretations suggest imagination is useful for maximizing rewards, recent findings challenge this view. In this study, we propose...
arxiv.org
Reposted by Saurabh
Check out our toolboxes:

1. #Wave_Space, a modular Python tool for simulation and analysis of Traveling Waves: github.com/DugueLab/Wav...

➡️Related publication: www.jneurosci.org/content/45/3...
GitHub - DugueLab/WaveSpace: Python tools for the simulation and analysis of cortical traveling waves
Python tools for the simulation and analysis of cortical traveling waves - DugueLab/WaveSpace
github.com
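Not the WaveSpace API itself, but a minimal NumPy sketch of the kind of simulation such a toolbox handles: one oscillation swept across a sensor array with a position-dependent phase lag, then recovered as a phase gradient. Sampling rate, frequency, propagation speed, and array geometry below are illustrative assumptions.

# Minimal sketch: simulate a cortical traveling wave across a 1-D sensor array
# (illustrative only; not the WaveSpace API -- sampling rate, frequency, speed,
# and array geometry are assumptions for demonstration).
import numpy as np
from scipy.signal import hilbert

fs = 500.0                      # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)   # 2 s of data
x = np.linspace(0, 0.05, 16)    # 16 channels over 5 cm, assumed
f = 10.0                        # 10 Hz alpha-band oscillation
speed = 0.5                     # propagation speed (m/s), assumed
k = 2 * np.pi * f / speed       # spatial wavenumber (rad/m)

# Each channel sees the same oscillation with a position-dependent phase lag.
signal = np.sin(2 * np.pi * f * t[None, :] - k * x[:, None])

# Recover the phase gradient across channels: a roughly constant, nonzero
# slope of phase vs. position is the signature of a traveling wave.
phase = np.angle(hilbert(signal, axis=1))
slope = np.polyfit(x, np.unwrap(phase[:, 100]), 1)[0]
print(f"phase gradient: {slope:.1f} rad/m (expected -k = {-k:.1f} for a wave moving in +x)")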
September 19, 2025 at 10:14 AM
Reposted by Saurabh
Why is "antitrust" doomed to fail?
Because "collusion" is not an action, it's a state.

Why is "regulation" powerless?
Because "malice" is not an intention, it's an emergence.

Why is "human morality" so powerless?
Because the physics never includes morality.

#Proof_of_Ineffective_Qualia
The Game Theory of How Algorithms Can Drive Up Prices | Quanta Magazine
Recent findings reveal that even simple pricing algorithms can make things more expensive.
www.quantamagazine.org
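The mechanism the Quanta piece describes can be reproduced in a toy model: two independent Q-learners pricing against each other in a repeated duopoly, with no communication at all, often settle above the competitive price. The demand function, price grid, and learning rates below are illustrative assumptions, not taken from the article or any specific paper, and outcomes vary by seed and parameters.

# Toy sketch of algorithmic "collusion without communication":
# two independent Q-learners set prices in a repeated duopoly.
# Payoffs, price grid, and learning rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(1.0, 2.0, 6)      # allowed price levels; marginal cost = 1.0
n = len(prices)

def profit(p_own, p_other):
    # Logit-style demand split: the lower price wins more market share.
    share = np.exp(-3 * p_own) / (np.exp(-3 * p_own) + np.exp(-3 * p_other))
    return (p_own - 1.0) * share

# State = the rival's last price index; action = own price index.
Q = [np.zeros((n, n)), np.zeros((n, n))]
last = [0, 0]
alpha, gamma, eps = 0.1, 0.95, 0.1

for step in range(100_000):
    acts = []
    for i in (0, 1):
        s = last[1 - i]
        a = rng.integers(n) if rng.random() < eps else int(Q[i][s].argmax())
        acts.append(a)
    for i in (0, 1):
        s, a = last[1 - i], acts[i]
        r = profit(prices[a], prices[acts[1 - i]])
        s_next = acts[1 - i]
        Q[i][s, a] += alpha * (r + gamma * Q[i][s_next].max() - Q[i][s, a])
    last = acts

# Runs of this kind frequently end above cost, i.e. supracompetitive pricing.
print("final prices:", prices[last[0]], prices[last[1]], "| competitive benchmark ~ cost = 1.0")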
October 23, 2025 at 3:48 AM
AI can now read words from images. This is different from contrastive learning, where image tokens are learned alongside text tokens; here the model takes the image and reads the text directly. Definitely something for subitizing, peripheral vision, and crowding studies in AI perceptual science.

youtu.be/YEZHU4LSUfU?...
DeepSeek OCR - More than OCR
YouTube video by Sam Witteveen
youtu.be
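For anyone wanting to poke at this themselves, a generic image-to-text sketch with Hugging Face transformers. The TrOCR model name and the input file are placeholders I'm assuming for illustration; DeepSeek-OCR itself may need its own loading code, which isn't shown here.

# Minimal sketch of "the model reads text directly from pixels":
# a generic Hugging Face image-to-text pipeline. Model name and input
# image path are assumptions for illustration.
from transformers import pipeline

reader = pipeline("image-to-text", model="microsoft/trocr-base-printed")

# A text-line image goes in, a transcription comes out -- no text tokens
# paired with the image at inference time, unlike CLIP-style contrastive
# training.
result = reader("text_line.png")   # path to your own image (assumed)
print(result[0]["generated_text"])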
October 23, 2025 at 4:08 AM
Reposted by Saurabh
Here is a story about an AAAI paper that got 5 human reviews and 1 AI review. The AI-generated review contained a page-long counterexample to a proof in the paper. The AI's counterexample contained hallucinated errors. How does one rebut an AI reviewer? www.linkedin.com/posts/omerbp...
#aaai26 | Omer Ben-Porat | 13 comments
#AAAI26 This year, in addition to five human reviewers, we also received an AI review. The humans sort of liked the paper, so there's hope. The AI, however, took a bolder approach: it confidently d...
www.linkedin.com
October 16, 2025 at 1:19 AM
Reposted by Saurabh
when ppl say 'I didn't think about the color of the ball', did they

(1) create a full, perceptual-like mental image and then forget (or encode) the color, or

(2) really just not think of the color to begin with?

(these options showed up decades ago, but weren't studied empirically)
October 14, 2025 at 1:22 PM
Reposted by Saurabh
New preprint!

"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"

(by Li, Hammond, & me)

link: doi.org/10.31234/osf...

-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary
October 14, 2025 at 1:22 PM
Reposted by Saurabh
New paper in Imaging Neuroscience by Viviana Greco, Penelope A. Lewis, et al:

Disarming emotional memories using targeted memory reactivation during rapid eye movement sleep

doi.org/10.1162/IMAG...
October 12, 2025 at 1:05 AM
Reposted by Saurabh
Consciousness science as a marketplace of rationalizations

my commentary on @smfleming.bsky.social and @matthiasmichel.bsky.social's thought-provoking BBS paper, and more generally about the field.

osf.io/preprints/ps...
OSF
osf.io
October 10, 2025 at 6:05 PM
Reposted by Saurabh
This paper shows that you can predict actual purchase intent (90% accuracy) by asking an off-the-shelf LLM to impersonate a customer with a demographic profile, giving it a product image & having it give its impressions, which another AI rates.

No fine-tuning or training & beats classic ML methods.
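Roughly, the pipeline described above looks like the sketch below: persona prompt, then a first-person impression, then a second model scores purchase intent. The chat(...) helper is hypothetical (plug in your own LLM client), and the prompts and 1-5 scale are my illustrative assumptions, not the paper's.

# Sketch of the persona-prompting pattern described above. The helper
# `chat(model, messages, image=None)` is hypothetical -- substitute your
# own LLM client. Prompts and the 1-5 scale are illustrative assumptions.
def simulate_purchase_intent(profile: dict, product_image_path: str, chat) -> float:
    persona_prompt = (
        f"You are a {profile['age']}-year-old {profile['occupation']} "
        f"living in {profile['country']} with a {profile['income']} income. "
        "Look at this product and describe, in first person, your honest "
        "impressions and whether you would consider buying it."
    )
    # First model role-plays the customer and reacts to the product image.
    impression = chat(
        model="vision-capable-llm",           # assumed model identifier
        messages=[{"role": "user", "content": persona_prompt}],
        image=product_image_path,
    )

    # Second model rates the expressed purchase intent on a numeric scale.
    rating_prompt = (
        "On a scale from 1 (would never buy) to 5 (would definitely buy), "
        "rate the purchase intent expressed in the following customer "
        f"impression. Reply with a single number.\n\n{impression}"
    )
    score = chat(model="text-llm", messages=[{"role": "user", "content": rating_prompt}])
    return float(score.strip())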
October 10, 2025 at 2:32 PM
Reposted by Saurabh
Very glad to see that someone is doing the important work of aligning AI alignment. alignmentalignment.ai
Center for the Alignment of AI Alignment Centers
We align the aligners
alignmentalignment.ai
September 11, 2025 at 1:04 PM
Seeing all the consciousness and non-humans/AI (NhA) theories coming up, I feel like we'll eventually reach a point of a "Consciousness Pascal's Wager".

1/n
October 10, 2025 at 5:07 AM
A fun backstory for my paper: a Blade Runner-style test to distinguish humans from AI using only language. We used network science to probe their imagination and internal world models. This pic is from our first LLM tests.

#AI #BladeRunner #NetworkScience #NLP
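For a flavor of the network-science angle, a hedged toy sketch: build a graph over concepts that were imagined together, then compare graph statistics across agents. The toy responses and the co-occurrence rule are illustrative assumptions, not the paper's actual stimuli or construction method.

# Toy sketch: "imagination networks" as co-occurrence graphs over imagined
# concepts, compared across agents via graph statistics. Data and the
# construction rule are assumptions for illustration.
from itertools import combinations
import networkx as nx

def imagination_network(responses):
    """responses: list of concept lists, e.g. one list per imagination prompt."""
    G = nx.Graph()
    for concepts in responses:
        # Connect concepts that were imagined together in the same response.
        G.add_edges_from(combinations(set(concepts), 2))
    return G

human_like = [["beach", "sun", "waves"], ["sun", "sand", "towel"], ["waves", "boat"]]
llm_like = [["beach", "ocean", "sun"], ["ocean", "boat", "fish"], ["sun", "sky"]]

for name, resp in [("human-like", human_like), ("LLM-like", llm_like)]:
    G = imagination_network(resp)
    print(name, "nodes:", G.number_of_nodes(),
          "avg clustering:", round(nx.average_clustering(G), 2))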
October 10, 2025 at 12:13 AM
Reposted by Saurabh
Imagine a brain decoding algorithm that could generalize across different subjects and tasks. Today, we’re one step closer to achieving that vision.

Introducing the flagship paper of our brain decoding program: www.biorxiv.org/content/10.1...
#neuroAI #compneuro @utoronto.ca @uhn.ca
October 7, 2025 at 12:53 PM
Reposted by Saurabh
Are you an early career scholar interested in learning more about peer review?

Join us for our virtual @reviewerzero.bsky.social workshop! We will help you understand how peer review works and give advice on responding to reviewer comments.

9-10:30am PT / 12-1:30pm ET on October 30th. Register👇🏼
Welcome! You are invited to join a meeting: Peer Review 101. After registering, you will receive a confirmation email about joining the meeting.
northwestern.zoom.us
October 2, 2025 at 6:40 PM
World models are a highly speculative topic in AI as well as in cognitive science. I’m excited to share my manuscript on investigating internal world models using imagination networks in humans and LLMs! 🧵1/n

arxiv.org/abs/2510.04391
Internal World Models as Imagination Networks in Cognitive Agents
What is the computational objective of imagination? While classical interpretations suggest imagination is useful for maximizing rewards, recent findings challenge this view. In this study, we propose...
arxiv.org
October 7, 2025 at 2:01 PM
Reposted by Saurabh
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...
arxiv.org
October 1, 2025 at 1:27 AM
Reposted by Saurabh
Long time in the making: our preprint of a survey study on the diversity in how people seem to experience #mentalimagery. Suggests #aphantasia should be redefined as the absence of depictive thought, not merely "not seeing". Some more take-home messages:
#psychskysci #neuroscience

doi.org/10.1101/2025...
October 2, 2025 at 6:10 PM
Reposted by Saurabh
Happy to share @c-carrez-corral.bsky.social's 1st PhD paper with Pauline Rossel & Carole Peyrin, which is now online at link.springer.com/article/10.3... in Attention, Perception & Psychophysics! Thread below👇
Effects of predictions robustness and object-based predictions on subjective visual perception - Attention, Perception, & Psychophysics
Learned regularities about contextual associations between objects and scenes allow us to form predictions about the likely features of the environment, facilitating perception of noisy visual inputs....
link.springer.com
August 28, 2025 at 3:01 PM
Reposted by Saurabh
Tension shapes memory: Computational insights into neural plasticity https://www.biorxiv.org/content/10.1101/2025.08.20.671220v1
August 24, 2025 at 7:15 AM
Reposted by Saurabh
While we're on the subject of coffee, one of the espresso influencer gearheads posted this informative video about why different espresso drinks are called what they're called
August 23, 2025 at 8:19 PM
Reposted by Saurabh
White text on white background instructing LLMs to give positive reviews is apparently now common enough to show up in searches for boilerplate text.
"in 2025 we will have flying cars" 😂😂😂
July 5, 2025 at 7:51 PM
Reposted by Saurabh
Emotion, sensory sensitivity, and metacognition in multisensory integration: evidence from the Sound-Induced Flash Illusion: https://doi.org/10.31234/osf.io/vwg7r_v1
August 18, 2025 at 3:17 PM
Reposted by Saurabh
"The question of whether machines can think... is about as relevant as the question of whether submarines can swim."

-Edsger Dijkstra in 1984, still correct

for my computer science and gamedev people: yep, he's the pathfinding Dijkstra, whose algorithm A* is a heuristic optimization of, and we're still using it
E.W. Dijkstra Archive: The threats to computing science (EWD898)
www.cs.utexas.edu
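Since the A* point comes up a lot, a minimal sketch of the relationship: A* is Dijkstra's algorithm plus an admissible heuristic, and with h = 0 it reduces to Dijkstra. The 4-connected grid and Manhattan heuristic are the usual textbook choices, nothing from the EWD note.

# A* on a 4-connected grid; with the default zero heuristic it *is* Dijkstra.
import heapq

def a_star(grid, start, goal, h=lambda n, goal: 0):
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start, goal), 0, start)]      # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                              # cheapest path cost found
        if g > best_g.get(node, float("inf")):
            continue                              # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None

manhattan = lambda n, goal: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))             # h = 0: plain Dijkstra
print(a_star(grid, (0, 0), (2, 0), manhattan))  # same cost, typically fewer expansions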
August 15, 2025 at 10:47 PM