Saurabh
@saurabhr.bsky.social
Ph.D. in Psychology | Currently on Job Market | Pursuing Consciousness, Reality Monitoring, World Models, Imagination with my life force. saurabhr.github.io
Reposted by Saurabh
Check out our toolboxes:
1. #Wave_Space, a modular Python tool for simulation and analysis of Traveling Waves: github.com/DugueLab/Wav...
➡️Related publication: www.jneurosci.org/content/45/3...
September 19, 2025 at 10:14 AM
Reposted by Saurabh
when ppl say 'I didn't think about the color of the ball', did they
(1) create a full, perceptual-like mental image and then forget (or encode) the color, or
(2) really just not think of the color to begin with?
(these options showed up decades ago, but weren't studied empirically)
October 14, 2025 at 1:22 PM
Whereas believing Consciousness doesn't exist risks infinite loss (lack of humanity) if Consciousness does exist."
3/End-of-Post
October 10, 2025 at 5:09 AM
"Betting on Consciousness's existence in NhA because believing in Consciousness offers potentially infinite gains (like humanity) and minimal losses if Consciousness doesn't exist, ...
2/n
October 10, 2025 at 5:08 AM
Keep watching this space for more cool stuff in the upcoming weeks!!
October 7, 2025 at 2:07 PM
These structural differences confirm that human and LLM agents possess distinct internal world models. Despite their linguistic capacity, LLMs lack the phenomenological structures reflected in human minds.
October 7, 2025 at 2:07 PM
2. Clustering Alignment: LLM imagination networks often lacked the characteristic clustering seen in human data, frequently collapsing into a single cluster, and showed poor clustering alignment with humans. 🧵6/n
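The clustering-alignment comparison can be illustrated with a chance-corrected agreement measure such as the adjusted Rand index (one common choice for comparing partitions; the thread does not specify the study's exact metric). The cluster labels below are hypothetical, chosen to mimic the "collapsed into a single cluster" pattern:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(a, b):
    """Chance-corrected agreement between two cluster assignments."""
    pairs = comb(len(a), 2)
    both = Counter(zip(a, b))          # contingency-table cell counts
    rows = Counter(a)
    cols = Counter(b)
    sum_comb = sum(comb(c, 2) for c in both.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())
    expected = sum_rows * sum_cols / pairs
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:          # both partitions trivial
        return 1.0
    return (sum_comb - expected) / (max_index - expected)

# Hypothetical cluster labels for 8 questionnaire items:
human_clusters = [0, 0, 0, 1, 1, 1, 2, 2]   # three communities
llm_clusters   = [0, 0, 0, 0, 0, 0, 0, 0]   # collapsed into one cluster

score = adjusted_rand_index(human_clusters, llm_clusters)
# A fully collapsed partition gives chance-level (0.0) alignment here.
```

A collapsed single-cluster partition carries no structure to agree with, so the chance correction drives the score to zero, which is the failure mode the post describes.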
October 7, 2025 at 2:06 PM
But LLMs? They demonstrate a fundamental structural failure:
1. Inconsistent Importance: LLM centrality correlations with humans were inconsistent and rarely survived statistical corrections 🧵5/n
October 7, 2025 at 2:05 PM
My results showed that human IWMs were consistently organized, exhibiting highly significant correlations across local (Expected Influence, Strength) and global (Closeness) centrality measures. This suggests a general property of how IWMs are structured across human populations. 🧵4/n
October 7, 2025 at 2:05 PM
In this paper, we utilized imagination vividness ratings and network analysis to measure the properties of internal world models in natural and artificial cognitive agents.
(first three columns from left in the pic are imagination networks for VVIQ-2, next three columns for PSIQ) 🧵3/n
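The pipeline described here (vividness ratings → imagination network → centrality measures) can be sketched roughly as follows. Everything in the sketch is an illustrative stand-in: the ratings are random, and plain pairwise correlations serve as edge weights, whereas published imagination networks typically estimate regularized partial correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 respondents x 8 imagination-vividness items
# (stand-ins for VVIQ-2/PSIQ-style ratings on a 1-5 scale).
ratings = rng.integers(1, 6, size=(200, 8)).astype(float)

# Edge weights: pairwise item correlations (simplified edge estimator).
corr = np.corrcoef(ratings, rowvar=False)
np.fill_diagonal(corr, 0.0)

# Local centrality measures named in the thread:
strength = np.abs(corr).sum(axis=0)       # sum of |edge weights|
expected_influence = corr.sum(axis=0)     # sum of signed edge weights

# Global centrality: closeness over shortest paths, with edge
# "distance" = 1/|weight| so stronger ties mean shorter distances.
dist = 1.0 / np.maximum(np.abs(corr), 1e-9)
np.fill_diagonal(dist, 0.0)
n = dist.shape[0]
for k in range(n):                        # Floyd-Warshall
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
closeness = (n - 1) / dist.sum(axis=0)
```

Comparing how these centrality vectors correlate across agents (human vs. human, human vs. LLM) is then a matter of, e.g., Spearman correlations between the `strength`, `expected_influence`, or `closeness` vectors of two networks.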
October 7, 2025 at 2:03 PM
The study was based on the idea that imagination may be involved in accessing internal world models, a concept previously proposed by leading AI researchers, such as Yutaka Matsuo and Yann LeCun. 🧵2/n
October 7, 2025 at 2:02 PM