johnwkrakauer
@johnwkrakauer.bsky.social
Reposted by johnwkrakauer
New Pre-Print:
www.biorxiv.org/cgi/content/...
We’re all familiar with having to practice a new skill to get better at it, but what really happens during practice? The answer, I propose, is reinforcement learning - specifically policy-gradient reinforcement learning.
Overview 🧵 below...
Policy-Gradient Reinforcement Learning as a General Theory of Practice-Based Motor Skill Learning
Mastering any new skill requires extensive practice, but the computational principles underlying this learning are not clearly understood. Existing theories of motor learning can explain short-term ad...
www.biorxiv.org
October 20, 2025 at 2:58 PM
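For context on the term: policy-gradient reinforcement learning nudges the parameters of a stochastic action policy in whatever direction makes better-than-average outcomes more likely, so repeated noisy attempts gradually improve. The Python sketch below is only a minimal illustration of that idea, not anything taken from the preprint; the one-dimensional "aiming" task, the Gaussian policy, and every parameter value are assumptions chosen for brevity.

import numpy as np

# Minimal REINFORCE-style policy-gradient sketch (illustrative assumptions only).
rng = np.random.default_rng(0)

target = 0.7           # hypothetical goal the learner is practicing toward
mu, sigma = 0.0, 0.3   # Gaussian policy: action ~ N(mu, sigma); mu is the learned parameter
alpha = 0.05           # learning rate
baseline = 0.0         # running average reward, used to reduce gradient variance

for trial in range(2000):
    action = rng.normal(mu, sigma)        # exploratory "movement" on this trial
    reward = -(action - target) ** 2      # outcomes closer to the target score higher
    # Score-function (likelihood-ratio) gradient: d log N(a; mu, sigma) / d mu = (a - mu) / sigma**2
    grad_log_pi = (action - mu) / sigma ** 2
    mu += alpha * (reward - baseline) * grad_log_pi   # reinforce actions that beat the baseline
    baseline += 0.01 * (reward - baseline)            # slowly track average reward

print(f"learned mu = {mu:.3f}  (target = {target})")

Run over many trials, the policy mean drifts toward the rewarded target, which is the sense in which practice gradually reinforces successful movement variations under this kind of update.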
New preprint written with the wonderful philosopher William Ramsey: Mental Representation Without Neural Representation: Understanding The Evidence osf.io/preprints/ps...
OSF
osf.io
October 18, 2025 at 8:18 AM
First shot across the bow from an ongoing project with Jake.
New publication forthcoming in BBS, co-authored with John Krakauer: a commentary on @smfleming.bsky.social & @matthiasmichel.bsky.social's groundbreaking target article.
We critique widespread assumptions in cognitive neuroscience about the role of internal models in implicit cognition. (1/7)
September 22, 2025 at 7:57 PM
Reposted by johnwkrakauer
@benhayden.bsky.social
@tyrellturing.bsky.social
@jmgrohneuro.bsky.social
@pessoabrain.bsky.social
I see a lot of talk on here about how we should avoid "x does y" talk because the brain is "a dynamic, reverberant, reciprocally interconnected system". But this does not follow.
A thread...
September 5, 2025 at 9:57 PM
Excited to share this new work:
A spinal origin for the obligate flexor synergy in the non-human primate: Implications for control of reaching https://www.biorxiv.org/content/10.1101/2025.07.28.666086v1
August 2, 2025 at 7:50 AM
Reposted by johnwkrakauer
Terrific podcast relevant to our debates here about “What is an emotion?” But in the case of emotion, it’s turned up to 11 because (unlike “representation”) everyone alive has intuitions about, and an interest in, the answers (including the public).
www.thetransmitter.org/brain-inspir...
What do neuroscientists mean by the term representation?
A group of neuroscientists and philosophers discuss the use and misuse of the term “representation” across the cognitive sciences.
www.thetransmitter.org
June 4, 2025 at 11:33 AM
Reposted by johnwkrakauer
Great interview with Hasok Chang on 'Epistemic Iteration':
The idea that we often don't start scientific inquiries from a solid foundation: we knowingly start from an imperfect position and use the outcomes to refine and correct the original starting point.
open.spotify.com/episode/6tbT...
Audience Faves: Hasok Chang on 'Epistemic Iteration'
The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science · Episode
open.spotify.com
May 9, 2025 at 3:02 PM
Reposted by johnwkrakauer
...it basically confirmed what is already well-established: LLMs (& LRMs & "LLM agents") have trouble w/ problems that require many steps of reasoning/planning.
See, e.g., lots of recent papers by Subbarao Kambhampati's group at ASU. (2/2)
June 9, 2025 at 10:53 PM
It was fun working on this with David and Melanie.
New paper: "Large Language Models and Emergence: A Complex Systems Perspective" (D. Krakauer, J. Krakauer, M. Mitchell).
We look at claims of "emergent capabilities" & "emergent intelligence" in LLMs from the perspective of what emergence means in complexity science.
arxiv.org/pdf/2506.11135
arxiv.org
June 19, 2025 at 8:22 AM