Jake Browning
@jake-browning.bsky.social
Philosophy of AI and Mind, but with a historical bent. Baruch College.
My dog is better than your dog.
https://www.jacob-browning.com/
Reposted by Jake Browning
October 31, 2025 at 4:22 PM
Reposted by Jake Browning
favorite thing I've written
October 27, 2025 at 6:33 PM
Reposted by Jake Browning
“The deskilling, denigration, and displacement of teachers and scholars have historically been central to fascist takeovers, since educators serve as bulwarks against propaganda, anti-intellectualism, and illiteracy.” — @olivia.science

www.project-syndicate.org/commentary/a...
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
October 17, 2025 at 10:59 PM
I know someone accused JJ Gibson of relying on "magical tissue" to explain how vision worked. Who was that, and where did they say it?
October 24, 2025 at 1:42 PM
Reposted by Jake Browning
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory. www.bbc.co.uk/mediacentre/...
An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC
www.bbc.co.uk
October 23, 2025 at 5:17 PM
Reposted by Jake Browning
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
www.sciencedirect.com
October 21, 2025 at 8:24 PM
Reposted by Jake Browning
Evidence that even when LLMs produce similar results to humans, they “rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification”
PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
www.pnas.org
October 17, 2025 at 7:40 AM
Reposted by Jake Browning
Happy to share that our BBS target article has been accepted: “Core Perception”: Re-imagining Precocious Reasoning as Sophisticated Perceiving
With Alon Hafri, @veroniqueizard.bsky.social, @chazfirestone.bsky.social & Brent Strickland
Read it here: doi.org/10.1017/S014...
A short thread [1/5]👇
October 9, 2025 at 3:51 PM
Reposted by Jake Browning
This is a big one! A 4-year writing project over many timezones, arguing for a reimagining of the influential "core knowledge" thesis.

Led by @daweibai.bsky.social, we argue that much of our innate knowledge of the world is not "conceptual" in nature, but rather wired into perceptual processing. 👇
October 9, 2025 at 4:31 PM
Reposted by Jake Browning
Do AI reasoning models abstract and reason like humans?

New paper on this from my group:

arxiv.org/abs/2510.02125

🧵 1/10
Do AI Models Perform Human-like Abstract Reasoning Across Modalities?
OpenAI's o3-preview reasoning model exceeded human accuracy on the ARC-AGI benchmark, but does that mean state-of-the-art models recognize and reason with the abstractions that the task creators inten...
arxiv.org
October 6, 2025 at 9:27 PM
Reposted by Jake Browning
Had missed this absolutely brilliant paper. They take a widely used social media addiction scale & replace 'social media' with 'friends'. The resulting scale has great psychometric properties & 69% of people have friend addictions.

link.springer.com/article/10.3...
Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? - Behavior Research Methods
A growing number of self-report measures aim to define interactions with social media in a pathological behavior framework, often using terminology focused on identifying those who are ‘addicted’ to engaging with others online. Specifically, measures of ‘social media addiction’ focus on motivations for online social information seeking, which could relate to motivations for offline social information seeking. However, it could be the case that these same measures could reveal a pattern of friend addiction in general. This study develops the Offline-Friend Addiction Questionnaire (O-FAQ) by re-wording items from highly cited pathological social media use scales to reflect “spending time with friends”. Our methodology for validation follows the current literature precedent in the development of social media ‘addiction’ scales. The O-FAQ had a three-factor solution in an exploratory sample of N = 807 and these factors were stable in a 4-week retest (r = .72 to .86) and was validated against personality traits, and risk-taking behavior, in conceptually plausible directions. Using the same polythetic classification techniques as pathological social media use studies, we were able to classify 69% of our sample as addicted to spending time with their friends. The discussion of our satirical research is a critical reflection on the role of measurement and human sociality in social media research. We question the extent to which connecting with others can be considered an ‘addiction’ and discuss issues concerning the validation of new ‘addiction’ measures without relevant medical constructs. Readers should approach our measure with a level of skepticism that should be afforded to current social media addiction measures.
link.springer.com
October 1, 2025 at 11:33 AM
Reposted by Jake Browning
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...
arxiv.org
October 1, 2025 at 1:27 AM
Is there a book review of the recent Laurence and Margolis "Building Blocks of Thought"? Follow up: is anybody publishing Phil of mind reviews these days? I don't recall seeing any for Buckner, Burge or Shea, either.
September 27, 2025 at 12:39 PM
Reposted by Jake Browning
ChatGPT is surprisingly bad at generating / explaining garden path sentences, and my students, who had a garden path question on their homework, will soon find that out 😅
September 24, 2025 at 9:52 PM
Reposted by Jake Browning
Neurocognitive Foundations of Mind is out. Check it out.
September 19, 2025 at 12:56 PM
Dear universe,

I've got great karma as a reviewer. I'd appreciate it if you'd reward me in this life.

Thanks.
September 18, 2025 at 9:22 PM
Reposted by Jake Browning
I wrote a response to Thomas Friedman's "magical thinking" on AI here: aiguide.substack.com/p/magical-th...
September 15, 2025 at 4:27 PM
Reposted by Jake Browning
Our entry (with @ericman.bsky.social ) for the Open Encyclopedia of Cognitive Science, “The Language of Thought Hypothesis”, is now out.

doi.org/10.21428/e27...
The Language of Thought Hypothesis
doi.org
September 12, 2025 at 7:06 PM
Reposted by Jake Browning
the results so far on congestion pricing in new york have been so outrageously good that opposition to it works as a convenient identifier of unserious buffoons www.reuters.com/world/us/new...
September 9, 2025 at 6:33 PM
Reposted by Jake Browning
Just reading the Friedman AI articles in the NYT. There is *a lot* of magical thinking in them. E.g.:

"We discovered in the early 2020s that if you built a neural network big enough, combined it with strong enough A.I. software and enough electricity, A.I. would just emerge." (1/2)
September 8, 2025 at 11:53 PM
Reposted by Jake Browning
The real philosophy journal scandal goes back to 1921 when The Journal of Philosophy, Psychology, and Scientific Methods shortened its name to The Journal of Philosophy so it was "more convenient for citation," which of course made it easier for Mind to drop psychology from its subtitle in 1974.
September 8, 2025 at 8:13 PM
Reposted by Jake Browning
They’re even censoring Banksy’s image of a protester!
www.theguardian.com/artanddesign...
Court staff cover up Banksy image of judge beating a protester
Artist’s latest work at Royal Courts of Justice in London is thought to refer to pro-Palestine demonstrations
www.theguardian.com
September 8, 2025 at 2:47 PM
Reposted by Jake Browning
Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
September 8, 2025 at 1:59 PM