Vincent Carchidi
@vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Pinned
Sharing a new preprint on AI and philosophy of mind/cogsci.

I make the case that human beings exhibit a species-specific form of intellectual freedom, expressible through natural language, and that this is likely an unreachable threshold for computational systems.

philpapers.org/rec/CARCBC
Vincent Carchidi, Computational Brain, Creative Mind: Intellectual Freedom as the Upper Limit on Artificial Intelligence - PhilPapers
Some generative linguists have long maintained that human beings exhibit a species-specific form of intellectual freedom expressible through natural language. With roots in Descartes’ effort to distin...
Appreciate the general idea in this thread, but I think this answers the original question about why some people can't/don't use them appropriately: the thrust of the current agent craze is that the models are...agents. They shouldn't need handholding. Why learn when you can automate?
Ed @ed3d.net · 12h
What's frustrating (and @golikehellmachine.com has ranted about this to me before) is that Anthropic etc. seem largely uninterested in teaching their users this stuff. The theory seems to be "we'll just make it smarter so you don't have to know how to do that".

It's uh. Not working so well!
November 13, 2025 at 6:31 PM
I accepted some time ago that pretty much anyone I've been influenced by intellectually who was born before a certain year will have made...questionable decisions.
Jeffrey Epstein was developing a series, moderated by @lkrauss1.bsky.social, to bring scientists and celebrities together. The first season would include an episode where "Woody Allen talks about the human condition with Linguist Noam Chomsky."

drive.google.com/file/d/14Sla...
HOUSE_OVERSIGHT_023123.txt
November 13, 2025 at 1:20 AM
There's also the intellectual autonomy issue here, which I repeatedly bring up: so long as the quality of LLMs' output is dependent on the competencies of the person prompting them, it's difficult to say the LLM has mastered the skills necessary to produce that output in any sense relevant to us.
LLMs' essays are almost always impressive in the "it's amazing what they can produce" sense, rather than the "making conceptual progress" sense.

This can quickly turn into a can of worms that I don't want to open, because many would likely say they do this already (or that humans don't do this).
November 12, 2025 at 4:02 PM
I think it's true the paradox is less useful/clean cut than it used to be.

I've been thinking recently that it also started getting misapplied in the 2023-present period.

(gyges may not agree with this part, just my hot take)
i honestly think moravec's paradox has broken. like the boundary between easy and hard is this fucked up fractal thing and we're sort of just sitting on it. we can do some things and not others and no simple heuristic tells you which ones
it would be cool to have humanoid robotics. we are still trying to deal with moravec's paradox in robotics
November 12, 2025 at 3:58 PM
Reminded of some commentary from the pre-ChatGPT era because of the LeCun news.

AI Winters are seen as being in the rearview mirror; deep learning might be capturing general principles of intelligence, but it's useful enough either way to put those concerns aside.

ojs.aaai.org/aimagazine/i...
The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture | AI Magazine
November 11, 2025 at 6:07 PM
Only just got around to this, which turned out to be interesting in a different way than I expected. It's not a defense of a specific AI nativist research program so much as an argument that there *should be* an AI nativist program comparable to ML empiricism.

philarchive.org/rec/KARAET-4
Brett Karlan, AI empiricism: the only game in town? - PhilArchive
I offer an epistemic argument against the dominance of empiricism and empiricist-inspired methods in contemporary machine learning (ML) research. I first establish, as many ML researchers and philosop...
November 11, 2025 at 1:32 AM
This is an interesting read, and seems like a well-motivated project.

"Small open high quality sources are increasingly more valuable than large data collections of questionable provenance."
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH, and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 10:02 PM
One of the points that's been running through everything I've written on tech policy over the past few years is: if you want to take full advantage of LLMs, the best thing to do is calm down about them. Focus on what needs to be possible relative to what is possible. Allocate resources to that end.
I think you're exactly right that much of this is fear-based, which leads to all sorts of ridiculous denials/covers.

That said, I really wouldn't discount the general feeling of "why the hell do I have to pretend like LLMs are everything?"
November 9, 2025 at 4:20 PM
Reposted by Vincent Carchidi
A good government builds and funds great infrastructure to allow private businesses to thrive. This was the entire point of Obama’s “you didn’t build that” speech. The private sector forgets and/or denies this at their peril.
So it turns out... the US air travel system was incredibly, deeply dependent on federal funding to just run day-to-day all this time, to the benefit of private airline shareholders, when everyone thinks that state-run trains are leeching off the government. Weird!
November 9, 2025 at 3:04 PM
It should be clear that I find transformers incredible, and I'm interested in the tech for all sorts of reasons, and think finding productive uses is valuable.

But I worry about the number of people who swear by their reliance on it for self-enhancement, apparently unaware of its impacts on them.
Recently met someone who was very surprised to hear I didn’t use ChatGPT or any LLM.

I said my job depended on doing distinctive work. That was my selling point. If I started to sound like ChatGPT and turn out what it did, then how on earth could I justify doing it? What would that make me?
November 8, 2025 at 4:01 PM
Reposted by Vincent Carchidi
Words to live by tbh
November 7, 2025 at 4:26 PM
Okay, so this matches my experience in finding it very obvious when GenAI has been used without disclosure to produce written work, of any kind.

But it does raise some interesting questions about why it seems like models are never actually "saying" anything.
like, I'm going to pop three examples here, two written by people and one AI slop, and I will bet everyone can instantly tell which is which.
November 7, 2025 at 3:59 PM
Recruiters will scan your face before reading mandatory cover letters
November 6, 2025 at 5:32 PM
Reposted by Vincent Carchidi
In the long-running Nativism-Empiricism debate, have the impressive successes of AI based on blank slate-ish connectionist architectures dealt a knock-out blow for Empiricism? Is it game over for Nativism? @oldjerryfodor.bsky.social gently pushes back 🧪 philpapers.org/archive/KARA...
November 5, 2025 at 9:18 PM
I'm not someone who loves networking, but since it's so important for some fields, I wholeheartedly recommend against using ChatGPT to write your networking emails...

A student sent me this. Verified edu email. Personal info blocked obviously.
November 5, 2025 at 3:06 PM
People don't understand that it's time for pure understanding concretized in software
November 4, 2025 at 7:24 PM
Negative polarization is really messing with people. You wanna talk about bubble risks, I'm with you, but this article reads like propaganda.
November 4, 2025 at 5:08 PM
My polling place here in Philly was unusually crowded today for an off-year election, FWIW
November 4, 2025 at 2:11 PM
🚨The article has switched from "Reviewers Assigned" to "Under Review"🚨

It took 7 months but it's happening.
November 3, 2025 at 4:44 PM
One addendum to this is that changing our definitions of "intelligence" in that analytic context is a *good thing,* provided that our goal is to understand what makes our/animals' behaviors possible. Shifting the goalposts is a sign that our conception of those capacities is (we believe) clearer!
There is no a priori "intelligence" out there in the world waiting to be discovered. There are capacities that enable behaviors. Different species share some and hold others exclusively. How you think about those capacities is a careful intellectual activity, but not determined by external reality.
November 2, 2025 at 9:41 PM
Yes. And you see a lot of arguments like this about AI - if we already have "general intelligence" through SOTA LLMs, then general intelligence ain't the holy grail we thought it was.

I'm sure there's a converse for the antis but I don't feel like both sides-ing rn.
however, this deflates intelligence conceptually and a lot of fields — including AI — will never allow that
November 2, 2025 at 9:05 PM