Vincent Carchidi
vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Good stuff here.

I saw something from Benedict Evans recently where he noticed that the term "AI slop" at some point shifted from its original meaning of "trash output" to "anything automated by an LLM."

It seems like even a barrage of *better* outputs in areas like recruitment becomes slop
November 14, 2025 at 2:39 PM
Very disappointing. But it is sort of funny that Chomsky doesn't miss a chance to dunk on his critics (right after Epstein offers him the use of his NY apt? hard to parse)
November 13, 2025 at 2:15 PM
This also provides what I imagine is still a helpful taxonomy of Neuro-Symbolic AI, and probably relevant to claims that transformer-based LLMs are/are not Neuro-Symbolic.
November 11, 2025 at 6:07 PM
But all of these things probably look "right" if somebody doesn't pay too much attention.

The follow-up email almost certainly was generated as well, when I asked about that report I apparently forgot writing.
November 5, 2025 at 3:06 PM
I'm not someone who loves networking, but since it's so important for some fields, I wholeheartedly recommend against using ChatGPT to write your networking emails...

A student sent me this. Verified edu email. Personal info blocked obviously.
November 5, 2025 at 3:06 PM
People don't understand that it's time for pure understanding concretized in software
November 4, 2025 at 7:24 PM
This is pretty wild. Claude 3.5 Sonnet would round up a whole number 20% of the time (making it incorrect) if that number was closer to a prime.
October 26, 2025 at 1:22 PM
I do get the sense reading this that using AI to try and accelerate/produce pathbreaking research, like this person alludes to, is an admission of sorts that the people involved don't believe human institutions can be improved instead.
October 24, 2025 at 5:24 PM
I thought the name sounded familiar...Roose interviewed the founders back in June. This portion makes sense if you want more funding for your mission but still want investors to believe their money will be worth anything after full automation.

www.nytimes.com/2025/06/11/t...
October 22, 2025 at 5:36 PM
First paragraph is striking, if unsurprising.

www.ft.com/content/7960...
October 19, 2025 at 5:20 PM
Vibes of Fukuyama
October 14, 2025 at 6:05 PM
Thinking about what I was writing in 2022, pre-ChatGPT. In retrospect, I don't think any of this is quite right, but all of it is at least pointing in the right direction.
October 6, 2025 at 5:26 PM
I think this is exactly right.
October 6, 2025 at 1:22 PM
75 years on, the wisdom of Turing's Imitation Game was to lower expectations in understanding (machine) intelligence, a move so wildly successful it convinced many people his test was the highest bar a machine ever had to clear.
October 2, 2025 at 4:49 PM
Two arguments from yours truly on the modularity of human language and the (un)suitability of LLMs as theories of language's cognitive basis. Link below.
October 2, 2025 at 2:15 PM
I.e., the "major shift" would be a shift from relying on the intelligence of the prompter to guide and shape outputs to the model being self-reliant; a kind of intellectual autonomy.

arxiv.org/abs/2308.03598
October 1, 2025 at 10:14 PM
I think this is quite right, and is consistent with the original definition(s) of AGI.
October 1, 2025 at 10:14 PM
Yeah. Worth the read.

www.mcgill.ca/oss/article/...
September 30, 2025 at 11:33 PM
Can't quote the OP.

The whole thread about failed Zitron predictions is interesting, and sort of gets back to my earlier post on video generators: I do think, for a lot of people, technological advancement confers zero sense of improved well-being, and its promotion is interpreted as hostile.
September 29, 2025 at 8:17 PM
I have several sections there arguing that what we might call "autonomous" human behavior has far more to do with a theory of language (and morality) acquisition than is sometimes believed, a point often neglected - unintentionally, I believe - in some uses of computational modeling. (6/8)
September 28, 2025 at 4:56 PM
Similarly, @wiringthebrain.bsky.social had this to say in Free Agents: (4/8)

www.google.com/books/editio...
September 28, 2025 at 4:56 PM
See, for example, this excerpt from Yiu et al. (2023): (3/8)

journals.sagepub.com/doi/10.1177/...
September 28, 2025 at 4:56 PM
Maybe this interaction is just too traumatizing for people
September 27, 2025 at 5:09 PM
Idk why the Sutton interview with Patel is triggering a bunch of comments like this among the AI people still on X. Not a new perspective.

And I mean, it might be true, but this does feel a tad...copey
September 27, 2025 at 5:04 PM
Reasonable take from Karpathy on the job market

x.com/karpathy/sta...
September 26, 2025 at 12:58 PM