Gary Lupyan
@glupyan.bsky.social
I didn't mean that it's easy in an absolute sense (or easy to 'solve' from a kid-sized amount of data), just that LLMs don't seem to get a huge advantage from being trained on text vs. speech. @begus.bsky.social - any thoughts on this?
November 11, 2025 at 2:32 AM
Fair point!
November 10, 2025 at 11:38 PM
Yes, that's my understanding as well. A lot of applied work uses text just because it's so effective, but it's possible to train these systems w/o text ai.meta.com/blog/textles... and there's no in-principle roadblock
November 10, 2025 at 6:28 PM
Learning to tokenize is not hard (e.g. one can start w/ raw audio), but a bigger issue is that LLMs benefit from an already existing language (and its vocabulary) while we had to invent it. Of course babies born into a linguistic world have that same benefit :)
November 10, 2025 at 4:36 PM
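A minimal sketch of the tokenize-from-raw-audio point above, in the spirit of textless NLP: discretize self-supervised speech features into unit IDs that an LM can consume. The encoder, layer choice, cluster count, and filename here are illustrative assumptions, not anything specified in the thread.

```python
# Hedged sketch: raw audio -> discrete "pseudo-text" tokens via
# a self-supervised speech encoder plus k-means quantization.
import torch
import torchaudio
from sklearn.cluster import KMeans

bundle = torchaudio.pipelines.HUBERT_BASE        # pretrained speech encoder
model = bundle.get_model().eval()

waveform, sr = torchaudio.load("utterance.wav")  # hypothetical input file
waveform = waveform.mean(0, keepdim=True)        # force mono, shape (1, T)
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    feats, _ = model.extract_features(waveform)  # per-layer frame features
    units = feats[6].squeeze(0).cpu().numpy()    # one mid layer, (frames, dim)

# Quantize frames into a small discrete vocabulary. A real system would fit
# k-means on a large corpus, not a single utterance.
kmeans = KMeans(n_clusters=100, n_init=10).fit(units)
tokens = kmeans.labels_                          # sequence of unit IDs, LM-ready
print(tokens[:20])
```

The resulting ID sequence plays the role text tokens play for an ordinary LLM, which is why training without text poses no in-principle roadblock.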
Yeah, I wouldn't argue that it's some ultimate benchmark. I suspect that what's behind the goal-shifting is not recognizing that the benchmark is flawed, but fear of giving up yet one more domain of "only humans can do X"
November 8, 2025 at 11:23 PM
Of course the Turing test is also a social cognition test for ppl, and the difficulty of convincing other humans of your humanity seems pretty important too!
November 8, 2025 at 10:30 PM
So the fact that we now have machines that can do it (arxiv.org/abs/2503.23674) seems quite significant for thinking about "Ok, now that we can do this, where does it leave us w.r.t. modeling human-like intelligence". ....
Large Language Models Pass the Turing Test
We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5 minute conversations s...
arxiv.org
November 8, 2025 at 10:29 PM
I think that Turing correctly intuited that to pass, machines would require instantiating an interesting and human-like type of intelligence - in contrast to many early AI folks who thought that if a machine could play great chess, everything else would follow. ...
November 8, 2025 at 10:27 PM
Would love to chat to you and Andy about it sometime!
November 8, 2025 at 9:25 PM
Interesting! I think of LLMs as serving as a critical existence proof of what for decades has been argued to be impossible. They're also by far the most domain-general system ever created. We've blown past the Turing test with a shrug, and I'm of the opinion that this stuff really matters!
November 8, 2025 at 9:25 PM
One question I am asking myself as a reviewer is what hinges on the results coming out one way or another. If the answer is "nothing much" or "I can't even tell", that's a problem.
November 8, 2025 at 7:42 PM
Well-placed and appropriate skepticism! But just consider what we're talking about here: emergent abilities by LLMs to detect extrinsic perturbations to their representations. Wild stuff!
November 6, 2025 at 9:29 PM
most discourse around AI is so so bad 😢 folks seem utterly unable to separate the politics (highly problematic!) from the social impacts (good reason to think net negative in the short term) from the science (utterly transformative. Biggest thing in cogsci probably ever)
November 6, 2025 at 9:24 PM
These approaches to testing self awareness (which is indeed very different from sentience) are fascinating and the results suggest to me no in-principle limitations to self awareness in LLMs. transformer-circuits.pub/2025/introsp...
Emergent Introspective Awareness in Large Language Models
transformer-circuits.pub
November 6, 2025 at 9:20 PM
I agree. In the case of my university the only reason we can hire at all in psych is that there's a campus-wide AI initiative across depts/fields. It's super frustrating.
October 25, 2025 at 6:38 PM
I don’t think the goal is to hire “LLM people” though. It’s to recognize that taking advantage of certain methods can help advance psych knowledge. I am not seeing an appetite in this hiring for people who try to understand LLMs; the focus is on people. But maybe I’m being overly charitable.
October 25, 2025 at 3:29 PM
Is there something wrong with it? There is indeed a lot to be learned by using AI model organisms to advance theories of cognition. To me this just reads as connectionism at scale. Now that the models can do so much, we can do mech interp work that clarifies the space of possible solutions, etc.
October 25, 2025 at 3:22 PM