platypii.bsky.social
@platypii.bsky.social
Should we believe Bender 2020 or Bender 2025? One of them is lying to promote her book.
November 1, 2025 at 5:26 PM
That's weird, just a few years ago you said you parroted it from someone else on Twitter.
November 1, 2025 at 5:14 PM
If you think LLMs don't reason, then either you haven't used them, or your definition of reasoning is so specific to humans that it is useless as a definition.

Just proclaiming that "form is not meaning!!" doesn't make it true.
October 21, 2025 at 9:27 PM
Wait, are you claiming you personally created AI? lolol
October 18, 2025 at 3:34 AM
Sorry that people have access to unlimited education and personal tutoring, even in the most remote locations, for free thanks to ChatGPT. Good luck on your quest to stop it ✊
October 18, 2025 at 3:13 AM
"killed their son" oh please. have you actually read the claims? Chat gpt consistently tried to help the kid. The kid LIED about his motivations in order to get the answer he wanted. From reading the transcripts, chatgpt handled a difficult situation better than 99% of humans would.
October 15, 2025 at 11:34 PM
It literally does interact with the world!

If you can't admit that text, images, tool calls, RL with verifiable rewards, and RLHF are "interacting with the world" then you are just redefining words to suit your whims.

I'm done with you... we'll see in the coming years who's right. 🤷 Good luck.
October 10, 2025 at 9:46 PM
"the hardware would break pretty fast" this is nonsense. what are you even suggesting?
October 10, 2025 at 9:26 PM
Is it an open system or not?

Your initial goalpost: LLMs can't be conscious because they are not in an open system.

I pointed out they do get context from interacting with the world.

New goalpost: fine, it's an open system, but it's not the PHYSICAL world.

You are not a serious person. 🙄
October 10, 2025 at 9:25 PM
Absurd claim. A non-linear system with a trillion parameters, dozens of transformer layers, and dozens of attention heads interacting is not complex?

You know an LLM can simulate a video game, right? So an LLM is MORE complex than a video game AI.
October 10, 2025 at 9:18 PM
Moving the goalposts again, eh?

The model gets context from the user, from verifiable rewards, and from simulation environments. That sounds a lot like an organism interacting with the physical world.
October 10, 2025 at 8:51 PM
But LLMs are NOT a closed system. And you admitted it: reinforcement learning (part of training) interacts with the world (simulated environments and human feedback).

And ALSO at inference it's directly interacting with the world via users and tools.

So how is that not an "open system" again??
October 10, 2025 at 8:48 PM
Training includes post-training stages. Everything that happens in determining the weights is part of the "training" phase, including SFT and RL.

And ALSO there is interaction with the world at inference time. Models use tools in a loop that provide them context about the world.
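
To make "tools in a loop" concrete, here's a minimal sketch of that inference-time loop. The function names (`call_model`, `run_tool`) are hypothetical placeholders, not any real vendor's API; the point is the shape of the loop, where results from outside the weights feed back into the model's context:

```python
# Minimal sketch of an inference-time tool loop. All functions here are
# hypothetical placeholders, not a specific vendor's SDK.

def call_model(messages):
    """Hypothetical LLM call: returns a dict with either a final answer
    ("content") or a requested tool invocation ("tool_call")."""
    raise NotImplementedError  # stand-in for a real model API

def run_tool(name, args):
    """Hypothetical tool dispatch: web search, code execution, file reads..."""
    raise NotImplementedError  # stand-in for real tools

def agent_loop(user_prompt, max_steps=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append(reply)
        if reply.get("tool_call") is None:
            return reply["content"]  # no tool requested: final answer
        result = run_tool(reply["tool_call"]["name"], reply["tool_call"]["args"])
        # Feedback from outside the weights re-enters the model's context:
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```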
October 9, 2025 at 6:40 PM
Ok, you said initial training, fine. But you're the one moving the goalposts. Our entire argument was about whether LLMs could be conscious. You argued that there was no feedback loop with the world. I pointed out that the post-training phase does have one. So how do you reconcile that with your claims?
October 9, 2025 at 6:38 PM
False. Reinforcement learning is a critical part of modern post-training for LLMs and involves a feedback loop between the model and the world. Try again.
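
Here's a minimal sketch of that loop; every function name is a hypothetical placeholder rather than a real library, but the structure is the point: a reward computed outside the model feeds back into the weight updates.

```python
# Minimal sketch of an RL post-training loop with verifiable rewards.
# All function names are hypothetical placeholders, not a real library.

def sample_completion(policy, prompt):
    """Sample a response from the current model (the policy)."""
    raise NotImplementedError

def verify(prompt, completion):
    """The 'world' side of the loop: run unit tests, check a math answer,
    or collect a human preference. Returns a scalar reward."""
    raise NotImplementedError

def update_policy(policy, prompt, completion, reward):
    """Nudge the weights toward higher-reward behavior (e.g. a PPO-style step)."""
    raise NotImplementedError

def rl_post_training(policy, prompts, epochs=1):
    for _ in range(epochs):
        for prompt in prompts:
            completion = sample_completion(policy, prompt)
            reward = verify(prompt, completion)  # feedback from the environment
            update_policy(policy, prompt, completion, reward)
    return policy
```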
October 9, 2025 at 5:45 PM
I'll assume that you are arguing for a non-reductionist view of consciousness... that it depends on the interaction of the brain and the environment.

Well guess what? LLMs interact with the environment too. Differently than humans, but interaction nonetheless. Both during training and inference.
October 9, 2025 at 5:14 PM
A trillion artificial neurons connected together non-linearly by trillions of weights is not complex to you? What??
October 9, 2025 at 4:58 PM
... where else would it emerge from?
October 9, 2025 at 4:55 PM
Oh, so your definition of consciousness requires that it's running on a squishy brain?

I mean, that's a pretty useless definition. Sure, LLMs run on silicon not biology. That proves nothing about what they are capable of. Can an LLM think and solve problems like us? Obviously it can.
October 9, 2025 at 4:55 PM
Neurons are neither deep nor complex, so why would you believe consciousness can emerge in the brain?

Consciousness is an emergent property of a complex system made up from simple parts.

If you think LLMs are inherently more limited than brains you need to argue WHY.
October 9, 2025 at 4:44 PM
I majored in math; use any formal logic system you like: first-order, higher-order, constructivist.

I would LOVE to see your argument for why "LLMs cannot be conscious" follows from Bayes' Theorem. Make sure to give your formal definition of consciousness.
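
For reference, here's Bayes' theorem in its standard form; note that no term for consciousness appears anywhere in it:

```latex
% Bayes' theorem: H = hypothesis, E = evidence.
% Posterior = likelihood * prior / evidence.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```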
October 9, 2025 at 4:28 PM
> "This is mathematically impossible and therefor logical disproven."

What is "this" that is "mathematically impossible"?

There is no proof that LLMs cannot be conscious because there is no mathematical definition of consciousness.
October 9, 2025 at 4:14 PM
We don't have criteria for consciousness, so now I'm certain you don't know what you're talking about.

But please provide your argument formally; I would be fascinated. You might even get a Nobel Prize.
October 9, 2025 at 4:08 PM
You have no logical argument so you just say "go read". You're both condescending and wrong.

Of course there are differences. But you have failed to explain why you think these are RELEVANT differences.

Why does it matter that we can calculate LLMs? We can simulate neurons too.
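
For example, a textbook leaky integrate-and-fire neuron is a few lines of code. The parameter values below are illustrative defaults, not fit to any real cell:

```python
# Textbook leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, integrates input current, and fires when it crosses a threshold.
# Parameter values are illustrative defaults, not fit to any real cell.

def simulate_lif(currents, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Return the time steps at which the neuron spikes, given a list of
    input currents (one per time step of length dt)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(currents):
        v += (-(v - v_rest) + r * i_in) * (dt / tau)  # leak + driven input
        if v >= v_thresh:
            spikes.append(step)  # threshold crossed: spike...
            v = v_reset          # ...then reset
    return spikes

print(simulate_lif([2.0] * 500))  # constant drive yields regular spiking
```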
October 9, 2025 at 4:01 PM