Just proclaiming that "form is not meaning!!" doesn't make it true.
If you can't admit that text, images, tool calls, RL with verified rewards, and RLHF are "interacting with the world," then you are just redefining words to suit your whims.
I'm done with you... we'll see in the next few years who's right. 🤷 Good luck.
Your initial goalpost: LLMs can't be conscious because they are not in an open system.
I pointed out they do get context from interacting with the world.
New goalpost: fine, it's an open system, but it's not the PHYSICAL world.
You are not a serious person. 🙄
You know an LLM can simulate a video game, right? So an LLM is MORE complex than a video game AI.
The model gets context from the user and from verified rewards and from simulation environments. That sounds a lot like an organism interacting with the physical world.
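To make "verified rewards" concrete, here's a toy sketch of the loop I'm describing. Everything in it is a stand-in: toy_policy is a stub, not a real LLM, and the actual trainer update is only indicated by a comment.

```python
import random

# Toy "environment": arithmetic problems with a programmatic verifier.
# Illustrative only; a real setup would use an actual LLM and a
# policy-gradient-style update instead of this stub.

def sample_problem():
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a}+{b}", a + b

def toy_policy(prompt):
    # Stand-in for model generation; guesses an answer, sometimes wrongly.
    a, b = map(int, prompt.split("+"))
    return a + b if random.random() < 0.8 else random.randint(2, 18)

for step in range(5):
    prompt, truth = sample_problem()          # context from the environment
    answer = toy_policy(prompt)               # the model acts
    reward = 1.0 if answer == truth else 0.0  # verified reward, no human in the loop
    print(f"step {step}: {prompt} -> {answer} (reward {reward})")
    # A real trainer would now update the policy using `reward`.
```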
And ALSO at inference it's directly interacting with the world via the user and via tools.
So how is that not an "open system" again??
And ALSO there is interaction with the world at inference time. Models call tools in a loop, and those tool results give them context about the world.
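Here's roughly what that inference-time tool loop looks like. This is a minimal sketch under obvious simplifications: stub_model is a fake model, get_time is a made-up tool, and a real agent would call an actual LLM API.

```python
import datetime

# Minimal sketch of an inference-time tool loop.
TOOLS = {
    "get_time": lambda: datetime.datetime.now().isoformat(),
}

def stub_model(context):
    # Pretend the model asks for a tool once, then answers using the result.
    if not any(msg.startswith("tool:") for msg in context):
        return {"type": "tool_call", "name": "get_time"}
    return {"type": "answer", "text": f"Current time per tool: {context[-1]}"}

context = ["user: what time is it?"]
while True:
    step = stub_model(context)
    if step["type"] == "tool_call":
        result = TOOLS[step["name"]]()     # fresh information from outside the model
        context.append(f"tool: {result}")  # fed back in as context
    else:
        print(step["text"])
        break
```

Every pass through the loop pulls information from outside the model back into its context, which is the whole point.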
Well, guess what? LLMs interact with the environment too. Differently from humans, but interaction nonetheless. Both during training and inference.
I mean, that's a pretty useless definition. Sure, LLMs run on silicon, not biology. That proves nothing about what they are capable of. Can an LLM think and solve problems like us? Obviously it can.
Consciousness is an emergent property of a complex system made up of simple parts.
If you think LLMs are inherently more limited than brains you need to argue WHY.
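By "emergent property of simple parts" I mean things like Conway's Game of Life: two trivial local rules per cell, arbitrarily rich global behavior (gliders, oscillators, even universal computation). It's an analogy for emergence, not a proof about consciousness.

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid, cells stored as a set of (x, y).
def step(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation with exactly 3 neighbors,
    # or 2 neighbors if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": a 5-cell pattern that travels across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # same shape, shifted by one cell diagonally
```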
I would LOVE to see your argument for why "LLMs cannot be conscious" follows from Bayes' Theorem. Make sure to give your formal definition of consciousness.
What is "this" that is "mathematically impossible"?
There is no proof that LLMs cannot be conscious because there is no mathematical definition of consciousness.
What is "this" that is "mathematically impossible"?
There is no proof that LLMs cannot be conscious because there is no mathematical definition of consciousness.
But please provide your argument formally; I would be fascinated. You might even get a Nobel Prize.
Of course there are differences. But you have failed to explain why you think these are RELEVANT differences.
Why does it matter that we can calculate what an LLM does? We can simulate neurons too.
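And simulating a neuron really is just a few lines of arithmetic. Here's a minimal leaky integrate-and-fire sketch; the parameter values are generic textbook-style numbers chosen for illustration, not data about any specific cell.

```python
# Leaky integrate-and-fire neuron, Euler-integrated.
dt, tau = 0.1, 10.0                     # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # membrane potentials (mV)

v = v_rest
spikes = []
for t_step in range(1000):                        # 100 ms of simulated time
    t = t_step * dt
    i_input = 20.0 if 10.0 <= t <= 80.0 else 0.0  # injected current (arbitrary units)
    v += dt / tau * (-(v - v_rest) + i_input)     # leak toward rest, driven by input
    if v >= v_thresh:                             # threshold crossed: spike and reset
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes at (ms): {[round(s, 1) for s in spikes]}")
```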