Pwlot
@pwlot.bsky.social
Alien from the future on a mission to enhance, extend & explore existence.
https://www.pwlot.com
I'd focus on causal rather than biological. But that was always the point of the perhaps poorly named "biological naturalism" (Searle).
October 20, 2025 at 6:33 PM
So people will argue over these a lot. The task before us is to disentangle, explore, build, and taxonomize these phenomena and their features.

We piggyback on our kinship & make extremely superficial judgments. Due to that kinship we've gotten away with that thus far. That'll end.
October 19, 2025 at 2:30 PM
Again: mind, will, consciousness and intelligence have in common that they all come in degrees. This is why we endlessly debate their definitions.
They are multi-component phenomena and may exist partially in various manners. They can be realized in multiple ways.
October 19, 2025 at 2:30 PM
The biggest problem ahead is that so many practically fixed features, almost constants, i.e. features with little to no variation (through the lens of what's ultimately possible), like our IQ, memory, and such, will become variables with much bigger ranges than in humans.
October 19, 2025 at 2:30 PM
A non-sentient intelligence that is fed the output of a sentient intelligence will have some weird and uncanny aspects to it, like we already see with LLMs.
October 19, 2025 at 2:30 PM
Without having consciousness it's not possible to infer it, and likely hard to impossible to appreciate sentience, and therefore to care about it. Sentience is likely a prerequisite for care. There is of course much more to this, but this is it in a nutshell.
October 19, 2025 at 2:29 PM
The bare-bones definition of consciousness: there is something it is like to have it. Intelligence and consciousness are orthogonal; they can exist separately from each other, but without consciousness, features like self-awareness loops will potentially be weird simulacra.
October 19, 2025 at 2:29 PM
I say it's weird because it results in this: large language models like ChatGPT are trained on human output, so they talk like they have cognitive features they don't actually have.

That doesn't mean that some features can't emerge in different forms.
October 19, 2025 at 2:29 PM
The weird part is that they're trained on output of us, intelligent and conscious beings. And consciousness is intelligence's wake-up call and obviously a feature of the universe, in that we're not in a p-zombie universe. It feels like something from the inside.
October 19, 2025 at 2:29 PM
So, in a way, it's *not* surprising that LLMs, as glorified auto-complete engines, display intelligence. They are intelligence. Prediction is intelligence. They're just limited to a domain. Add multi-modality, sensors, actuators, add a few loops, and....
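A minimal, purely illustrative sketch of that last sentence, assuming nothing about any real model or API: a stand-in predictive function wrapped in a sense-predict-act loop. Every name in it is hypothetical.

# Toy sketch: a predictor becomes agent-like once wrapped in a
# sense-predict-act loop. All names are hypothetical, not a real API.
import random

def predict_next_action(observation_history):
    """Stand-in predictive model: score candidate actions and pick the
    best. A real LLM would predict tokens instead of actions."""
    candidates = ["move_left", "move_right", "stay"]
    scores = {a: random.random() for a in candidates}  # placeholder scores
    return max(scores, key=scores.get)

def sense(state):
    """Stand-in sensor: turn raw environment state into an observation."""
    return {"position": state["position"]}

def act(state, action):
    """Stand-in actuator: apply the chosen action to the environment."""
    state["position"] += {"move_left": -1, "move_right": 1, "stay": 0}[action]

# The "few loops": close the cycle of sensing, predicting, and acting.
state = {"position": 0}
history = []
for step in range(5):
    observation = sense(state)
    history.append(observation)
    action = predict_next_action(history)
    act(state, action)
    print(f"step {step}: observed {observation}, chose {action}")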
October 19, 2025 at 2:28 PM
However, if you think our wondrous yet limited 20W-fueled, 86B-neuron brain with its ~1,400 cm³ volume is the epitome of intelligence... you've got another thing coming.
October 19, 2025 at 2:28 PM
With enough data and experience, the system should max out at an upper bound of tractability versus its environment. Ultimately, "full" generality doesn't exist a.k.a. isn't computable. It also isn't necessary.
October 19, 2025 at 2:28 PM
Oh, too bad. Cool stuff, though! Keep going. :)
September 28, 2025 at 6:45 PM
Exotic shortcuts like wormholes can poke holes in the above, but that remains to be seen. Existence is very probably necessarily lonely.
September 27, 2025 at 5:07 PM
There may be no other way. Complexity demands gradients, gradients demand weak coupling, which at scale becomes massive latency. A universe that makes observers is also a universe that hides most of itself from them.
September 27, 2025 at 5:07 PM