PUMO
@emojipan.bsky.social
"Light can [...] be able to "work the system""
Reposted by PUMO
If you have a photographic memory and walk out of a movie theatre without a minor lobotomy forced on you, are you engaged in "intellectual property" piracy? Are you also violating people's "right to privacy" when you walk down a city street?

They're gonna say yes.
December 9, 2025 at 8:19 PM
Reposted by PUMO
The majority of human discourse seems to break down into “it would be bad if this were true so it’s not true” or “it would be good if this were true so it’s true” with a side of “you caused this bad thing to happen by telling me about it”.
October 14, 2025 at 3:20 PM
Because it's not about my beliefs, it's about yours. You could say "I'm a vitalist" or "Because X" and I would say "Understandable" and leave.

Technically you did say "because of all human knowledge" but that's way too ambiguous and purely rhetorical.
August 4, 2025 at 11:52 PM
I'm not really interested in convincing you otherwise, I just wanted to know the Why.
August 4, 2025 at 11:30 PM
I just wanted to know your philosophical reason for believing Artificial General Intelligence is impossible, because you stated it with a lot of certainty and compared the reverse belief to a religion, which sounded like a very strong skepticism backed by a highly systematic justification.
August 4, 2025 at 11:30 PM
Physics is the closest thing we have to certainty; you are right that it's not the same as certainty. But that means anything less than physics is even weaker ground for claiming something is impossible.
August 4, 2025 at 11:11 PM
If your issue is that LLMs are burning the environment, that really doesn't need some claim about the impossibility of AGI; it's just a different argument.
August 4, 2025 at 11:08 PM
You seem to see urgency as a reason why this doesn't matter, but on the other side you can see that people worried about a misaligned superintelligence, capable of killing us all and coming very very soon, are just as urgent, which makes sense given their beliefs.

So, what is true does matter.
August 4, 2025 at 11:08 PM
Why? We need to prove what we *definitely* can't do, because not being able to currently do it isn't by itself convincing, since our capabilities change over time.
August 4, 2025 at 11:01 PM
Systems with growth, reproduction, and metabolism?

Information subject to natural selection?

Sustained local reversal of entropy?
August 4, 2025 at 10:59 PM
Just like saying we can't move faster than light, no matter how good our technology gets, is a positive claim, one that is extremely strongly backed by physics.
August 4, 2025 at 10:51 PM
About what living beings are and what technology is plausible? You are right there is a philosophy there: Physicalism.

If you say we can *never* create machines that are intelligent or sentient, that has strong implications about the universe; it's a *positive claim*.
August 4, 2025 at 10:51 PM
What should be the default assumption then?
August 4, 2025 at 10:44 PM
Because your claim is too strong!

Living beings are mechanistic (because everything is), so why couldn't what they do be recreated artificially?

Creating intelligent machines is plausible in principle because for all we know we are intelligent machines made of physical stuff.
August 4, 2025 at 10:39 PM
I mean the consequence of AI potentially being sentient, alive, or highly intelligent. If it's categorically impossible, there should be some strong reason for that, but if there isn't, the claim is at most a confident guess.

This is a separate issue from any harms AI as tools can do presently.
August 4, 2025 at 2:36 AM
I mainly ask because it's a strong claim, and somewhat counterintuitive (assuming physicalism), but if you have a strong reason I'm curious (though alive, sentient, and intelligent are in principle separable).

It's also something whose truth value is extremely consequential for harm reduction.
August 4, 2025 at 1:29 AM
Was lurking the discussion.
August 4, 2025 at 1:00 AM
Based on what?
August 3, 2025 at 11:01 PM