ralphie hythloday, jr
@offutopic.bsky.social
He/him. Trying to write sometimes. Go lakers.

If everyone read Marx, Beauvoir, and Pettit, we’d be in a better world.

If you are a leftist you believe in systemic causes and systemic solutions. People are ignorant? Educate them. Voters can only fail when they have been failed—by a malformed schooling system, by a network of incentives that rewards proud ignorance over intellectual humility, and by a media system that has fallen into moral bankruptcy. People are taught to “get ahead in life,” to chase careers and family life with no regard for good citizenship. It’s a testament to the efforts of organizers and movement leaders (on the right as well as the left) that there is anything left of public life.

It’s exhausting how little people seem to know or care about government and the public sphere. And it is the responsibility of anyone who cares about democracy to fight tooth and nail against those systems of ignorance and apathy, those systems that balloon the private sphere and wither the public.
November 18, 2025 at 8:37 PM
Like I said, I’m skeptical that we should attribute any sentience to current or near-future AI, but it’s enough of a black box that no one can speak with certainty. Which is itself part of why it doesn’t take a luddite or religious zealot to say that we should stop trying to put a mind into silicon
November 18, 2025 at 4:47 PM
That’s undoubtedly right, and really troubling! I think the only way to know is to look under the hood, and I’m not sure if that’s enough either. On some level this is not a new problem (which animals, if any, are conscious? plants? religions have been taking swings at this question forever)
November 18, 2025 at 4:47 PM
I think it’s highly likely that, were we to produce a conscious machine, its patterns of thought would run away and cascade into something unrecognizable. Which is all the more reason not to risk it—non-conscious statistical models look alive, but real consciousness might not look alive at all
November 18, 2025 at 1:10 PM
The jump from “this can produce text outputs that resemble human conversation” to “this has an internal state self-consciously reasoning through this conversation” is a logical leap that I don’t think is justifiable. It’s the Mechanical Turk problem.
November 18, 2025 at 1:10 PM
This is mixing up interiority with exterior outputs, though. There’s no reason to believe that cognition occurs within a purely statistical model. I believe we can eventually produce a self-aware interiority artificially, but I highly doubt that a language model alone can reach that.
November 18, 2025 at 1:10 PM
But when the latter catches up…
November 18, 2025 at 3:35 AM
I’m skeptical that this has already happened or will happen soon—I don’t think an LLM alone can achieve consciousness—but I entirely agree that we won’t know when it does. Our methods of mimicking consciousness are going to progress much faster than the development of consciousness itself.
November 18, 2025 at 3:35 AM
Oh you’re an all-female society led by Mothers/Matriarchs who use vaguely defined psychic powers for seduction (among other things) to ensure access to mates with favorable genetic traits?

But wokely?
November 16, 2025 at 7:59 PM