dalkuin.bsky.social
Also, I'm sure you have the chill for Stardew, but not on stream
January 9, 2026 at 8:32 PM
Uhh, as much as I love Factorio, that's not safe for baby.
January 9, 2026 at 8:25 PM
To be fair, it wouldn't be a famous puzzle if the average person could see that either.
January 9, 2026 at 7:25 PM
But it failed to realise the paradox arose because it didn't understand the difference between the chance of it happening vs the chance that it happened.
January 9, 2026 at 7:07 PM
Claude certainly understood the paradox
January 9, 2026 at 7:05 PM
For the record, the Sleeping Beauty Problem proves there's a difference between the probability of an event happening and the probability of an event having happened.
January 9, 2026 at 6:57 PM
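The happening-vs-happened distinction above can be checked with a quick Monte Carlo sketch (the code and names here are my own illustration, not from the thread): the coin is fair before the experiment, but counted from inside an awakening, heads accounts for only a third of cases.

```python
import random

def simulate(trials=100_000, seed=0):
    """Toy Sleeping Beauty setup: heads -> one awakening (Monday),
    tails -> two awakenings (Monday and Tuesday)."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5  # fair coin: P(heads) = 1/2 up front
        if heads:
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += 2
    # Fraction of awakenings preceded by heads: tends to 1/3,
    # the "thirder" reading of "the chance that it happened".
    return heads_awakenings / total_awakenings
```

So the same fair coin gives 1/2 for "will it land heads" and roughly 1/3 for "did it land heads, given that I've just been woken".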
I'd be fascinated what Claude came up with.
January 9, 2026 at 6:53 PM
Pretty sure I can burn down Sleeping Beauty easy, not sure I can do it in a haiku.
January 9, 2026 at 6:38 PM
Mmm, this was fun, next time let's pull apart the Sleeping Beauty Problem.
January 9, 2026 at 6:33 PM
At what point in this discussion do we start talking about Dune and the Gom Jabbar to detect humans?
January 9, 2026 at 5:40 PM
It's a good answer. The point of the hypothetical, though, is: does it count as them answering if they don't comprehend the answer they gave?
It was a better hypothetical pre-LLMs, answers from LLMs very much complicate the question.
January 9, 2026 at 5:34 PM
I blame Kotaro Uchikoshi, his games always had fun ways of describing these kinds of thought experiments.
January 9, 2026 at 5:06 PM
The short answer is: if you gave a person who didn't speak Chinese questions in Chinese, a room full of books that told them what the response should be, and a way to find the right book, is that person actually answering the question?
January 9, 2026 at 5:02 PM
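The room described above can be caricatured in a few lines of Python (a toy sketch of my own; the phrases are made up, the dict stands in for the books, and dict lookup stands in for "a way to find the right book"):

```python
# Toy Chinese Room: the operator needs no understanding of the
# questions or answers, only the ability to match symbols.
RULE_BOOKS = {
    "你好吗?": "我很好, 谢谢.",   # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个房间.",   # "Who are you?" -> "I am a room."
}

def room_reply(question: str) -> str:
    # Pure symbol manipulation: find the matching entry, emit the stored output.
    return RULE_BOOKS.get(question, "对不起, 我不明白.")  # "Sorry, I don't understand."
```

The hypothetical's punch is that `room_reply` produces fluent answers while containing nothing you'd call comprehension.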
For the record, it's less that I disagree with my old opinion, more that I think we're unaware of how often our day-to-day thinking is more like the Chinese Room than we'd like.
January 9, 2026 at 4:49 PM
Personally, I will admit to being someone who originally referenced the Chinese Room back when LLMs first became popular.
But that some people have swung this hard against it is despicable.
January 9, 2026 at 4:44 PM
It's been a revelation the last 5 years how resilient western economies can be under the hood.
On the flip, I do recommend Americans remember what happened when Truss was in charge over here.
Once the trust's gone, it can go bad fast. And it's the trust more than the policy.
January 9, 2026 at 3:55 PM
A word to all Labour MPs to not legitimise Reform at all wherever possible seems to be in order.
January 9, 2026 at 3:28 PM
Add to that: if there's an AI bubble pop, there's no guarantee they'll learn the right lessons.
January 5, 2026 at 2:40 AM
Training LLMs on social media was a mistake. Training them on 4chan was a double mistake.
January 5, 2026 at 1:58 AM
Ironically, I take the opposite position: every dev / engineer should play it at least once, to internalise the need to build scalable designs.
January 5, 2026 at 1:27 AM
This does highlight the core problem. It's not magic, but I think we weren't as aware as we should be of how vulnerable most people's psyches are.
LLM psychosis is only billionaire psychosis made available to the masses.
January 5, 2026 at 12:43 AM
I will co-sign that: it's not as good as Obra Dinn, but it will scratch that itch.
For Seance at Blake Manor, I would recommend a notepad; the game's very good at storing info for you, but there are one or two places where that info can get lost in the midst of everything else it's storing.
January 4, 2026 at 11:10 PM
Years from now?
I think they'll be too heavy-handed with the tools and most people will clock it, leading the average user to avoid the public internet as spam.
The Threads comment was because a lot of people never went there because they thought it was full of corporate advertising.
December 28, 2025 at 11:31 PM
cf. Threads
December 28, 2025 at 10:50 PM