David
@twothreemany.bsky.social
Tech worker. Bay Area native. Jew-ish. Recovering sectarian. ς male.
I don't know if this is a reasonable general rule -- getting tedium done seems like a worthwhile use case -- but it does seem like the absolute minimum for something made with AI to be considered art
December 28, 2025 at 7:47 PM
I think this explains the creepy anxiety or self-hatred people sometimes elicit from Gemini when it's failing at a task, for example
December 23, 2025 at 3:51 PM
People complain about anthropomorphizing, but it can actually be predictively useful. A model: the LLM is predicting, "conditional on having said X, per the words put in my mouth during training, what kind of persona am I simulating", and most of its prior for personas comes from human writing
December 23, 2025 at 3:50 PM
This would suck for a while but it's a problem that would solve itself eventually (the rich idiots would, on average, stop being rich). If AI keeps getting better and replacement actually works, it's the opposite - the problem gets harder and harder to fix as we (workers, humans) lose power
December 23, 2025 at 3:31 PM
I'm broadly much more worried about AI succeeding at things than failing at things!
December 22, 2025 at 11:51 PM
Which is my concern here, fundamentally: I think we need safety and redistributive policies for AI that we won't get, or that will be unhelpful, if they're made assuming we're mostly dealing with hype or a bubble that'll just pop on its own
December 22, 2025 at 11:50 PM
If you want to say that LLMs do something _like_ reasoning rather than doing true reasoning, fine. But I'd argue "copying your neighbor's paper", like "stochastic parrot", is a much _worse_ analogy than "reasoning" if the goal is to inform us about what the model can & can't get done
December 22, 2025 at 11:45 PM
If you want to bake a commitment to truth into the definition of reasoning, I agree you can probably exclude LLMs, since the notion of an LLM having a commitment is problematic. This doesn't seem like a useful definition though if it doesn't say anything about what tasks they can accomplish
December 22, 2025 at 11:40 PM
Mostly we don't know mechanistically how they reason, but addition is actually an exception, at least if we can generalize from the toy models (which researchers seem to think we can)
December 22, 2025 at 11:34 PM
One assumes this is how the big models work too, but people study toy models because they're cheaper and easier
December 22, 2025 at 11:33 PM
I don't want to overstate (I appreciate this conversation has been more light than heat so far). Brains & LLMs have pretty different failure patterns, you definitely shouldn't use an LLM where a SAT solver (or calculator) would fit, & _current_ LLMs are not going to successfully replace programmers
December 22, 2025 at 3:26 PM
This doesn't strike me as a terrible description of getting biological brains - which are also fallible statistical things prone to typos, biases, and other errors - to do logical reasoning. At some level of complexity we even often externalize the "prompt" onto paper to give to our future selves
December 22, 2025 at 3:23 PM
"Can't do arithmetic" is just not true. This isn't cherry-picked; I asked it three problems and it got them all. I'm sure there's some failure point, but the same is true of humans doing mental arithmetic
December 22, 2025 at 3:04 PM
I wasn't referring to Google, although I see the ambiguity
December 22, 2025 at 2:45 PM
It's pretty explicable why LLM training by default incentivizes them to make things up, and I don't think anyone denies that. I'm not sure why that would mean they're incapable of reasoning. Humans also lie and confabulate (for mostly different reasons)
December 22, 2025 at 1:55 AM
There are many humans who are worse at arithmetic than a state-of-the-art LLM. LLMs are many orders of magnitude less efficient than CPU arithmetic instructions, but they can do it, and we even know how: arxiv.org/abs/2406.03445
Pre-trained Large Language Models Use Fourier Features to Compute Addition
Pre-trained large language models (LLMs) exhibit impressive mathematical reasoning capabilities, yet how they compute basic arithmetic, such as addition, remains unclear. This paper shows that pre-tra...
arxiv.org
December 22, 2025 at 1:49 AM
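Since the post above leans on the linked paper's claim that we know how LLMs do addition, here's a loose toy sketch of the underlying mathematical idea: if integers are encoded as Fourier (cos/sin) features at a few periods, addition becomes a phase rotation, and the sum modulo each period can be read back from the angle. This is illustrative only; it is not the paper's actual circuit or code, and the period choices and helper names are invented for the demo.

```python
# Toy illustration only -- NOT the mechanism reported in the paper; the
# periods below are an arbitrary choice for this demo.
import numpy as np

PERIODS = [2, 5, 10, 100]  # hypothetical frequencies, chosen just for the demo

def features(n: int) -> np.ndarray:
    """Encode an integer as (cos, sin) components at each period."""
    return np.array([f(2 * np.pi * n / T) for T in PERIODS for f in (np.cos, np.sin)])

def add_in_feature_space(a: int, b: int) -> np.ndarray:
    """Combine two encodings by rotating phases (angle addition per period)."""
    fa, fb = features(a), features(b)
    out = np.empty_like(fa)
    for i in range(len(PERIODS)):
        ca, sa = fa[2 * i], fa[2 * i + 1]
        cb, sb = fb[2 * i], fb[2 * i + 1]
        out[2 * i] = ca * cb - sa * sb      # cos(x + y)
        out[2 * i + 1] = sa * cb + ca * sb  # sin(x + y)
    return out

def decode_mod(feat: np.ndarray, T: int) -> int:
    """Recover the sum modulo one period from its phase."""
    i = PERIODS.index(T)
    angle = np.arctan2(feat[2 * i + 1], feat[2 * i])
    return int(round(angle / (2 * np.pi) * T)) % T

summed = add_in_feature_space(37, 48)
print(decode_mod(summed, 10))   # 5  == (37 + 48) % 10
print(decode_mod(summed, 100))  # 85 == (37 + 48) % 100
```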
Maybe I'm an idiot (I am not going to provide credentials here) but like here's the guy who created Redis antirez.com/news/157
Reflections on AI at the end of 2025 - <antirez>
antirez.com
December 22, 2025 at 12:40 AM
I think it's evident that they can reason by any definition that doesn't simply bake in the conclusion that only biological beings can do so. I've had a reasonably successful programming career, and I've had LLMs find bugs I couldn't (or at least didn't) find independently
December 22, 2025 at 12:38 AM
What, exactly, is the scam? The claim that they're reasoning? Or just that they're one release from AGI?
December 22, 2025 at 12:31 AM
Like it's very defensible to hate LLMs and want them gone! (Even if my take is more complicated.) But it's self-delusion to think they're just going to go away when a bubble pops, and barely better to assume that any jobs safe now always will be
December 21, 2025 at 11:57 PM
They went from useful for nothing to very useful in specific contexts (programming, search, cheating on homework) in under 3 years. Maybe they'll stop here! But the arguments that they _have to_ (in an engineering not moral sense) that I've seen are all really bad
December 21, 2025 at 11:54 PM
Hesitation about baldfaced lying? She's done, doesn't have the right mentality for a comeback in Trump's America
December 15, 2025 at 3:15 AM
The worst thing about this is that it's a little bit tempting. Like damn there are days when I really really wish I could talk to my dad again... It's a delusion, and yet plausibly a contagious one
December 10, 2025 at 5:45 PM
It's like there's a belief that the job of mods is to determine who is a good person and who is a bad person and ban or allow accordingly
December 8, 2025 at 3:13 PM