Brandon Paddock
brandonlive.com
@brandonlive.com
Architect for AI in Word at Microsoft.

Previous: ~14 years working on Windows, with a stint as CTO of a small startup in the middle. I also made Tweetium (may it rest in peace).

https://brandonpaddock.substack.com
Within computer science, formal reasoning has been a subject of study for quite a while.
en.wikipedia.org/wiki/Automat...

Reasoning models exhibit behaviors that are well described as reasoning. Which kinds, and how effectively and reliably, are active areas of research.

en.wikipedia.org/wiki/Reasoni...
December 8, 2025 at 9:42 PM
Reasoning has long been attributed to human-made things in colloquial usage. A programmer may say that the compiler is reasoning over their code, and that is a fair usage of the word.

Formal usages vary somewhat by field - e.g., cognitive science, philosophy, computer science, etc.
December 8, 2025 at 9:38 PM
No it isn’t.

It’s a complicated subject and what people mean by “reasoning” can vary, and that’s okay. It doesn’t mean that any of these uses are misuses. It just means that people should be aware of the nuances and avoid blanket statements based on their own narrow interpretation or perspective.
December 8, 2025 at 9:31 PM
AI models aren’t “computer programs” in a traditional sense, and aren’t programmed by an individual. Indeed, they aren’t really programmed at all. They’re created and refined by learning algorithms.
December 8, 2025 at 8:38 PM
I need the @realadamrose.bsky.social version of this.
December 7, 2025 at 9:19 PM
IMO it would be very surprising if human learning didn’t incorporate statistical modeling and probabilistic pattern matching. It makes intuitive sense, and there is empirical evidence pointing in that direction today.
December 6, 2025 at 8:33 PM
Being wrong, especially in ways that his audience doesn’t care about, just means there’s more reaction from the other side, which only adds heat to the fire and drives more engagement and tribalism.
December 6, 2025 at 6:16 PM
This is learned through gradient descent + backpropagation, which, over the course of training, encodes into the weights the concepts of the moon, the earth, orbits, and the relationship between two objects where one orbits the other, plus the connections between these.
December 6, 2025 at 6:49 AM
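For anyone wondering what "gradient descent + backpropagation" looks like mechanically, here is a minimal, hypothetical PyTorch sketch: a toy two-token vocabulary and a tiny model, nothing like a production LLM, just to show weights being nudged until a relation is encoded in them rather than stored as a sentence.

```python
import torch
import torch.nn as nn

# Toy "fact": token 0 = "moon", token 1 = "earth"; the target says moon orbits earth.
# Purely illustrative; real LLMs learn such relations from vast text corpora.
vocab_size, dim = 2, 8
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.tensor([0])    # "moon ..."
targets = torch.tensor([1])   # "... orbits the earth"

for step in range(100):
    logits = model(inputs)            # forward pass
    loss = loss_fn(logits, targets)   # how wrong the prediction is
    optimizer.zero_grad()
    loss.backward()                   # backpropagation: compute gradients
    optimizer.step()                  # gradient descent: nudge the weights

# The relation now lives in the weights, not as a stored sentence.
print(model(inputs).argmax(dim=-1))   # should now predict index 1 ("earth")
```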
You are clearly confused.

That the moon orbits the earth is a fact. That the moon orbits the earth is learned and memorized by LLMs during training. Therefore, some facts are memorized by the model during training.

I have no idea what you’re trying to accomplish here with your bizarre arguments.
December 6, 2025 at 6:45 AM
No, I’m not. I am an expert who actually knows what I’m talking about, and you are misusing terms you don’t understand to attempt a nonsensical argument.

As I said, the model doesn’t know which things it has memorized are facts. That doesn’t change the fact that it has memorized them.
December 6, 2025 at 6:39 AM
I don’t need to, as I’m an expert on web security with a lot of experience and knowledge of this subject.

You, however, don’t seem to understand what replay attacks are. So you might want to take your own advice.

Giving you an auth token would not enable a replay attack.
December 6, 2025 at 6:27 AM
I really don’t understand why you’re so set on mansplaining my field of expertise to me. And it’s clear, especially from the last part of your reply, that you don’t know much about how these things work.

Many facts get encoded into the model weights, and not as verbatim statements from training data.
December 6, 2025 at 6:25 AM
You’re half right.

As I said, they do not contain a database of facts. Language models model language, not knowledge (and though today’s models are no longer strictly language models, that distinction doesn’t really matter here).

But they absolutely do memorize facts during training.
December 6, 2025 at 6:01 AM
The answer is they are not “indifferent to the content”. Otherwise they obviously wouldn’t work at all.
December 6, 2025 at 3:05 AM
I’m confused what you are trying to accomplish here. I’m happy to help people understand how these models work and what they can and can’t do, but it’s not very productive when someone who doesn’t know the subject matter makes misleading statements as you have here.
December 6, 2025 at 3:01 AM
Yikes. I replied in good faith to help with your concern. I was civil and polite, and what I said is correct.

You clearly have some issues, I hope you get the help with that.
December 6, 2025 at 2:41 AM
lol what? That’s not how any of this works.
December 6, 2025 at 2:38 AM
That makes no sense, and no it is not “disingenuous” at all. It’s a simple fact.
December 6, 2025 at 2:36 AM
Gaslighting gets you instablocked.
December 6, 2025 at 2:04 AM
That… doesn’t make any sense.
December 6, 2025 at 2:04 AM
They memorize lots of facts, and lots of non-facts. Their ability to distinguish them is limited, but improving (though this is a super complicated challenge).
December 6, 2025 at 2:03 AM
These models do not themselves contain a “database of facts”, but no one here suggested they do. They can, however, interact with databases of facts (or any kind of data really).

They also memorize some facts, but to a large extent that’s more of a bug than a feature.
December 6, 2025 at 12:23 AM
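As an illustration of "interacting with databases of facts," here is a hypothetical retrieval sketch. The FACTS_DB dictionary, the keyword lookup, and the call_model stand-in are all made up for this example; the shape is just the common retrieval-augmented pattern of placing retrieved records into the prompt so the model works from provided data rather than memorization.

```python
# Hypothetical sketch: retrieve relevant records, then put them in the prompt.
FACTS_DB = {
    "moon": "The Moon orbits the Earth at an average distance of about 384,400 km.",
}

def answer_with_retrieval(question: str, call_model) -> str:
    # Naive keyword retrieval; real systems use search indexes or embeddings.
    retrieved = [fact for key, fact in FACTS_DB.items() if key in question.lower()]
    prompt = "Use only these facts:\n" + "\n".join(retrieved) + f"\n\nQuestion: {question}"
    return call_model(prompt)  # call_model is a stand-in for whatever LLM API you use

# Example: answer_with_retrieval("What does the moon orbit?", my_llm_client)
```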
Not sure what you’re trying to say here. They very much do what I described.

The whole point of trained self-attention is to let the model learn and generalize patterns over sequences during training, including reasoning patterns, and then to apply them at inference time.
December 6, 2025 at 12:21 AM
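A rough sketch of the self-attention computation referenced above, assuming single-head scaled dot-product attention with no masking or multi-head projections. The point is only that w_q, w_k, w_v are learned weights: training adjusts them so that patterns over sequences get picked up, and inference then applies those same weights to new inputs.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, dim)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)  # how much each token attends to each other token
    weights = F.softmax(scores, dim=-1)      # attention weights per position
    return weights @ v                       # values mixed according to attention

# Toy usage with random weights; in a trained model these weights encode learned patterns.
dim = 16
x = torch.randn(5, dim)                      # a 5-token sequence
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)       # shape (5, 16)
```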