Previous: ~14 years working on Windows, with a stint as CTO of a small startup in the middle. I also made Tweetium (may it rest in peace).
https://brandonpaddock.substack.com
en.wikipedia.org/wiki/Automat...
Reasoning models exhibit behaviors well-described as reasoning. Which kinds, how effectively and reliably, etc., are active areas of research.
en.wikipedia.org/wiki/Reasoni...
Formal usages vary somewhat by field - e.g., cognitive science, philosophy, computer science, etc.
It’s a complicated subject and what people mean by “reasoning” can vary, and that’s okay. It doesn’t mean that any of these uses are misuses. It just means that people should be aware of the nuances and avoid blanket statements based on their own narrow interpretation or perspective.
That the moon orbits the earth is a fact. That the moon orbits the earth is learned and memorized by LLMs during training. Therefore, some facts are memorized by the model during training.
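For what it's worth, that argument is just existential introduction. Here's a one-line Lean 4 sketch of it, with `Fact` and `Memorized` as placeholder predicates invented purely for the illustration:

```lean
-- Placeholder predicates, introduced only for this sketch:
-- `Fact p`      : proposition p is a fact
-- `Memorized p` : proposition p was memorized during training
example (P : Prop) (Fact Memorized : Prop → Prop)
    (h1 : Fact P) (h2 : Memorized P) : ∃ p, Fact p ∧ Memorized p :=
  ⟨P, h1, h2⟩
```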
I have no idea what you’re trying to accomplish here with your bizarre arguments.
As I said, the model doesn’t know which things it has memorized are facts. That doesn’t change the fact that it has memorized them.
You, however, don’t seem to understand what replay attacks are. So you might want to take your own advice.
Giving you an auth token would not enable a replay attack.
Many facts get encoded into the model weights, and not as verbatim statements from training data.
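To make that concrete, here's a minimal probing sketch, assuming the Hugging Face transformers and PyTorch libraries and using GPT-2 purely as a small stand-in model. The completion comes out of the learned weights; nothing is stored or looked up as a verbatim training sentence.

```python
# Sketch: probing a fact encoded in model weights (assumes `transformers` + `torch`,
# GPT-2 used only as a convenient stand-in).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The moon orbits the"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token

top = torch.topk(logits, 5).indices.tolist()
print([tok.decode(i) for i in top])          # top candidates should include a token like " earth"
# No sentence retrieval happens here; the association lives in the weights.
```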
As I said, they do not contain a database of facts. Language models model language, not knowledge (and though today’s models are no longer strictly language models, that distinction doesn’t really matter here).
But they absolutely do memorize facts during training.
You clearly have some issues; I hope you get help with that.
They also memorize some facts, though to a large extent that’s more of a bug than a feature.
The whole point of trained self-attention is to let the model learn and generalize patterns over sequences during training, including reasoning patterns, and then to apply them at inference time.
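As a rough illustration of "patterns over sequences," here is a minimal single-head scaled dot-product self-attention sketch in NumPy. The shapes and weights are made up for the example; this is the generic mechanism, not any particular model's implementation.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Minimal single-head scaled dot-product self-attention.

    x            : (seq_len, d_model) input token representations
    w_q, w_k, w_v: (d_model, d_head) learned projection matrices
    """
    q = x @ w_q                                      # queries
    k = x @ w_k                                      # keys
    v = x @ w_v                                      # values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # each position mixes info from all positions

# Toy usage: 4 tokens, model width 8, head width 4, random "learned" weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # (4, 4)
```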