Wes
@wesmank.bsky.social
Reposted by Wes
Despair is something Francesca Albanese rejects. Her courage is as tough as titanium steel: where is that kind of toughness forged?
December 20, 2025 at 9:32 PM
They have to be taking the piss, surely. They can't honestly believe that they own a prompt?
December 21, 2025 at 5:34 AM
Reposted by Wes
It's a whole genre, and it's beautiful
December 21, 2025 at 12:26 AM
Reposted by Wes
The summary-shaped-but-not-a-summary aspect here is why we’re wading through endless misunderstanding.

The sequence output looks like a summary, reads like a summary, but it’s not a summary.

It’s more properly understood as a kind of NutraSweet: the form, the taste, but none of the substance.
December 21, 2025 at 5:13 AM
Reposted by Wes
Does RAG improve factuality and coherence by injecting relevant data prior to output generation?

It does.

But it’s still not a “summary” as such, because every output of every language model is only ever a probabilistically generated sequence.

It’s, at core, “summary-shaped,” but not a summary.
December 21, 2025 at 5:05 AM
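To make the “probabilistically generated sequence” claim concrete, here is a minimal, self-contained Python sketch; the vocabulary, scoring function, and names are invented for illustration, not any real model's API.

```python
import math
import random

# Toy illustration of the point above: a language model's output is
# produced by repeatedly sampling the next token from a probability
# distribution. It is a generated sequence, not an extraction from the
# source text. Vocabulary, scoring, and names here are all invented.

vocab = ["the", "report", "finds", "states", "concludes", "."]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_logits(context):
    # Crude stand-in for a real model's forward pass: one score per
    # vocabulary entry, loosely conditioned on the context so far.
    return [((hash((len(context), tok)) % 100) / 25.0) - 2.0 for tok in vocab]

def generate(prompt, max_tokens=5):
    out = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(next_token_logits(out))
        # Sampling is the key step: the same prompt can yield different
        # continuations, which is why the result is "summary-shaped"
        # rather than a deterministic summary of a document.
        out.append(random.choices(vocab, weights=probs, k=1)[0])
    return " ".join(out)

print(generate(["the", "report"]))
```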
Reposted by Wes
Now, with a “summary” of this type, one would pipe the document in via a RAG process where, in essence, the model is fed the tokenized document as context, the idea being that the model isn’t just drawing, or even primarily drawing, from its training corpus.
December 21, 2025 at 5:05 AM
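A minimal sketch of the RAG flow that post describes, under the assumption of a simple chunk-retrieve-prompt pipeline; the keyword-overlap retriever and all function names are illustrative stand-ins, not any particular library's API.

```python
# Toy version of the flow described above: chunk the document, retrieve
# the most relevant chunks, and inject them into the prompt ahead of
# generation, so the model conditions on the document itself rather
# than only on its training corpus.

def chunk(document, size=40):
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, query, k=2):
    # Rank chunks by word overlap with the query (a production system
    # would use embeddings and a vector store instead).
    q = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return ranked[:k]

def build_prompt(document, query):
    context = "\n\n".join(retrieve(chunk(document), query))
    # The retrieved text goes in front of the instruction, so every
    # next-token prediction is conditioned on the document's contents.
    return f"Context:\n{context}\n\nTask: {query}\nSummary:"

doc = ("The report finds that retrieval improved factual grounding. "
       "It also states that outputs remain generated sequences.")
print(build_prompt(doc, "Summarize the key findings"))
```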