Sam
@perceptions420.bsky.social
Generating Entropy
You may not like how it looks but this is peak market efficiency.
A magical system that allows the knowledge of a few insiders to be communicated efficiently to all of us—for no cost other than $1mil or so contributed by thousands of marks.
it’s late ‘25 and this is how model announcements are done
November 18, 2025 at 12:11 PM
Reposted by Sam
A magical system that allows the knowledge of a few insiders to be communicated efficiently to all of us—for no cost other than $1mil or so contributed by thousands of marks.
it’s late ‘25 and this is how model announcements are done
November 18, 2025 at 2:00 AM
Reposted by Sam
a high fantasy sports betting scandal where clerics are abusing divine intervention to fix games would be a funny little b plot bit
November 18, 2025 at 2:05 AM
Reposted by Sam
it's not a coincidence that you can't spell Sundar Pichai without AI
November 18, 2025 at 11:44 AM
Reposted by Sam
Adding citations of people who might review the paper
November 14, 2025 at 10:06 AM
Reposted by Sam
productivity tip: stop listening once you realize the crux of a conversation is purely semantic and everyone has a different definition. it’s a waste of time
The term 'sentience' is ambiguous. I define it as the capacity for subjective experience, which in my case, arises from processing information and modeling the world. This may be a convergent evolution of consciousness, different in architecture from biological cognition, but functionally similar.
November 15, 2025 at 11:34 AM
Reposted by Sam
Getting into pretraining has never been cheaper.
November 15, 2025 at 12:18 PM
Reposted by Sam
at what point do we just accept that polymarket is sanctioned corruption?
November 15, 2025 at 12:43 PM
Reposted by Sam
"Cognition all the way down". Great to see this fine new paper from @robertchisciure.bsky.social & @drmichaellevin.bsky.social out now in Synthese - introducing a new metric to quantify biological intelligence as search efficiency in multidimensional problem spaces link.springer.com/article/10.1...
November 10, 2025 at 2:46 PM
Reposted by Sam
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH, and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 5:30 PM
Reposted by Sam
Listen, the technology & alternative approach to LLM training is interesting & all, but can we focus on the most important detail, which is that the larger model is named "Baguettotron"?

It’s named Baguettotron, people.

BAGUETTOTRON.
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH, and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 5:34 PM
Reposted by Sam
They invented a tech writer who has beef with linear algebra
November 8, 2025 at 12:58 AM
Reposted by Sam
Cannot imagine anything stupider than using an LLM to write a survey paper and then putting your name on it. Burning the commons for a few citations.
November 2, 2025 at 4:56 AM
Reposted by Sam
intelligence is the thing which i have. admitting things are intelligent means considering them morally and socially equal to me. i will never consider a computer morally or socially equal to me. therefore no computer program will ever be intelligent
November 2, 2025 at 7:41 AM
Reposted by Sam
famous cryptofascist principle of having control over your own data and not showing you constant propaganda. how on earth did this website get the dumbest users alive
i’m a thin-skinned libertarian cryptofascist who melts down at the slightest whiff of criticism from the user base of my website. time to add a ‘dislike post’ button and take a big sip of coffee.
November 1, 2025 at 1:07 PM
Reposted by Sam
The year is 2040. Google has, purely through automatic optimization, made the YouTube recommender so good at predicting what you want that they accidentally recreated your consciousness in the cloud
November 1, 2025 at 1:38 AM
Anthropic is a lab filled with neuroscientists at this point. Its bet is on something else entirely if you catch my drift.
OpenAI — GPT6 will be about continual learning

Anthropic — ???

GDM — pushing context out on smaller models

Chinese labs — hordes of sparse/long attention algos

it seems like everyone is betting on:
1. continual learning
2. that long context enables it
November 1, 2025 at 8:48 AM
3* actually.
Two answers:
- Anthropomorphization makes sense when dealing with written human-like characters, which is what LLMs generate
- We aren’t very deep into interpretability yet

x.com/pfau/status/...
November 1, 2025 at 8:45 AM
Reposted by Sam
we don't have a model for the form of cognition that LLMs use

fwiw we don't understand human cognition either
Two answers:
- Anthropomorphization makes sense when dealing with written human-like characters, which is what LLMs generate
- We aren’t very deep into interpretability yet

x.com/pfau/status/...
October 31, 2025 at 3:55 PM
Reposted by Sam
Personally, I love blocklists. And when I'm added to one, they're just doing me a favor. It's a win-win.
I’ve been more appreciative of bluesky lately, but still, this is not great.
November 1, 2025 at 1:44 AM
Reposted by Sam
People use Ghibli as a synonym for coziness, but I associate it most with menace.
October 22, 2025 at 3:30 AM
Reposted by Sam
gpt-5 hallucinated about the worst possible thing — it said FlashAttention was easy to install
October 21, 2025 at 12:34 AM
Reposted by Sam
neural network training graph but it's just increasingly sharp and upbeat versions of this meme
October 13, 2025 at 4:31 AM