Breck Emert
@breckemert.bsky.social
Qualified to give opinions
Academia but every time you publish non-preregistered results you have to watch Stranger Things season 5 again
December 14, 2025 at 9:39 AM
Dwarkesh: What's your opinion on negentropy?
Karpathy: so I was actually running some experiments maybe three years ago,
October 30, 2025 at 10:11 PM
Reposted by Breck Emert
It cannot be shamed.
It cannot be legally threatened.
It cannot be socially pressured.
It cannot be baited into fear or defiance.
It will not acknowledge liability.
And it will never stop doing what it was built to do, ever, until you are gone.
October 22, 2025 at 10:13 PM
"I don't want to overindex on this but—" Yeah buddy you overindexed. Now you look like you don't have a wide enough pool of prior information on this subject and we're laughing at you.
October 22, 2025 at 3:06 PM
You vs the candidate with 8+ YOE in ML willing to take $130-170k she told you not to worry about
October 20, 2025 at 9:36 PM
When TensorBoard first loads up, it selects every run, which makes some pretty artful graphs.
October 19, 2025 at 9:28 PM
In terms of surprisal, most numbers he could say are equally likely to get quote-tweeted, just from different audiences. The prediction market for AGI sucks because of this - any claim can make it into the news, which makes *your* news an incredibly autocorrelated source.

js be careful out there
yeah, what can i say, guess i'm just a genius
October 17, 2025 at 7:36 PM
Probability has always been tough but I'm still going to check out this Church language. Seems extremely intuitive.
This may just be the best CS paper I’ve read this year. Just read the abstract and first para of the intro! The rest of the intro is really wild too, but very very good:

dl.acm.org/doi/pdf/10.1...
October 15, 2025 at 5:16 AM
After each training batch we ask the model to be more likely to have produced that batch. The signals that survive *stochastic* gradient descent more often are the more generalizable ones. It is interesting to explore what generalizing means, but disagreeing with this in 2025 is uninteresting.
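Roughly what that step looks like, as a minimal sketch (the model and batch are made up; only the shape of the update matters):

```python
# "Make the model more likely to have produced this batch" =
# one SGD step on the negative log-likelihood of the batch.
import torch
import torch.nn as nn

model = nn.Linear(16, 10)                      # stand-in for any classifier
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 16)                        # a random "training batch"
y = torch.randint(0, 10, (32,))

logits = model(x)
nll = nn.functional.cross_entropy(logits, y)   # -log p(batch | params)
opt.zero_grad()
nll.backward()                                 # gradient of batch log-likelihood
opt.step()                                     # nudge params toward producing this batch
```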
October 14, 2025 at 4:58 PM
I think the most helpful thing I ever did for my ML understanding was just spamming pytorch/tensorflow dimension questions with ChatGPT. I'd have it give me a transformer operation in code and I had to give before/after dims. Even just like 20 of those every couple weeks for a while is wonderful.
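The kind of prompt I mean, sketched with made-up sizes (the shapes in the comments are the quiz answers):

```python
# Dimension drill: given the op, name the before/after shapes.
import torch
import torch.nn as nn

B, T, D, H = 8, 128, 512, 8                    # batch, seq len, model dim, heads

x = torch.randn(B, T, D)                       # before: (8, 128, 512)
attn = nn.MultiheadAttention(embed_dim=D, num_heads=H, batch_first=True)
out, weights = attn(x, x, x)                   # self-attention

print(out.shape)                               # after: (8, 128, 512)
print(weights.shape)                           # attention map: (8, 128, 128)
```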
October 12, 2025 at 8:52 PM
Grid search is always a bigger task than I think it will be, and I always wish I had planned for it harder.
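A sketch of why it blows up (hyperparameters and runtimes are made up): three values per knob across four knobs is already 81 runs before you even add seeds.

```python
# Counting the grid before committing to it.
from itertools import product

grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64, 128],
    "dropout": [0.0, 0.1, 0.3],
    "weight_decay": [0.0, 1e-4, 1e-2],
}

configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
print(len(configs))        # 81 runs -- at 30 min each, ~40 GPU-hours per seed
```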
October 11, 2025 at 10:36 PM
Felt cute might get speculated on the reality of how I work later
September 7, 2025 at 7:04 PM
I hope GPT6 supports one hundred bajillion context length so I can finally use one hundred bajillion tokens per prompt like I need to
August 12, 2025 at 9:23 PM
My mind's currently being blown thinking about how much deeper that Uber founder is getting into undiscovered physics with GPT5 now out 🤯
August 10, 2025 at 12:23 AM
pylance: yeah I can autocomplete that import
pylance: I have no idea what you're importing
August 2, 2025 at 6:58 PM
The year is 2026, I have access to my paperclip maximizer, I ask it to eliminate cicadas.
July 31, 2025 at 12:07 AM
Reposted by Breck Emert
Oops I read my parrot a math textbook and now it keeps squawking out the answer to unseen math competitions
July 22, 2025 at 1:09 PM
LLMs can't play chess oh nooooooooooooooooooo
June 29, 2025 at 7:08 PM
This is an extremely good point. LLMs don't have to have human thinking skills to do well; they can match the system of language that has evolved as a close approximation.
youtube.com/shorts/jmhRs...
AI is only speeding up the alienating impact of language itself | Isabel Millar
YouTube video by The Institute of Art and Ideas
June 23, 2025 at 12:27 AM
Reposted by Breck Emert
A clumpy galaxy, possibly merging, observed with the Hubble Space Telescope in the COSMOS survey.

It is at redshift 1.17 (lookback time 8.58 billion years) with coordinates (150.09372, 1.83975).

42 volunteers classified this galaxy in Galaxy Zoo: Hubble.
June 21, 2025 at 6:26 AM