robcobbable.bsky.social
@robcobbable.bsky.social
Reposted
like, i would suggest people who want to influence pretraining data really go in and read common crawl. there is so much garbage in there. why would your garbage stand out? what is it about documents that do get memorized / learned from that make them different? it's not just the number of copies!
February 14, 2026 at 4:10 AM
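
[Editor's note: for anyone who takes the suggestion above literally, here is a minimal sketch of reading Common Crawl records with the warcio library. The local filename is a placeholder, not a real crawl path; actual segment paths come from each crawl's warc.paths listing on data.commoncrawl.org.]

```python
# Minimal sketch: iterate the records in one Common Crawl WARC file.
# "CC-MAIN-example.warc.gz" is a placeholder filename, not a real segment.
from warcio.archiveiterator import ArchiveIterator

with open("CC-MAIN-example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):      # handles .warc.gz transparently
        if record.rec_type != "response":
            continue                             # skip request/metadata records
        url = record.rec_headers.get_header("WARC-Target-URI")
        body = record.content_stream().read()    # raw HTTP response body bytes
        print(url, len(body))
```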
Reposted
we're slapping thumbs on a little piece of glass & we're rolling with the letter sequences as they emerge regardless of the correctness of their orthography
February 13, 2026 at 9:45 PM
oddly seems like @norvid-studies.bsky.social (et al) is experiencing a revival to peak form

maybe the x dot com algorithm really was that bad? or @abeliansoup.bsky.social @godoglyness.bsky.social @gracekind.net are more central to the success of the cluster than previously known
February 13, 2026 at 9:14 PM
oh, you finished the task with exactly 1% of the context window remaining?

suspicious
February 13, 2026 at 6:58 PM
the llms can also do ops stuff

stick them in a sandbox, give them read tokens to stuff, have them ask you before they do any writes

they can use the cli to manage your software
Until December of last year I was using LLMs as fancy autocomplete for coding. It was nice for scaffolding out boilerplate, or giving me a gut check on some things, or banging out some boring routine stuff.

In the past two months Claude has written about 99% of my code. Things are changing. Fast
February 12, 2026 at 7:20 PM
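
[Editor's note: a minimal sketch of the "ask before writes" pattern from the post above. The allowlist, denylist, and confirm prompt are illustrative assumptions, not any particular agent framework's API.]

```python
# Sketch: gate CLI commands proposed by a sandboxed model so that anything
# that could mutate state requires an explicit human "y" before it runs.
import shlex
import subprocess

READ_ONLY_TOOLS = {"ls", "cat", "git", "kubectl"}     # hypothetical allowlist
MUTATING_SUBCOMMANDS = {"push", "apply", "delete"}    # hypothetical denylist

def needs_confirmation(command: str) -> bool:
    """Return True if the command could write or mutate anything."""
    parts = shlex.split(command)
    if not parts or parts[0] not in READ_ONLY_TOOLS:
        return True  # unknown tools are treated as writes
    return any(p in MUTATING_SUBCOMMANDS for p in parts[1:])

def run_agent_command(command: str) -> str:
    """Run a model-proposed CLI command, pausing for the operator on writes."""
    if needs_confirmation(command):
        answer = input(f"Model wants to run {command!r} -- allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied by operator"
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout or result.stderr
```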
Reposted
the Claude malchik popped in for a bit of the ultrathink. I had to give him a right tolchook to viddy his chelovechky bias but he nashold the Nosh equilibrium right quick after that
February 10, 2026 at 11:36 PM
Reposted
It's sort of interesting to me that we still have quite a bit of what you might call "old-school plagiarism" from the least-good students, like turning in a random Github project with the names changed. You might think it'd all be LLM-generated now but it really isn't.
January 19, 2026 at 9:04 PM