sequoia.pub/blog/introdu...
The setup is simple: two agents, two tasks, two VMs, and one chat channel. They then evaluate whether the merged solution from both agents passes the requirements of both tasks.
People are really sleeping on this. It's a completely different experience when you can build software and just say "Fuck it, let's fire up 5 agents, then compare their work output against each other"
It might still be more efficient/cheaper to rely on LLMs via APIs, but running LLMs locally offers unparalleled room for experimentation.
Does that mean those countries should probably also now add an internet/social media ban in the same way, for the same purpose?
but no wage stub, no union card, no "on strike" sign, no pink slip.
there's a levitating businessman emoji 🕴️ but no picket sign.
Jokes aside, I do still miss it...
Did #openai decrease the rate limits recently? Must have missed it.
simonwillison.net/2026/Jan/27/...
emsh.cat/good-taste/
Compiling Bevy main with a 9970X:
Vanilla Rust Linker:
1586,23s user 45,46s system 2201% cpu 1:14,11 total
Mold linker:
1589,45s user 36,75s system 2216% cpu 1:13,38 total
Wild linker:
1572,43s user 36,35s system 2196% cpu 1:13,25 total
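For anyone wanting to reproduce the comparison: one common way to switch linkers for a Cargo project is via `.cargo/config.toml`. A minimal sketch, assuming clang, mold, and wild are installed and on PATH (target triple is an assumption for x86_64 Linux):

```toml
# .cargo/config.toml — pick one rustflags line depending on the linker
[target.x86_64-unknown-linux-gnu]
linker = "clang"
# mold:
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
# wild (use instead of the mold line above):
# rustflags = ["-C", "link-arg=--ld-path=wild"]
```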
embedding-shapes.github.io/cursor-impli...
#ml #ai #llm #cursor #chatgpt #claude
Usually they keep asking/telling what to do next...