Benjamin Rahn
@brahn.bsky.social
Dad. Product engineer. Ex-Stripe.
Co-founded ActBlue. Erstwhile physicist.
Reposted by Benjamin Rahn
Marjorie Taylor Greene is not suddenly “reasonable,” “moderate” or “brave.”

She’s just a fast rat fleeing a sinking ship, fighting for her political life while seeking a free pass for horrific past behavior and continued racist, antisemitic, anti-Muslim, antigay and anti-immigrant beliefs.

Say no.
A reminder not to celebrate Marjorie Taylor Greene even if she’s damaging Trump. Let them fight, and let them both burn. Save your praise for people who deserve it.
Marjorie Taylor Greene to Dana Bash: "You should have Nick Fuentes on your show"
November 16, 2025 at 7:11 PM
I’m assuming that’s part of the October 2026 strategy
is he going to send out fucking checks with his signature on them again
November 10, 2025 at 8:28 AM
November 9, 2025 at 5:53 AM
Reposted by Benjamin Rahn
repeat that last line to yourself. this is enormously important.

"no longer would politics would be something that is done to us. now it would be something that WE DO"

ACTION. that WE DO. this is all real; this is all possible; this is all essential. let's DO THIS.
November 5, 2025 at 4:31 AM
In case it's helpful: keenrogue.com seems to be available
November 3, 2025 at 10:32 PM
The in-our-face failure modes -- e.g. relying on current LLMs to produce reliable legal citations, yikes! -- arise from the misunderstandings you're very rightly pointing out about what the tech does and doesn't do.
October 28, 2025 at 7:49 PM
Also, more generally: there's lots of technology and process that boils down to "how do we get a sufficiently reliable result out of a mix of less-than-sufficiently-reliable components?"
October 28, 2025 at 7:49 PM
E.g. for coding I'm using my own code review, manual and automated tests, etc. For research, any reported facts/citations require manual confirmation.

if you haven't already seen it, @anildash.com has a nice writeup www.anildash.com/2025/10/17/t...
October 28, 2025 at 7:43 PM
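(A minimal sketch of the verification loop described in that post, in Python. The helper names here, run_test_suite and accept_generated_code, are illustrative assumptions, not anyone's actual tooling; the point is just that LLM-generated code is treated as unreliable until the project's automated tests pass, and even then it still goes to human code review.)

```python
# Sketch: accept LLM-generated code only if the automated test suite still passes.
# All names here are hypothetical; pytest is assumed as the test runner.

import subprocess
from pathlib import Path


def run_test_suite(project_dir: Path) -> bool:
    """Run the project's automated tests; return True only if they all pass."""
    result = subprocess.run(
        ["pytest", "-q"],
        cwd=project_dir,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


def accept_generated_code(generated_source: str, target: Path, project_dir: Path) -> bool:
    """Write LLM-generated code to `target`, keep it only if the tests pass."""
    original = target.read_text() if target.exists() else None
    target.write_text(generated_source)

    if run_test_suite(project_dir):
        return True  # still subject to human code review before merging

    # Tests failed: roll back the unreliable output.
    if original is None:
        target.unlink()
    else:
        target.write_text(original)
    return False
```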
Agree that LLMs should not be relied on for "reliably true output". That said, I've been pleased with results in both coding and research/writing by making exactly that assumption -- and using a process of verification to create a reliable result out of less-than-fully reliable parts.
October 28, 2025 at 7:37 PM