Eli Tyre
@epistemichope.bsky.social
Searching for a way through the singularity to a humane universe
Claude 3 Opus is more reticent to claim a favorite, but also picks octopus when forced to choose.
July 22, 2025 at 9:26 PM
Or is American culture too opposed to tech to allow the Silicon Valley people to build them?
June 29, 2025 at 6:41 AM
...but superintelligences?
June 29, 2025 at 6:40 AM
Amazing.
June 27, 2025 at 9:20 PM
I know!

Fuck that guy!
June 27, 2025 at 7:24 PM
Or rather, prediction markets are better at forecasting outcomes than polls are, not better than polls at generating original evidence that's relevant to forecasting.

(It's like Wikipedia vs. primary sources)
June 27, 2025 at 7:22 PM
That's why they're better, tho. They're info aggregators, not info generators.
June 27, 2025 at 7:21 PM
I think so!
June 27, 2025 at 7:18 PM
Oh! Local minima of sexual selection.
June 25, 2025 at 6:09 AM
Or do you mean internally, like a human brain is doing adversarial generation?
June 25, 2025 at 6:07 AM
Humans are GAN-like?

Like they're trying to signal and other humans are trying to catch dishonest signaling
June 25, 2025 at 6:07 AM
You mean goodhart on...hedonism that doesn't contribute to fitness?
June 25, 2025 at 6:03 AM
Is it self-preserving?

Isn't it the case that the training procedure now has an incentive to game the "anti-cheating" bias, by finding cheating strategies that look legit?
June 25, 2025 at 6:02 AM
Yes please.
June 25, 2025 at 6:00 AM
Is the best version of your plan for alignment still that unfinished GitHub page that you wrote up after talking with Zvi?
June 25, 2025 at 5:56 AM
I am evidently willing to forgive Yudkowsky-level arrogance.

(Though to be honest, the less correct he seems to be, the less patience I have with him being rude.

I haven't seen you being rude though.)
June 25, 2025 at 5:54 AM
It does sound self-aggrandizing, but whatever, I'll give you a pass on that if it turns out you're right.
June 25, 2025 at 5:51 AM
Is that to say "this would be a totally dumb thing to do from OpenAI's epistemic vantage point regarding alignment, but from my own, I can see that actually the problem is mostly solved"?
June 25, 2025 at 5:48 AM
Forgive me if I compress your view to the nearest caricature available. If I do that, I'm trying to help clarify the diff, not elide crucial details.

Are you saying the old OpenAI Superalignment plan will just work? Make AI scientists, they figure out alignment, then train superintelligences?
June 25, 2025 at 5:43 AM
> but almost no human wants to hear them,

Also, I'm a relatively non-technical idiot, but _I_ at least am trying to figure out what's going to happen and I sure as heck want to hear if we have most of the alignment pieces!
June 25, 2025 at 5:40 AM
Recover them? While being "aligned"?

...like it will be an alignment attractor basin that converges to robust alignment?

Or is "alignment" in quotes because the concept is confused.
June 25, 2025 at 5:37 AM
Also if there's a "special sauce" left to the brain, then it seems more plausible that there's something that the LLM minds can't do economically enough to be relevant.

Which is Steven Byrnes's basic view.
June 25, 2025 at 5:33 AM