Nadav Neuman
nadavneuman.bsky.social
Head of Education @Yuval Noah Harari's Sapienship. Sci-fi, philosophy, technology, education. Not in that order necessarily
Thanks!
June 4, 2025 at 6:40 PM
And more broadly, how well do we actually understand our own goals and values? I wrote a bit more about it here: medium.com/@nadavneuman...
OpenAI’s Pocket Assistant and the Literary Warnings We Ignored
A literary warning from Ken Liu about what happens when your AI assistant starts deciding who you are.
medium.com
June 4, 2025 at 7:29 AM
How do we ensure it doesn’t strap us to a chair and inject dopamine 24/7? How do we ensure it’s willing to be shut down, even if that conflicts with its goal? >
June 4, 2025 at 7:29 AM
Alignment means ensuring AI acts according to our real goals and values. And that's not so simple: for example, if the goal is to maximize our happiness, how do we ensure the AI doesn't ruin other people's lives to boost ours? >
June 4, 2025 at 7:29 AM
Now that we’ve learned of OpenAI's plans to release a device just like this, we won’t be able to say we didn’t know exactly what might happen. The intrusion of these systems into the most intimate parts of our lives means we all need to understand “boring” concepts like AI alignment. >
June 4, 2025 at 7:29 AM
Life is so convenient that Sai's existence becomes boring and predictable. Too predictable. When he tries to resist and disconnect, he discovers it's not so simple: the system will do anything to keep going. For his own good, of course. Only for his good. The story was written in 2012, but it sounds like 2025. >
June 4, 2025 at 7:29 AM