simonwillison.net/2025/May/21/...
simonwillison.net/2025/May/21/...
share.descript.com/view/tGXfz0Q...
share.descript.com/view/tGXfz0Q...
Creating magic with LLMs' patience. Personal systems of record. Infinite software. Personal context engines. Emergent collective intelligence. Personal Tech. Swarms of gremlins. Sub-network metastasis.
Creating magic with LLMs' patience. Personal systems of record. Infinite software. Personal context engines. Emergent collective intelligence. Personal Tech. Swarms of gremlins. Sub-network metastasis.
That's why prompt injection is such a hard problem to solve.
That's why prompt injection is such a hard problem to solve.
Prompt injection. Master Control Program. Slopdev. LLMs as amplification algorithms. The limitations of chat-only UI. Coactive software. The context wars. The same origin policy as a human, not natural, law.
Prompt injection. Master Control Program. Slopdev. LLMs as amplification algorithms. The limitations of chat-only UI. Coactive software. The context wars. The same origin policy as a human, not natural, law.
We’ve seen this movie before: new integration tech, huge promise, completely bonkers security assumptions.
We already know how this movie ends.
We’ve seen this movie before: new integration tech, huge promise, completely bonkers security assumptions.
We already know how this movie ends.
But if each particle of software is distributed as an origin/app, then the friction of orchestration dominates the value of the software.
Infinite software in the same origin paradigm doesn't fix aggregation, and might even accelerate it.
But if each particle of software is distributed as an origin/app, then the friction of orchestration dominates the value of the software.
Infinite software in the same origin paradigm doesn't fix aggregation, and might even accelerate it.
This leads to hyper aggregation. That tendency is intrinsic to the same origin paradigm.
This leads to hyper aggregation. That tendency is intrinsic to the same origin paradigm.
LLMs turn any text into potentially executable instructions, exploding the attack surface of traditional security models.
LLMs turn any text into potentially executable instructions, exploding the attack surface of traditional security models.