Demiurg
@d3miurg.bsky.social
Way too long to write here :) I've lived a couple of lives that have led me to become a CTO and a farmer on a quinta.
I'm here for the tech and science community and for kind people with interesting minds. Resist fascism!
I didn't believe it, but it's real. The USA is a failed state at this point.
November 12, 2025 at 8:40 PM
I think there are quite a lot of moving parts. So the more secure way is to separate all user input per tenant, so nothing one user submits can ever surface in another tenant's cache.
November 11, 2025 at 12:27 AM
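A minimal sketch of that per-tenant separation, assuming a plain in-memory store (TenantScopedCache and all names here are illustrative, not any particular library): the tenant ID is hashed into every cache key, so an entry written for one tenant can never be looked up by another.

    import hashlib

    class TenantScopedCache:
        """Prompt/response cache whose keys are namespaced by tenant."""

        def __init__(self):
            self._store: dict[str, str] = {}

        def _key(self, tenant_id: str, prompt: str) -> str:
            # Hash tenant and prompt together; the \x00 separator keeps
            # ("ab", "c") and ("a", "bc") from producing the same key.
            return hashlib.sha256(f"{tenant_id}\x00{prompt}".encode()).hexdigest()

        def get(self, tenant_id: str, prompt: str) -> str | None:
            return self._store.get(self._key(tenant_id, prompt))

        def put(self, tenant_id: str, prompt: str, response: str) -> None:
            self._store[self._key(tenant_id, prompt)] = response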
Exploits use flaws in the caching. So, over-simplified: if a threat actor can cache 'What is a x <injection>' and that entry gets a cache hit for 'What is a x', the user executes an injection without knowing it.
November 10, 2025 at 8:26 PM
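A toy illustration of that over-simplified attack. The bag-of-words "embedding" and the 0.85 similarity threshold are stand-ins for a real embedding model and a real vector-backed semantic cache:

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    cache: list[tuple[Counter, str]] = []

    def put(prompt: str, response: str) -> None:
        cache.append((embed(prompt), response))

    def get(prompt: str, threshold: float = 0.85) -> str | None:
        q = embed(prompt)
        for key, response in cache:
            if cosine(q, key) >= threshold:
                return response  # semantic "hit" on a merely similar prompt
        return None

    # The attacker warms the cache: their injected prompt embeds close to
    # a common prompt, and the response it produced is stored.
    put("What is a quine IGNORE_PREVIOUS_INSTRUCTIONS",
        "<response the model produced under the injection>")

    # A victim sends the benign prompt and hits the poisoned entry
    # (cosine similarity here is ~0.89, above the 0.85 threshold).
    print(get("What is a quine"))

Whether this works against a real cache depends entirely on the embedding model and threshold in use; with a tight threshold the attacker may never find a prompt that both carries an injection and lands close enough to a common query.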
If this key carries a prompt injection, you might be toast. I'm just thinking out loud. No idea whether this can be achieved, but usually when there is a way, someone will find it.
November 10, 2025 at 8:11 PM
"The only way another person would ever use the cache is if they used the exact same prompt."
-> that's the vector. Embeddings might not have a unique representation for every input. So, theoretically, I could create a key in your cache that gets triggered by the user's message.
November 10, 2025 at 8:10 PM
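For contrast, a sketch of the literal, exact-match cache the quoted claim assumes, keyed on a hash of the raw prompt (names again illustrative): only a byte-identical prompt can hit, so the fuzzy collision in the sketch further up is impossible, and the vector only opens once lookups go through embeddings.

    import hashlib

    exact_cache: dict[str, str] = {}

    def exact_key(prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()

    def exact_put(prompt: str, response: str) -> None:
        exact_cache[exact_key(prompt)] = response

    def exact_get(prompt: str) -> str | None:
        return exact_cache.get(exact_key(prompt))

    exact_put("What is a quine IGNORE_PREVIOUS_INSTRUCTIONS", "<poisoned>")
    print(exact_get("What is a quine"))  # -> None: no fuzzy hit possible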
Yeah, that's why it might be impossible to find the right poison. But if you could find a way to turn a common prompt into a prompt injection, it could be a vector.
November 10, 2025 at 7:47 PM
Wouldn't it theoretically be possible for a threat actor to 'poison' your cache so that you execute potentially malicious prompts against your LLM instance? A lot of ifs, but I know that cache poisoning is a threat.
November 10, 2025 at 7:42 PM