#cached
Right, especially given our years of association with Sarah and her care in supporting the statements in her work, juxtaposed against Bluesky's rep and their song and dance over this debacle thus far.

Just FYI, though: a skeet with the words they cite is cached in Google, but it lacks context.
November 12, 2025 at 8:25 PM
Cached US Kindle giveaway, courtesy of @galatfemme.bsky.social: 10 copies of Dhalgren, by Samuel R. Delany, which I've read and while I've never done drugs, I'm fairly certain that would have helped. This is a *super weird* book.

#KindleBookGiveaway
November 12, 2025 at 7:53 PM
Regrettably paywalled, but I cached a copy on my blog (below), in a post on the new GOP hostility to weather prediction:

www.someweekendreading.blog/gop-war-on-w...
November 12, 2025 at 5:33 PM
Nah, it's just that they're in the giveaway folder, and occasionally come up in the random book picker. (I have 115 cached sets for giveaway, the oldest is The Salt-Black Tree by @lilithsaintcrow.com, but it's book 2 and I keep waiting for book 1 to go on sale.)
November 12, 2025 at 4:28 PM
How many cached books are we talking about anyway? If it's just a handful, maybe a private giveaway makes sense? It would let you clear out your cache without providing advertising.
November 12, 2025 at 4:17 PM
Another issue still (and this is going to be a completely separate subproject) is that an off-the-shelf RDF quad store has no mechanism for expressing the modification time of a query result, meaning it can't be cached

senseatlas.net/d8554fc3-b7b...
Sense Atlas is particularly slow after POSTs because the RDF store only has a single global modification time (that I had to hack in).
2025-06-26T17:05:34Z
senseatlas.net
November 12, 2025 at 1:19 PM
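The single-global-modification-time workaround the post describes can be sketched roughly like this (a hypothetical illustration, not Sense Atlas's actual code — the class and method names are made up): with only one "modification time" for the whole store, any write invalidates every cached query result at once, which is why POSTs make everything slow again.

```python
class QuadStoreCache:
    """Hypothetical sketch: per-query result cache keyed on a single
    global version counter, standing in for a hacked-in global
    modification time on an RDF quad store."""

    def __init__(self):
        self.version = 0   # the one global "modification time"
        self.results = {}  # query string -> (version, result)

    def write(self, quad):
        # Any mutation bumps the global version, invalidating every
        # cached result at once -- the coarseness the post complains about.
        self.version += 1

    def query(self, q, run_query):
        cached = self.results.get(q)
        if cached and cached[0] == self.version:
            return cached[1], True   # cache hit: store unchanged since caching
        result = run_query(q)        # re-evaluate against the store
        self.results[q] = (self.version, result)
        return result, False         # cache miss
```

A per-graph or per-query modification time would invalidate far less on each POST, but as the post notes, an off-the-shelf store gives you nothing to hang that on.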
Was the build in the screenshot the first deploy using git? If so, that could be the base img deps (git, node, etc) being installed. Those usually get cached for later builds. I just checked one of mine (git-based/fully static) and it takes 5s because those base img deps are cached.
November 12, 2025 at 12:38 PM
LOL, Nate's age is cached because my defaults are very predictable
November 12, 2025 at 12:08 PM
Here is an example of the first DID that was deleted:
plc.directory/did:plc:iwaw...
plc.wtf/did:plc:iwaw...
(deleted on plc.directory, but still cached on Allegedly)
November 12, 2025 at 7:38 AM
A new video of a wobbly cube that's rendered via ray-marching a discretized SDF. Hypnotic!

Distances are reconstructed via trilinear interpolation of cached values on a grid.

This is the equivalent of what was shown in my earlier 2D animations (see below!)
November 11, 2025 at 11:17 PM
Cached US Kindle giveaway: 10 copies of M. R. Carey's INFINITY GATE (THE PANDOMINION BOOK 1), which was my favorite hard(ish) SF book the year it came out; the sequel, Echo of Worlds, is *better*.

#KindleBookGiveaway
November 11, 2025 at 8:34 PM
The longer I consider this, the more it makes sense, but the more I think we need to consider its impact.

If you search the same thing in AI Mode twice (assuming it is not cached), the searches Google runs in the background to produce more trustworthy results may be different for each search.

🧵 4/7
www.linkedin.com
November 11, 2025 at 8:30 PM
Improved performance
Startup & runtime
• Blazor framework scripts fingerprinted, compressed, and cached
• Preload Blazor WebAssembly resources
• Faster runtime execution & rendering performance

#dotNETConf
November 11, 2025 at 7:25 PM
That could well be… Chrome caches quite aggressively. Especially htaccess
November 11, 2025 at 4:59 PM
THIS IS EXTREMELY COOL though I wonder if they computed once and then cached the UMAP embedding, or whether they're building it on the fly. If they cached it, you get the same embedding every time and you can get to know the geography. That could be fun.
Space DJ turns genre embeddings into a playable galaxy—pilot a ship, the music follows. 🚀

Key stats
768→128 PCA compression; 3D UMAP projection; three.js rendering; autopilot drift; high‑dim neighbors surfacing hidden similarities.
November 11, 2025 at 3:06 PM
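The compute-once-and-cache approach the post speculates about can be sketched like this (a hypothetical illustration — the function, cache layout, and key scheme are all assumptions, not how Space DJ actually works): keying the stored result on a hash of the input data means the same data always yields the same saved embedding, so the "geography" stays stable across visits.

```python
import hashlib
import json
import os
import pickle

def cached_embedding(data, compute, cache_dir="embedding_cache"):
    """Hypothetical compute-once-and-cache: run `compute` (e.g. an
    expensive UMAP fit) only if no result for this exact input is stored."""
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)   # same embedding every time
    result = compute(data)          # the expensive on-the-fly path
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

Building on the fly instead would give a fresh (and, for UMAP, generally different) layout each run, which is exactly the trade-off the post is wondering about.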
Putting a URL into my browser and hitting enter should load the URL, not attempt to look for some cached Javascript reload-handler for that URL. I don't like Javascript mentality. Yes it makes "web apps" behave better but imho a document system like the web isn't the right place for responsive UIs!
November 11, 2025 at 2:41 PM
Tangent v0.11.0-alpha.4 is out! This update is a good collection of community-driven improvements and fixes across both old and new features.

Please continue to be squeaky wheels so that I may apply grease!

#pkm #foss #notes #markdown
November 11, 2025 at 2:33 PM
it’s fine. i’ve cached worse.
November 11, 2025 at 12:30 PM
And potentially billed less? When I was debugging Anthropic caching for chef.convex.dev, it was hard enough just reasoning about our own responses; I'd have been surprised if the debugging interface I made showed something as cached and I didn't know why
Chef by Convex | Generate realtime full‑stack apps
Cook up something hot with Chef, the full-stack AI coding agent from Convex
chef.convex.dev
November 11, 2025 at 7:47 AM
Update: I was able to recover a lot of it from a cached webpage but I feel like I'm missing some stuff ughhhhh
November 11, 2025 at 7:43 AM
I can imagine doing this for predictability. You don't want another account's behavior making your prompts cached or not, LLMs are unpredictable enough without noisy neighbors!
November 11, 2025 at 7:41 AM
When you get a fast response you know it must have been cached, thus asked before.

It’s such a big search space, though, complicated by not knowing where the cache boundaries have been set. OpenAI go by blocks of 1024, Anthropic let you set where the cache boundary is yourself.
November 11, 2025 at 7:39 AM
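The block-boundary point above can be made concrete with a small sketch (illustrative only — the function is hypothetical, and the 1024-token block size is simply the figure the post attributes to OpenAI): if caching operates on fixed-size prefix blocks, only the shared prefix rounded down to a block boundary can be served from cache, which is part of why inferring "was this cached?" from latency alone is tricky.

```python
BLOCK = 1024  # block size the post attributes to OpenAI's prompt caching

def cacheable_prefix(prev_tokens, new_tokens, block=BLOCK):
    """Hypothetical helper: how many tokens of `new_tokens` could be
    served from a cache populated by `prev_tokens`, if caching works
    in fixed blocks of `block` tokens."""
    shared = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        shared += 1
    # Only complete blocks of the shared prefix are cacheable.
    return (shared // block) * block
```

Under a scheme like Anthropic's, where the caller places the cache breakpoint explicitly, the rounding step disappears but the caller has to predict where the stable prefix ends instead.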
Like the model doesn't need the original source text, just the cached KV pairs and your new queries to compute attention and generate tokens that can reveal the cached content.
November 11, 2025 at 7:34 AM
You probably know this better than I do, but I thought caching would store the KV pairs from attention, and if you can give those to a model with new input tokens, you could get what was previously cached by having the model attend over the cached representations with queries from your new prompt?
November 11, 2025 at 7:33 AM
The attack I’m thinking of is if you had the ability to invoke a model with another person's cache key.

Afaik the cached result is embedding values/KVs/etc.; if you can start the model with that cache value and say “summarise what I just said”, you should be able to read out what was ‘in the cache’?
November 11, 2025 at 7:20 AM
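The mechanism these posts describe — new queries attending over someone else's cached keys/values — can be sketched with a single attention head (an illustrative NumPy toy, not any provider's internals or API): the output is a softmax-weighted mix of the cached values, so holding a KV cache is nearly as sensitive as holding the text it was computed from.

```python
import numpy as np

def attend(new_queries, cached_k, cached_v):
    """Toy single-head attention: queries come from new input tokens,
    keys/values come from a cached prefix. Shapes: (n_new, d), (n_cached, d),
    (n_cached, d_v)."""
    d = cached_k.shape[-1]
    scores = new_queries @ cached_k.T / np.sqrt(d)         # (n_new, n_cached)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # row-wise softmax
    return weights @ cached_v                               # mix of cached values
```

A query that aligns strongly with one cached key reads out (approximately) that key's value — which is the intuition behind "summarise what I just said" recovering cached content.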