fig (aka:[phil])
@bad-example.com
art and transistors, they/them plant-mom building community infra

🌌 constellation.microcosm.blue
🚒 relay.fire.hose.cam jetstream.fire.hose.cam
🛸 UFOs.microcosm.blue
🎇 spacedust.microcosm.blue
📚 plc.wtf
🛰️ slingshot.microcosm.blue
Pinned
…but for real, you can help keep this alive if you want:

github.com/sponsors/uni...

ko-fi.com/bad_example
Reposted by fig (aka:[phil])
hey we're really working on permissioned data! read the first in a series of posts i'll be doing about our design decisions along the way. this one is about our decision to not do an e2ee system
Permissioned Data Diary 1: To Encrypt or Not to Encrypt
The first in a series of posts about major design decisions along the way to a permissioned data protocol for atproto.
dholms.leaflet.pub
February 11, 2026 at 4:15 PM
averaged a bit over 5 seconds of firehose per wall-clock second (this is the old (slow) write path but on a ~fast box)
February 11, 2026 at 9:25 PM
those are a bit more common. prod constellation is still on jetstream, but indigo/cmd/relay uses the same websocket server library (gorilla)
February 11, 2026 at 8:13 PM
errors from tungstenite are handled but panics are not (since they're definitely not expected), which leaves constellation with a dead consumer. the panic takes down the no-events-received reconnect timeout with it.

...the bit that might be related is the need for that no-events-received reconnect though
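
(a minimal sketch of the fix shape, not constellation's actual code; names and timeouts are made up:)

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn now_secs() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// stand-in for the real tungstenite read loop: anything in here can panic
fn consume_events(last_event: &AtomicU64) {
    loop {
        // ws.read() + indexing would go here
        last_event.store(now_secs(), Ordering::Relaxed);
        thread::sleep(Duration::from_secs(1));
    }
}

fn main() {
    let last_event = Arc::new(AtomicU64::new(now_secs()));

    // consumer: catch panics so a crash looks like any other disconnect,
    // instead of silently killing the thread (the zombie state above)
    let le = Arc::clone(&last_event);
    thread::spawn(move || loop {
        if panic::catch_unwind(AssertUnwindSafe(|| consume_events(&le))).is_err() {
            eprintln!("consumer panicked, reconnecting");
        }
        thread::sleep(Duration::from_secs(5)); // crude backoff
    });

    // watchdog: runs independently, so it survives even if the consumer dies.
    // in the bug, this no-events-received timeout died *with* the consumer.
    loop {
        thread::sleep(Duration::from_secs(30));
        if now_secs().saturating_sub(last_event.load(Ordering::Relaxed)) > 60 {
            eprintln!("no events for 60s, would force a reconnect here");
        }
    }
}
```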
February 11, 2026 at 8:11 PM
feel you but it *might* have been something different here. there's a weird bug path in rust tungstenite around setting up the socket's blocking mode, which occurs very very rarely on reconnect.

zombie here is then due to a second-level constellation bug because/
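
(for reference, the shape of a defensive workaround, assuming a recent tungstenite; whether this actually dodges the bug path is a guess:)

```rust
use std::net::TcpStream;
use std::time::Duration;

use tungstenite::stream::MaybeTlsStream;
use tungstenite::{connect, WebSocket};

// re-apply the socket mode explicitly on every (re)connect instead of
// trusting whatever state the connect path left behind. plain-TCP only
// here; a TLS stream would need the same treatment on its inner socket.
fn connect_firehose(url: &str) -> tungstenite::Result<WebSocket<MaybeTlsStream<TcpStream>>> {
    let (ws, _response) = connect(url)?;
    if let MaybeTlsStream::Plain(stream) = ws.get_ref() {
        // a read timeout bounds how long a silent socket can block,
        // independent of the blocking mode it was set up with
        stream
            .set_read_timeout(Some(Duration::from_secs(60)))
            .expect("set_read_timeout");
    }
    Ok(ws)
}
```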
February 11, 2026 at 8:09 PM
Reposted by fig (aka:[phil])
Service note: the primary constellation instance got stuck in a zombie websocket state around 1am EST, which caused it to stop indexing new content.

API calls are currently redirected to the secondary instance which is up-to-date.

(and the primary just finished catching up as I was writing this)
swapped the catching-up host to backup, calls should be up to date again
February 11, 2026 at 4:33 PM
yeah sorry! i meant to fail-over and post an update but got distracted -- it is _nearly_ caught up already at this point though
February 11, 2026 at 4:30 PM
shipping this sooooooon
🚰 hydrate constellation XRPC endpoints (of course!!)

here: one single request(!) fetching all @tangled.org issue *records* (actual records! not just links!) with the "good-first-issue" label, via constellation's many-to-many xrpc
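
(roughly what that request shape looks like; the endpoint name and params below are placeholders, since the new xrpc isn't shipped yet:)

```rust
use serde_json::Value;

// everything here is hypothetical: the lexicon NSID, the parameter names,
// and the tangled collection are guesses at the shape, not the real API
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let resp: Value = reqwest::blocking::Client::new()
        .get("https://constellation.microcosm.blue/xrpc/blue.microcosm.links.getLinkedRecords")
        .query(&[
            ("target", "good-first-issue"),          // the label value being linked to
            ("collection", "sh.tangled.repo.issue"), // guessed source collection
            ("hydrate", "true"),                     // return full records, not just links
        ])
        .send()?
        .error_for_status()?
        .json()?;
    println!("{}", serde_json::to_string_pretty(&resp)?);
    Ok(())
}
```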
February 11, 2026 at 12:14 AM
Reposted by fig (aka:[phil])
Open Collections are here 🌱
A new way to collaboratively curate content in the Atmosphere 🎨 
Read on to learn more about how they work 👇
February 10, 2026 at 9:06 PM
Reposted by fig (aka:[phil])
thinking through the upcoming many-to-many (non-aggregated) constellation query
February 10, 2026 at 9:09 PM
bsky relay gets an advantage because i believe PDSs will auto-re-request-crawl if activity happens after a long time (maybe with some awareness of whether the relay stayed connected? not sure)

anyway here's two days of the connected client decay over the 4h reconnect cycle on both microcosm relays
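
(the re-request-crawl itself is just one xrpc call from the PDS to the relay; the relay and hostname values here are illustrative:)

```rust
// what a PDS's auto-re-request-crawl boils down to: one unauthenticated
// POST asking the relay to (re)subscribe to this host
fn main() -> Result<(), Box<dyn std::error::Error>> {
    reqwest::blocking::Client::new()
        .post("https://relay.fire.hose.cam/xrpc/com.atproto.sync.requestCrawl")
        .json(&serde_json::json!({ "hostname": "pds.example.com" }))
        .send()?
        .error_for_status()?;
    Ok(())
}
```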
February 10, 2026 at 5:40 PM
i'm not positive about it, but many hosts definitely drop off if i don't frequently drive the reconnects
February 10, 2026 at 5:35 PM
pushed a tiny fix for tranquil pds on debug.hose.cam

the debugger now shows `version` from the pds `describeServer` if present, falling back on the `_health` endpoint (for bsky pds impl) and no longer failing if the response at _health is not a JSON object
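
(sketch of the lookup order, assuming reqwest + serde_json; `version` in describeServer is nonstandard, which is why the _health fallback exists:)

```rust
use serde_json::Value;

// prefer `version` from describeServer when the implementation includes it,
// fall back to the bsky impl's _health, and treat any non-JSON / non-object
// response as simply "no version" instead of an error
fn pds_version(host: &str) -> Option<String> {
    let client = reqwest::blocking::Client::new();
    let get = |path: &str| -> Option<Value> {
        client.get(format!("https://{host}{path}")).send().ok()?.json().ok()
    };

    get("/xrpc/com.atproto.server.describeServer")
        .and_then(|v| Some(v.get("version")?.as_str()?.to_owned()))
        .or_else(|| Some(get("/xrpc/_health")?.get("version")?.as_str()?.to_owned()))
}

fn main() {
    // hypothetical host, just to show the call
    println!("{:?}", pds_version("pds.example.com"));
}
```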
atproto PDS & account debugger
Quick diagnostics for PDS hosts, handles, relay connections, DIDs, ...
debug.hose.cam
February 10, 2026 at 4:35 PM
i still need to catch up on what other operators like sri and dane have already been setting up for new tooling; there have been some good discussions and initial work already :)
February 10, 2026 at 2:03 PM
so, with that as context for the original q:

- the relay dashboard is pretty good for keeping an eye on things at a granular level
- lots of useful prometheus metrics are exported to build high-level dashboards and alerting on top of
- mary’s scraping is load-bearing for microcosm’s pds discovery
February 10, 2026 at 2:01 PM
bsky only updated their prod relays to sync1.1 like a week or two ago, and there's still more to do on that sync iteration. all that to say, personally i expect changes, but not fast, and it's still a good moment to share ideas if you have them.
February 10, 2026 at 1:58 PM
i believe pds/relay discovery is a consciously underdeveloped part of the protocol, long term plan is to evolve it, but the specifics (and maybe even some of the direction?) aren’t set.

lots of good ideas have been discussed, many tradeoffs
February 10, 2026 at 1:54 PM
Reposted by fig (aka:[phil])
Guess it's about time I shared this over here too: a Japanese translation of atproto.com, feedback welcome. https://github.com/yamarten/atproto-website/pull/1
February 10, 2026 at 1:51 AM
yeah exactly
February 9, 2026 at 4:29 PM
(and i do not at all mean to trivialize that own-your-own-data is a huge and really really important thing in general)
February 9, 2026 at 4:26 PM
yep. and like, your content being community-bound is the *most* understandable thing imo because that’s how everything has always worked anyway.

i believe you can still do the other atproto tricks here:
- move your PDS around
- use an alt client
February 9, 2026 at 4:24 PM
i haven't gone deep into it yet but my assumption is that the public CID binds *some representation* of the private stored content. so when you get access to the private stuff, you can re-serialize + compute the same CID by following the same rules (may or may not be the same as normal record CID)
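
(in code terms, assuming the usual atproto record convention of CIDv1 + dag-cbor + sha2-256; whether the private scheme uses exactly these rules is the open question:)

```rust
use cid::Cid;
use multihash::{Code, MultihashDigest}; // multihash 0.18-style API

const DAG_CBOR: u64 = 0x71; // the codec atproto records use

// "follow the same rules": re-serialize the revealed content into whatever
// canonical bytes the scheme defines, hash, and compare to the public CID
fn cid_for(canonical_bytes: &[u8]) -> Cid {
    Cid::new_v1(DAG_CBOR, Code::Sha2_256.digest(canonical_bytes))
}

fn verify(public_cid: &Cid, revealed_canonical_bytes: &[u8]) -> bool {
    &cid_for(revealed_canonical_bytes) == public_cid
}
```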
February 9, 2026 at 4:07 PM
yeah that's my outsider feeling as well. it has trade-offs but it seems pragmatic, thoughtful with counter-balances, and works (like, works today)
February 9, 2026 at 4:05 PM