dev.trystops.xyz
@dev.trystops.xyz
I'm not actually sure of the best way to check this. I'll probably have to see tomorrow if the posts from today are still there
April 28, 2025 at 3:06 AM
at some point I’ll need to set up garbage collection to get rid of old posts so data doesn’t accumulate indefinitely. will be automatically deleting stuff older than . . . a week? a month? a quarter? lmk if anyone has strong opinions about this
April 23, 2025 at 1:11 AM
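A minimal sketch of what that retention sweep could look like, assuming the feed indexes posts into SQLite with a `post` table and an `indexedAt` timestamp column (the library, schema, and names here are placeholders, not the project's actual setup):

```ts
// Hypothetical retention sweep, assuming better-sqlite3 and a `post` table
// with an ISO-8601 `indexedAt` column; run it from a daily cron or timer.
import Database from 'better-sqlite3';

const RETENTION_DAYS = 7; // a week? a month? a quarter? tune to taste

function sweepOldPosts(dbPath: string): number {
  const db = new Database(dbPath);
  const cutoff = new Date(
    Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000,
  ).toISOString();
  // Drop everything indexed before the cutoff so the database stays bounded
  // to roughly one retention window of data.
  const result = db.prepare('DELETE FROM post WHERE indexedAt < ?').run(cutoff);
  db.close();
  return result.changes;
}

console.log(`deleted ${sweepOldPosts('feed.db')} old posts`);
```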
Reposted
Hi @jay.bsky.team @bsky.app @support.bsky.team sounds like you need a tech writer.

I’d heavily discount my contracting fee so I can finally have a working feed with my friends 😔😔
April 21, 2025 at 5:45 AM
didn't know it existed, bluesky docs are not super comprehensive. also kind of confusing because the API (jetstream) is not part of the atproto spec unlike everything else I'd worked with up until now
April 21, 2025 at 5:31 AM
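For context, consuming Jetstream looks roughly like this: a sketch using the `ws` package against one of the public Jetstream endpoints, showing only the event fields used here rather than the full schema.

```ts
// Rough sketch: consume post creates from Bluesky's Jetstream websocket.
// Jetstream emits plain JSON events rather than the CBOR/CAR frames of the
// full firehose, which is where the bandwidth savings come from.
import WebSocket from 'ws';

const url =
  'wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post';
const ws = new WebSocket(url);

ws.on('message', (data) => {
  const evt = JSON.parse(data.toString());
  // Only commit events for newly created posts matter for a feed generator.
  if (evt.kind === 'commit' && evt.commit?.operation === 'create') {
    console.log(`${evt.did}: ${evt.commit.record?.text}`);
  }
});

ws.on('error', (err) => console.error('jetstream error', err));
```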
this is about to save me so much money though
April 21, 2025 at 5:19 AM
unlimited mode has a flat $0.05/vCPU-hour fee, which means how "worth it" unlimited mode is depends on (rough math below):
- baseline CPU usage
- your exact instance type
- the cost per hour of the equivalent dedicated cpu instance type
April 20, 2025 at 8:26 PM
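A back-of-the-envelope version of that comparison, using approximate us-east-1 on-demand prices (the exact rates are assumptions that drift over time; the shape of the calculation is the point):

```ts
// Rough break-even math: t4g.medium in unlimited mode vs. a dedicated-CPU
// Graviton instance of similar size. Prices are approximate us-east-1
// on-demand rates; re-check them before trusting the output.
const T4G_MEDIUM_HOURLY = 0.0336;  // ~$/hr, 2 vCPUs, burstable
const UNLIMITED_SURCHARGE = 0.05;  // $/vCPU-hour above baseline (Linux)
const BASELINE_VCPUS = 2 * 0.2;    // t4g.medium baseline: 20% per vCPU
const M6G_LARGE_HOURLY = 0.077;    // ~$/hr, 2 vCPUs, no bursting

// Hourly cost of the burstable box when it sustains `usedVcpus` of CPU.
function t4gUnlimitedHourly(usedVcpus: number): number {
  const surplus = Math.max(0, usedVcpus - BASELINE_VCPUS);
  return T4G_MEDIUM_HOURLY + surplus * UNLIMITED_SURCHARGE;
}

for (const used of [0.4, 0.8, 1.3, 2.0]) {
  const cost = t4gUnlimitedHourly(used);
  const winner = cost <= M6G_LARGE_HOURLY ? 't4g.medium (unlimited)' : 'm6g.large';
  console.log(`${used} sustained vCPUs: $${cost.toFixed(4)}/hr vs $${M6G_LARGE_HOURLY}/hr -> ${winner}`);
}
// With these numbers the crossover is around ~1.3 sustained vCPUs.
```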
you can run into this same problem on burstable ec2 instances (like the t4g.medium I'm benchmarking on). but the baseline CPU you get throttled to depends on the exact instance type, and you have the option to run the instance in "unlimited" mode, where you can just pay for more CPU credits
April 20, 2025 at 8:24 PM
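For the t4g.medium mentioned above, the credit arithmetic works out roughly like this (the per-instance constants are AWS's published T4g numbers; treat them as illustrative):

```ts
// How long a t4g.medium can run flat-out before throttling to baseline.
// One CPU credit = one vCPU at 100% for one minute.
const VCPUS = 2;
const CREDITS_EARNED_PER_HOUR = 24;  // t4g.medium earn rate
const MAX_CREDIT_BALANCE = 576;      // 24 hours of accrual
const burnPerHour = VCPUS * 60;      // 120 credits/hr at 100% on both vCPUs
const netBurnPerHour = burnPerHour - CREDITS_EARNED_PER_HOUR; // 96 credits/hr

const baselinePct = (CREDITS_EARNED_PER_HOUR / burnPerHour) * 100; // 20%
const hoursOfFullBurst = MAX_CREDIT_BALANCE / netBurnPerHour;      // 6 hours

console.log(`baseline: ${baselinePct}% of ${VCPUS} vCPUs (${CREDITS_EARNED_PER_HOUR / 60} sustained vCPUs)`);
console.log(`a full credit balance lasts ~${hoursOfFullBurst}h at 100% CPU`);
// After that, standard mode throttles to baseline; unlimited mode keeps full
// speed and starts billing the surplus vCPU-hours instead.
```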
slightly concerned I am going to have to port this thing to an ec2 graviton instance however
April 20, 2025 at 6:09 AM
down to about 50 minutes of lag. the feed should be working normally by the time I get out of SINNERS (2025) in a couple hours
April 20, 2025 at 1:11 AM
now if I'm right that everything else is working, we're just in a classic lag burndown situation -- might temporarily scale up to a larger instance type to see if that can process faster
April 19, 2025 at 11:36 PM
the cause for that turned out to be cpu throttling on the shared instance type I was using -- I'd get 100% CPU performance for a few minutes at startup, then exhaust credits and fall back down to 1/16th of a vcpu, which would also drop the throughput of the firehose processing
April 19, 2025 at 11:34 PM