nateberkopec.bsky.social
@nateberkopec.bsky.social
Pinned
Everyone on X is telling me that actually there's tons of interesting AI conversations happening here on Bluesky. Guess I need to retrain my algorithm because I'm not seeing anything.
In 2026, we're going to see coding models either expand or get rebranded into "tool use" or "computer use" models.
January 30, 2026 at 5:04 PM
looks like a tiktok
I am less knee-jerk “won’t watch anything with AI” than this site tends to be but this objectively looks terrible on screen. Just really bad. No thanks.
First trailer for Darren Aronofsky's new AI animated series 'On This Day... 1776'

• Tells short narrative stories about the Revolutionary War

• Uses Gen AI tools, including tech made by Google DeepMind

• Has SAG voice actors
January 29, 2026 at 7:58 PM
LLMs do not meaningfully "refactor" at anything other than a junior engineering level. They can basically do some window dressing and move code around between files. True refactoring means creating new abstractions, which LLMs can't do because they can't form world-models.
January 29, 2026 at 4:57 PM
The biggest "stink" of LLM code in Ruby currently is how it writes loops like it's writing C: set up lots of blank/empty variables, loop with `each`, update the variables inside the loop.

Use Enumerable! Matz gave us this gift for a reason!
January 28, 2026 at 5:04 PM
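A minimal sketch of the contrast (data and variable names are made up for illustration):

```ruby
# The C-style shape LLMs tend to emit: blank accumulator, mutate inside each
evens = []
[1, 2, 3, 4, 5].each do |n|
  evens << n * 10 if n.even?
end

# The Enumerable version: same result, no scratch variables to track
evens = [1, 2, 3, 4, 5].select(&:even?).map { |n| n * 10 }
# => [20, 40]
```

The second form reads as a pipeline of intent (filter, then transform) rather than bookkeeping.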
Reposted
Getting tired of hearing "There are literally no use cases for LLMs" from people who think the only way to use an LLM is to type into chatgpt dot com, press enter, and hope for the best
January 27, 2026 at 8:39 PM
I think a major wave of self-retirement is coming. A lot of people were already disillusioned with the tech industry post-vibe-shift, and those same people are also the most likely to dislike LLMs for various reasons.
Bro, I might just self-cull and wander off into the woods.
January 27, 2026 at 11:35 PM
The salary of the top 10% of designers and product people is going to 2x in 2026.
January 27, 2026 at 9:18 PM
Thank you to everyone who replied to this. Now I'm following a bunch of people talking about their alchemy MCPs, which, honestly, was exactly the kind of interesting I was looking for from this userbase.
Everyone on X is telling me that actually there's tons of interesting AI conversations happening here on Bluesky. Guess I need to retrain my algorithm because I'm not seeing anything.
January 27, 2026 at 10:05 AM
Reposted
Tooling for coding agents is overly focused on scaling numerous parallel workers instead of ensuring correctness. That continues to be where all my time goes and is the real barrier to scaling up. (e.g., Why am I exploratory testing this UI when vision models could be doing it?)
January 27, 2026 at 1:26 AM
Tell me why I shouldn't preorder a Pebble watch + ring _right now_.
January 16, 2026 at 8:46 PM
Everyone on X is telling me that actually there's tons of interesting AI conversations happening here on Bluesky. Guess I need to retrain my algorithm because I'm not seeing anything.
January 16, 2026 at 8:25 PM
The greatest trick the devil ever pulled was to convince you that you needed a three-year reserved instance
January 15, 2026 at 4:57 PM
Ruby was born a hobbyist's language. Written for a human to enjoy, to write, to delight in.

Now, I get paid to write Markdown, not Ruby. I ride the LLM-Shoggoth. I don't read the Ruby it creates; Claude does.

Ruby will die a hobbyist's language: written by humans, for fun.
January 14, 2026 at 4:59 PM
Reposted
Ruby isn't dying, it is already dead. So is every other language. Rejoice, you have been liberated! You no longer write Ruby for The Man, but yourself! Reclaim the means of production as the means of amusement! Ruby was created to make you happy, not the machine. Wrest back your joy!
January 13, 2026 at 9:38 PM
Literally everyone scaling on Postgres eventually hits SLRU cache issues. For that reason I am now saying PG17+ is a must-upgrade, as the changes around cache banking/SLRU configuration are your only real workarounds.
January 13, 2026 at 4:56 PM
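For reference, a hedged sketch of the knobs involved (GUC names are from the PostgreSQL 17 release notes; the values below are purely illustrative, not recommendations — the default of 0 auto-sizes each pool from shared_buffers):

```
# postgresql.conf (PostgreSQL 17+): SLRU buffer pools are now configurable
transaction_buffers = 256        # the SLRU formerly hard-coded as "CLOG"
subtransaction_buffers = 256
multixact_offset_buffers = 32
multixact_member_buffers = 64
```

Before PG17 these pool sizes were compile-time constants, which is why there was no real workaround short of upgrading.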
Looking for a designer for a small project (40-80h) this Q1. Please reach out if you or someone you know might have some spare cycles!

Particularly interested in designers with a strong, unique (even weird/grating) visual style.
January 6, 2026 at 5:00 PM
I'm routinely asked about "automatic N+1 solving" libraries for Rails, which essentially try to manipulate `includes()` calls for you. I don't recommend them.

IME, missing `includes` is only ~10% of the actual N+1s I see in the wild. And `includes()` isn't always optimal!
January 5, 2026 at 5:00 PM
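A hypothetical sketch (no database, no Rails — simulated queries in a counter) of the N+1 shape these libraries target, and the batched fix an `includes()`-style rewrite buys you:

```ruby
# Hypothetical, illustrative only: count simulated SQL statements to show
# the N+1 shape and the batched alternative.
$queries = []

Post = Struct.new(:id, :author_id)

def all_posts
  $queries << "SELECT * FROM posts"
  [Post.new(1, 10), Post.new(2, 10), Post.new(3, 11)]
end

def find_author(id)
  $queries << "SELECT * FROM authors WHERE id = #{id}"
  "author-#{id}"
end

# N+1: one query for posts, then one query per post
all_posts.each { |post| find_author(post.author_id) }
$queries.size # => 4

# includes()-style fix: one query for posts, one batched query for authors
$queries.clear
posts = all_posts
$queries << "SELECT * FROM authors WHERE id IN (#{posts.map(&:author_id).uniq.join(', ')})"
$queries.size # => 2
```

The sketch only covers the mechanical case; per the post, most real-world N+1s don't reduce to a missing `includes` call.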
Marco Roth on a generational run
I had fun prototyping a TUI for running and monitoring Procfile-based applications, similar to `foreman` or `overmind`, but built with the Charm Ruby libraries ✨
January 3, 2026 at 6:12 AM
As you're considering your charitable giving this year, here are the two organizations I've given to consistently for the last 3 years:

1. Nova Ukraine - humanitarian aid to the people of Ukraine
2. GiveWell - incredibly well-researched poverty/disease intervention
December 24, 2025 at 12:54 AM
Datadog had no right to have the best browser RUM around, and yet, they have it.

Every time I use the product I'm so happy with it. And then, it's integrated with everything else on the DD platform. Heaven.
November 28, 2025 at 5:00 PM
I never, EVER accept LLM output without making it run code to verify what it did (tests, something more manual, whatever).

Neither you nor your LLM can run code in their head. Do not trust, always verify.
November 27, 2025 at 4:59 PM
Letting Puma auto-set your worker count is the easiest way to go for 90% of use cases.

Currently, you can only do that with `WEB_CONCURRENCY=auto`, but we'll also make this possible in the next Puma version by using `workers :auto` in your puma.rb.
November 26, 2025 at 5:02 PM
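A sketch of both spellings; the DSL form is forward-looking per the post, so don't expect it in currently released Puma versions:

```ruby
# Today: no puma.rb change needed; set the env var and Puma sizes itself:
#   WEB_CONCURRENCY=auto bundle exec puma

# Next Puma version (forward-looking), in config/puma.rb:
workers :auto
```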
If you are using Concurrent.physical_processor_count or Concurrent.processor_count to set your Puma/Unicorn worker counts, that is wrong.

Use Concurrent.available_processor_count. It takes into account cpu quotas in envs like k8s/docker.
November 25, 2025 at 5:02 PM
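A minimal config/puma.rb sketch (assumes the concurrent-ruby gem, which Rails apps already ship with):

```ruby
# config/puma.rb
require "concurrent"

# available_processor_count honors cgroup CPU quotas (Docker/Kubernetes);
# processor_count and physical_processor_count report the host's full cores.
workers Concurrent.available_processor_count.floor
```

Note that `available_processor_count` can be fractional under CPU quotas, hence the `.floor`.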
Tired: Repo github stars
Wired: Repo contributor count
November 24, 2025 at 9:03 PM