Bret Comnes
@bcomnes.fosstodon.org.ap.brid.gy
👨‍💻 Software Engineer • JS • TS • Go ⚡️

[bridged from https://fosstodon.org/@bcomnes on the fediverse by https://fed.brid.gy/ ]
October 21, 2025 at 3:08 AM
Anyone have old Mac minis they have sitting around and want to sell me?
October 21, 2025 at 2:58 AM
Noctua makes a slim 60mm fan now.
September 27, 2025 at 3:04 AM
Anyone know a good leader election library that uses either pg or redis on the backend? Basically, in a horizontally deployed service, I need one instance to do something unique, and another instance to take over when it disappears.
September 24, 2025 at 10:15 PM
Remember keybase? I’m just getting spam on it now.
August 4, 2025 at 11:07 PM
Reposted by Bret Comnes
You can now view and edit your auth tokens in your account page. More auth token features like a CRUD UI and old-token cleanup coming soon. Sorry for the slow pace of development lately, just trying to get core features implemented correctly.
July 28, 2025 at 5:16 AM
I ported the Tron Legacy theme to Zed editor
June 30, 2025 at 5:02 AM
Is there such a thing as a useQueryState hook? Basically useState, but reactive in and out of the query string.
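A framework-agnostic sketch of the serialize/deserialize half such a hook needs (helper names are hypothetical); a React hook would wrap these with useState, `history.replaceState`, and a popstate listener:

```javascript
// Sketch: reading and writing a single piece of state in the query string.
// Hypothetical helpers; URLSearchParams is built into Node and browsers.

function readParam(search, key, fallback) {
  const params = new URLSearchParams(search);
  return params.has(key) ? params.get(key) : fallback;
}

function writeParam(search, key, value) {
  const params = new URLSearchParams(search);
  if (value == null) params.delete(key); // null/undefined clears the param
  else params.set(key, String(value));
  return `?${params.toString()}`;
}
```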
June 14, 2025 at 4:09 PM
Is there any good domain registrar left? (Independently run, well made, decent prices?) Seems iwantmyname sold out recently.
May 25, 2025 at 7:02 PM
Looking for prior art: dissecting GitHub repos into sub-projects, specifically in monorepos.
May 15, 2025 at 4:07 AM
I finally posted my writeup on my Steam Machine project

https://bret.io/blog/2025/you-can-just-build-a-steam-machine/
You can just build a Steam Machine
I built a “Steam Machine” in 2024. I’ve run it for 8 months, and I’m really happy with it! I hope that after reading this, you’ll be encouraged to build one too! Why yes, that is a 65", HDR OLED Steam Deck in my living room. I’m running the latest SteamOS 3.7 (the OS running on the Steam Deck) on conventional AMD desktop hardware, running at equal or improved performance over Windows. We’re here! You can just build a Steam Machine now, and get a console-like experience running PC games in your living room, on Linux, with great controllers, on your 4K OLED TV (and surround sound if you have it), running in parity with your Steam Deck.

## The OS (SteamOS 3.7)

The OS to install is SteamOS 3.7. It’s available in Valve’s “preview” update channel for the Steam Deck, but you can download the recovery image and try installing it on anything. There isn’t much more to say about this, other than yes, the rumors that Valve is expanding hardware support are true. It works on a lot more devices now, including conventional AMD desktop hardware. If you are interested in the details on installing not-quite-released software, read:

* It’s time to install SteamOS 3.7

Prior to updating to SteamOS 3.7, I ran SteamFork, and before that, ChimeraOS. SteamFork (the project) has thrown in the towel, and ChimeraOS remains a great option. If SteamOS 3.7 isn’t quite working for your hardware yet, read about other similar projects that are helping bridge the compatibility gaps:

* “Bazzite isn’t SteamOS (and that’s okay!)”

## The Hardware

I targeted a budget of $1500 for the core hardware. The final cost came to $1799.71. Part lists go stale REALLY quickly, but if you want to build exactly what I have, here is the list.
I wrote up the build on PCPartPicker too, but hey—feel free to use my referral links if you found this useful:

* CPU: AMD Ryzen 7 7800X3D
* Cooler: Noctua NH-L12S (run this CPU fan in this case unless you have a good reason not to)
* Motherboard: ASUS ROG Strix B650E-I
* RAM: G.SKILL Flare X5 Series (AMD EXPO) DDR5 32GB (use RAM matching tools)
* NVMe: WD_BLACK 2TB SN850X (avoid low-end Samsung Gen5s)
* GPU: GIGABYTE Radeon RX 7700 XT (go better here if you increase anything)
* Case: Fractal Design Ridge
* PSU: CORSAIR SF750 (avoid bulky, cheaper SFX-L PSUs in this case)
* Case Fans: 4 × Noctua NF-A6x25
* Top Fans: 3 × ARCTIC P8 Slim PWM
* Fan Hub: Noctua NA-FH1, 8-channel fan hub
* WiFi antennas (I don’t have Ethernet near my TV)

The Fractal Ridge case is about the size of a deep PS5 and fits nicely behind TVs and on mantles. Airflow around the case hasn’t been an issue. When running heat-producing games, I pull the TV off the wall to give more space between the panel and the heat exhaust.

### The controllers

Finding good controllers has been a challenge. They need to be on par with the Steam Deck controller, but unfortunately, nothing on the market matches that. Skipping a lot of nuance and detail, the Steam Controller, DualSense Edge, and an HTPC keyboard have covered all my needs. Finding decent controllers that work well enough for PC games on the couch has been difficult, and I intend to write more about each one I’ve tried.

Steam Deck controls for reference. Why isn’t there any PC controller on the market with these inputs?

It’s surprising and hard to explain to people unfamiliar with the issue just how important a development Gyro Aim is. This thesis is required watching if you haven’t had hands-on time with a Gyro input: people have been discussing the importance of motion control for over 6 years now! The only controller that comes close to what is offered by the Steam Deck controls is the DualSense Edge (unfortunately!).
It’s a great controller, with Gyro, paddles, and a trackpad, and has high-end OEM quality, but it’s the most expensive controller on the market—and Bluetooth only (among other flaws). Speaking of Bluetooth, I recommend getting an extended-range dongle and ensuring your controller has line of sight to it. Latency-sensitive controls like Gyro need a solid connection, and I found that without line of sight, the signal drops off around 6 feet. With line of sight, controllers work great—even at 10 feet.

The Steam Controller is nice to have. It’s probably not worth paying scalper prices for these days, but if you still have yours lying around, get that thing going again. They work better than you remember and have a 2.4GHz dongle!

Any HTPC keyboard-mouse combo works. You mainly use it for one-off tasks where the on-screen keyboard is too cumbersome, and it mostly lives in the closet. I recommend the Logitech K400 Plus over the no-name Chinese Amazon ones, having tried both.

* PlayStation DualSense Edge Wireless Controller
* PlayStation DualSense Charging Station
* PlayStation DualSense Edge Stick Module – pick some up before they go dead-stock pricing
* TP-Link USB Bluetooth Adapter for PC, Bluetooth 5.3 Long Range Receiver
* Logitech K400 Plus Wireless Touch TV Keyboard
* Steam Controller (RIP)

The Steam Controller is good for mouse-pointer games. The DualSense Edge is good for FPS games. The Logitech keyboard is helpful for typing and noodling around in the console.

Controllers with Gyro and paddles are critical. Gyro gives you “mouse-like” input; paddles let you work sticks and buttons without letting off the analog sticks. It’s this combo that allows for input that matches keyboard and mouse on your couch.

The DualSense Edge runs out of battery after a few hours (thanks, rumble), so keeping it charged is important. The Steam Controller happens to live nicely on the dock as well, though it doesn’t charge there. Controllers with 2.4GHz dongles have better range and reliability than Bluetooth.
Either way, line of sight is super important, so getting the dongles out from behind any obstructions is key. The Steam Controller included a dock for its dongle. Use it! Bluetooth has the worst range and is the slowest to connect. An extended Bluetooth antenna with line of sight has helped the DualSense Edge work at 10 feet. If your controller can “see” the receiver, everything works better. The perpendicular power cables probably aren’t helping, though.

### The Screen

Seeing games on large 4K HDR formats is something else. Whatever TV you have will work. Playing on a large 4K HDR OLED TV has been a real joy, and it’s easy to forget how many pixels are in these things. Just don’t forget to put it into game mode. I would also note—if you can, avoid Android TV, and definitely disconnect the smart features from any network connection.

### The Audio

I can’t run surround in my current living room, so in the meantime, I run AirPods Max—mainly because they have the best audio-sharing feature on the Apple TV. They work well with the Steam Machine though.

## Build notes

Building in the Fractal Ridge was super easy, but seeing how other builds tackled little issues in the small case was helpful. Here are the builds I found most useful as a reference:

* Fractal Design Ridge full of Noctua fans feat. delidded 7800X3D and deshrouded RTX 4080
* ITX laptop destroyer
* All Fractal Ridge completed builds

I ran the ITX power cable behind the motherboard, similar to other Ridge builds. Worked well! Rear I/O with the stubby Wi-Fi antennas. I ended up getting larger antennas than these. With the stock GPU shroud, I could only fit 2 Noctua NF-A6x25 fans directly below the GPU chamber. The Noctua NH-L12S cooler orientation works great, and the audio cable routes along the side and bottom easily. Zip ties are helpful for routing the I/O and power bundle cables behind the PSU, keeping open space around the CPU cooler.
By routing only the power cable below the CPU cooler, you have room for 2 additional Noctua NF-A6x25 fans to intake cool air below the CPU cooler intake. One of the fan tabs interfered with the power cable, so I used a Dremel to shave it down to reduce contact and pressure. Other people use 3D-printed extenders found on Etsy.

Cable routing below the PSU. This is an odd place to exhaust the PSU—I considered doing a PSU fan reversal but decided against it for now. Bottom of the case with intake fans installed. The front side of the case with the GPU installed. The ARCTIC P8 Slim PWM fans fit above a full-size GPU and stock shroud, despite what the Ridge manual says. The assembled mini-ITX motherboard and comically large CPU cooler. The top-down view below the CPU cooler. The top view of the motherboard and cooler. The thing really is the size of the entire motherboard.

## Other notes

If you are familiar with the Steam Deck, you will feel comfortable with a Steam Machine. They are basically the same! The following are helpful resources when trying to run games on it:

* ProtonDB – compatibility reports for games running on Linux in Proton. Any Deck Verified game will also run great.
* Are We Anti-Cheat Yet? – if you must play multiplayer with cheating competitors who require kernel modules to stop them, you will run into some compatibility issues versus Windows. This site tracks those.
* GamingOnLinux – this has consistently been the best news site focusing on gaming on Linux.
* GoL Anti-Cheat Tracker – GoL also has its own excellent data on Linux anti-cheat stats.
* SteamDB – general player and game price tracking on Steam.
* /r/GyroGaming – the GyroGaming subreddit can often be helpful when figuring out gyro on games with poor mouse and controller inputs.
* /r/SteamDeck – the SteamDeck subreddit is also a decent source of news for SteamOS-related info.

If you end up building a Steam Machine or something similar, please share your results!
If you want to chat or ask more questions about the process, you can join the former SteamFork Discord, where there are still a bunch of SteamFork users migrating to SteamOS and facing similar issues and questions.

### Syndications

* /r/SteamDeck
bret.io
May 9, 2025 at 6:19 AM
Is there a DockKit gimbal that works with the Apple TV FaceTime?
April 17, 2025 at 4:32 AM
I wrote another new go tool called goversion.

It's like npm version for go, and it works with the new go tool directive.

https://github.com/bcomnes/goversion
GitHub - bcomnes/goversion: A tool for creating consistent version commits for Go tools
A tool for creating consistent version commits for Go tools - bcomnes/goversion
github.com
April 17, 2025 at 4:32 AM
Happy 900!

deploy-to-neocities is helping over 900 of you deploy personal websites to the real web on @neocities.org. Onward and upward 🚀
March 21, 2025 at 2:34 AM
I wrote a new blogpost: "I Love Monorepos—Except When They Are Annoying"

https://bret.io/blog/2025/i-love-monorepos/
I Love Monorepos—Except When They Are Annoying
I love monorepos, but monorepos can be annoying, especially in open source. They make sense in some cases, but they come with a lot of baggage and patterns I’ve noticed over the years—so I need to write about them. I’m primarily talking about JS-based “monorepos,” a.k.a. workspaces when used in open source packages, but the whole space is confused enough that I might stray a bit.

## Historical Context

JS monorepos (or “workspaces”) emerged with tools like `lerna`, later influencing similar features in `npm`, `yarn`, and `pnpm`. At their core, they allow developers to:

* Develop and publish multiple `npm` packages from a single `git` repository
* Streamline dependency management with automatic linking and consolidated lockfiles
* Allow for varying direct dependency versions in a single repo

This approach gained popularity largely as a response to:

* The frustrating fragility of `npm link`
* React’s “unique” constraints that caused errors when linked across packages
* The exponential growth of tooling complexity costs (Babel, Webpack, CSS-in-JS, TS)
* The promise of O(1) tooling changes instead of O(n²) updates across multiple repositories

Babel is probably the most appropriately named project in open source history.

## What I Love About Monorepos

Monorepos have utility in some circumstances. Monorepos are ideal for scenarios like this: a project with a series of APIs, a website, and background worker processes, where a team of developers works on the codebase together. Each process has its own set of unique and shared dependencies, and also has a set of common queries and types shared between the services. The primary trade-off, of course, is that any changes to shared code have to be reflected in all dependents at the time of introduction. Outside of this context, it’s mostly just misery for dependents and contributors.
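To ground the terminology: the core of a workspaces setup is just a couple of `package.json` manifests. A minimal sketch with hypothetical names, shown here as plain objects:

```javascript
// Minimal sketch of an npm workspaces monorepo (hypothetical names).
// The root manifest lists workspace globs; `npm install` at the root
// links each matching package into node_modules and writes one lockfile.
const rootManifest = {
  name: 'example-monorepo',   // hypothetical
  private: true,              // workspace roots are not published
  workspaces: ['packages/*'], // globs matching member packages
};

// A member package: published independently, but developed in-repo.
const sharedLib = {
  name: '@example/shared-queries', // hypothetical scoped name
  version: '1.0.0',
  dependencies: { pg: '^8.11.0' },
};
```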
## All the Ways Monorepos (and Adjacent Hypertooling) Are Annoying

Most of the issues stem from “hypertooling,” a generally bigger issue in all development ecosystems, but they manifest at scale in the monorepo arrangement, so it’s a useful vehicle for pointing out these issues.

### “Let Me Just Fix This Little Bug”

You’re using a dependency in your project. That dependency has a bug, and you need to fix it. Node.js was designed with the intention that you could just open up `node_modules` and edit the code to generate patches. With packages sourced from monorepos, this is not the case! You open up the code in `node_modules` now, and it’s some franken-compile-to-es-1-ts-rollupviteparcel-webpack-babeldegook-lerna-pnpm-berry-workzone-playplace that has also been pre-minified for some reason. Also, the sourcemaps and ESM type exports are broken for some reason.

### Packages Published from Monorepos Have More Bugs

This is completely anecdotal but also completely true. Packages published from monorepos have more defects, and finding and fixing the defects is more challenging for dependents. I believe this is due to two factors:

* The development environment (the monorepo) varies more from the deployment environment (`node_modules/foo`) than single-repo, single-package project organization.
* The developer, prioritizing the monorepo DX over the consumption DX, has gone out of their way to avoid working in the deployment environment, and therefore fails to test things in realistic deployments.

These two factors, plus the inherent complexity of all the tools required to make monorepos work, lead to encountering more defects in monorepo-published packages. Also, trying to fix or upstream work to monorepo packages is memorably more miserable and painful. This really comes down to thermodynamics—more entropy, more problems—and it’s true!
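On the consumption-DX point: npm’s `package.json` does support a `repository.directory` field exactly so dependents can trace a monorepo-published package back to its source subdirectory. A sketch of resolving it (the `sourceUrl` helper and the URLs are hypothetical; the field names are per the npm docs):

```javascript
// Resolve a package.json repository entry to a browsable source URL.
// The repository.directory field exists for packages published from a
// subdirectory of a monorepo; many publishing pipelines strip it.
function sourceUrl(pkg) {
  const repo = pkg.repository;
  if (!repo || !repo.url) return null;
  // Strip the conventional git+ prefix and .git suffix.
  const base = repo.url.replace(/^git\+/, '').replace(/\.git$/, '');
  return repo.directory ? `${base}/tree/HEAD/${repo.directory}` : base;
}
```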
### Finding the Source Code to Fix Is a Lot Harder

Noodling around on your machine-generated direct runtime dependencies in `node_modules` may still be possible, and you may even identify a quick fix you want to upstream, but now you’re tasked with actually finding the source code. This leads to many additional challenges in monorepos!

### Package Metadata Is Often Stripped from `package.json`

Because you can’t simply publish packages from a monorepo without a mountain of scripts and tooling, monorepo-sourced packages often rewrite `package.json` (we have to differentiate which `package.json` we’re talking about in a monorepo!) in a way that accidentally (or intentionally, for devs who prefer to move a bespoke minification step into the `npm publish` lifecycle for no stated reason) strips useful and important metadata.

### Package Metadata Is Often Wrong or Incomplete

Okay, so we’re lucky—this monorepo-published package has some metadata about the repo that created it. But it only takes us to the repo homepage. Now we have to find out whether the package name matches the directory name used in the monorepo, or how this thing is put together at all, to hunt down the source code of the package.

### They Probably Don’t Have a README.md

The package probably has a super sub-par README.md, or something that points to some random permutation of a (probably incomplete) docs website (that will go offline when the maintainer gets busy and forgets to renew the domain). If you are lucky, you might get a generated TypeDoc website (good), but with zero JSDoc description annotations (the part that describes things for humans) (bad). Obviously, nothing in monorepos requires this to be the case, but the tools seem to facilitate this outcome.

### Each Monorepo Is a Unique Permutation of Opinion and Entropy

Because there are always weird variations on which tool is used and how, finding the package entry point becomes a chore.
It’s hard to do on GitHub, and you basically have to clone the package and grep around, comparing the source code to try to find where the contents of the package tarball match up.

### Which Package Manager Are They Using for the Monorepo?

Okay, so the package itself has no hard requirements on which package manager you use, but the monorepo only works with `pnpm`. No, wait, it requires `yarn`. Oh, wait, not Yarn 1—why is that still the default? It needs Berry. Why does this repo have more than one lockfile?!? Oh crap, what is `corepack`? Do I need that?

### Now You Have to Install More Tools

`corepack`, `yarn`, `berry`, `pnpm`, `lerna`, `volta`, `turborepo`, `nx`, etc., etc., etc… By the way, Node.js ships with `npm`. It literally could be that easy—this is all opt-in hypertooling.

### No One Uses the Standard Node.js Tooling (`npm` Workspaces)

`npm` ships workspaces. Nothing uses them. To their credit, `npm` workspaces leave a lot to be desired. Tools that support `workspaces` will often only work with `lerna` 1 workspaces or something like that, not `npm` workspaces, for some reason. Sad situation.

### The Monorepo Install Step Will Probably Fail

Because you have to install dependencies for N packages instead of 1 in a monorepo, and the chances of a monorepo dev running Gentoo or nix or something elite and weird are much higher than normal, don’t expect this to work on your machine. The external native dependencies are probably not documented anywhere, or are buried in a README in one of the packages! Remember, at scale, rare events become common! More dependencies, more places to break.

### The Tests Aren’t Passing Locally

We got through the install step. We had to switch Node versions or install `pkgconfig` or something. We go to land our patch, but before we do, we run `yarn test`. The tests fail! Not on the package we want to work on, but somewhere else.
Now we get to look into how to narrow the test harness and see whether the relevant suite works, or just submit the patch and hope it works in CI.

### The Tests Aren’t Passing in CI

We submit the PR upstream, and CI fails—again, for the same unrelated package that we saw locally. I’m not here for that, and fixing it looks hairy. The maintainer merges your changes anyway. Oof. Let’s hope they have it under control despite appearances. Isolated changes aren’t actually isolated at all in monorepos. They end up requiring the whole suite to pass. Depending on the nature of your contribution, it may come down to you to look into why something stopped working, unrelated to the task at hand.

### All of the Devtools Fall Over

The scale of monorepos will often far exceed the performance envelope that the devtools were targeting (small to medium-sized repos). JS and TS require dev tooling. JS requires a parsing linter to catch well-known but easy-to-miss language hazards. TS requires tools to type-check and build. These tools operate fine at a specific range of scale and get extremely slow and crappy beyond that scale. Monorepos are an excellent pattern to follow if you want to exceed that scale quickly.

### Needs Everything Rewritten in Rust

A big part of the effort to rewrite everything in Rust is because the JS-based tooling isn’t fast enough for the size of the monorepos people throw it at. But also, people are just sick of the mess they’ve been a part of and want to hop ecosystems. Many monorepos are quick to adopt Rust-based tools, along with all of their fresh bugs and defects. It’s a good time to remind people that tools like the Sass compiler used to be written in C++ to be fast. We’ve been here before!

### They Need VSCode Plugins

Many monorepos assume and encourage people to not only install devDependencies but also VSCode plugins to work effectively in them. No, it’s not available in Vim or Sublime or any other editors. What, you don’t use VSCode?!
### Monorepo Tooling Falls Out of Date Quickly

Maybe this is getting better these days, but why do I keep running into Yarn 1 and Lerna everywhere still? Because any singular tooling change in a monorepo has to cover the workflow for N packages, it forces you to address any changes in ALL packages when making tooling updates. This often leads to it never happening. Remember the argument that monorepos promised O(1) tooling changes? Well, that one change can’t go in until N packages (× N times you have to make updates) are modified to work with that change. This distinction is always overlooked. Centralizing tooling means every change requires mass coordination. If each package were in its own repo, you could selectively apply the tooling changes to the 2-3 you are actively working on and get around to the rest when it matters.

### They Ship Hoisting Bugs

Hoisting bugs are more common in monorepos and are easily captured in lockfiles, where they can’t be reproduced by dependents. Why install any dependencies when you can assume your peers have them?!? `pnpm` forces you to fix these—great—but `pnpm` doesn’t ship with Node, so only a fraction of monorepos address this.

### Versioning and Publishing Hazards

Monorepos inadvertently create several versioning challenges that single-package repos typically don’t encounter:

* **Overactive Versioning**: Tooling automatically bumps multiple packages simultaneously, leading to unnecessary version noise for downstream dependents.
* **Hazardous Permutations**: When packages are published in groups but updated selectively by dependents, untested version permutations emerge—especially problematic with peer dependencies.
* **Partial Publishes**: Complex automations sometimes only publish a portion of interdependent changes, creating temporarily broken package states.
* **Cross-Module Side Effects**: Changes in unrelated modules in monorepos can introduce defects in the modules you depend on, something far less likely with separate repositories.

### Overmodularized Internals

Overmodularizing (adding versioned module boundaries between code where a separate file or export in the same module would do) is a hazard in general, but it seems to often be worse in monorepos. This tends to be a mistake you see less experienced developers make, but monorepos deserve unique recognition here: by lowering the spin-up cost of modules, monorepos make this mistake easier and more common.

### Probably Overusing peerDependencies

I actually don’t understand this one, but modules out of monorepos tend to heavily utilize peer dependencies where regular dependencies would actually be preferable. I suspect it’s some frontend bundler need that has somehow leaked into the Node.js module graph, but I haven’t ever gotten an answer that makes sense on this one.

### Monorepos Break GitHub and Tooling

Because N projects run out of one repo, the entire GitHub resource model (one project = one repo) is made largely useless. Issues, CI, and permissions now have to scale down to the folder level instead of hanging off the repo resource boundary. This has incredible implementation costs for the entire tooling ecosystem as it attempts to accommodate large monorepos. Most tools simply fail to work by default in the monorepo arrangement because your monorepo is unique and bespoke compared to all the others. Because of their sheer size, tools have to implement complex scaling solutions just to listen to webhooks off monorepos. It really sucks.

### “But Google Does It!”

This comparison fundamentally misunderstands Google’s approach.
Google doesn’t use JS-based workspaces or monorepos in their organization (at least in the example everyone’s reaching for)—they’ve built custom tooling with dedicated engineering teams specifically to make their monorepo approach viable at their scale. Google has invested millions in proprietary build systems and infrastructure that most teams simply don’t have access to. The contexts are so different that the comparison provides little practical value for most JavaScript projects. Your startup or open-source project operates under completely different constraints and with different goals than Google’s engineering organization, on top of the fact that Google isn’t really a great company to emulate these days.

### “But `npm link` Sucks”

“I don’t want to link 2+ repos together locally.” This is a legitimate pain point—`npm link` becomes tedious and fragile beyond a single layer of linking. However, this limitation actually encourages better architectural decisions about module boundaries and dependencies. The core issue isn’t that `npm link` is flawed; it’s that we’re often creating unnecessary dependencies between packages that could be designed with cleaner separation. When packages are truly independent enough to warrant separate publishing, they should rarely need simultaneous development. For general-purpose libraries especially, isolating code into separate repositories with well-defined boundaries often leads to better design decisions and more maintainable code over time. Reaching for monorepos to avoid these challenges can sometimes mask architectural problems rather than solve them. That said, every project has unique requirements—if yours genuinely benefits from the tight coupling a monorepo enables, that’s a valid choice. The key is making that decision deliberately rather than defaulting to it out of convenience.

### “But Small Modules Are Annoying”

Monorepos and many/deep module graphs are pretty orthogonal, but I have heard this argument a few times.
The idea is that it’s okay to have many dependencies sourced from the same repo—this is better than having them sourced from many repos. Okay, sure, as long as you can live with the above issues! If all those repos are owned by the same person, I don’t really see the issue. Generally, though, small modules aren’t annoying because they are small (they are annoying because they lack API depth). Annoying modules are annoying. Get rid of your annoying dependencies, and cross your fingers that the replacement is less annoying.

### “All of These Problems Apply to Single-Package Repos Too!”

A lot of the above problems are just hazards of the Node module system. Yes, you can run into a lot of the same issues with single-package repos. But in practice, you don’t. Monorepos act as a multiplier on these hazards, on top of their own set of issues.

### “[Insert New Runtime] Fixes This!”

Give any JS ecosystem incumbent some time in the spotlight, and you will be surprised at the “wild” ideas people will come up with to make people’s lives more complicated!

### “I’m an Overworked, Underpaid Maintainer, I Need This”

This is probably true. Do whatever you need. I’m just enumerating a few common hazards to avoid.

### “You or Someone Should Write Up Single-Package Repo Hazards”

Yeah. Agreed. Single-module repo strategies are sadly very underdeveloped and misunderstood.

## Conclusion

Monorepos have legitimate uses in specific contexts—particularly when sharing code between multiple processes in a single project or coordinating work across closely related sub-projects and teams. In these situations, they can remove barriers to a developer workflow that would otherwise be necessary in open source. But for open-source modules, the costs often outweigh the benefits and are actually creating a reputational hazard for an otherwise completely functional and scalable module system.
Instead of defaulting to monorepos, consider these alternatives:

* **Focused Single-Package Repos**: For libraries with a clear, cohesive purpose, maintaining separate repositories provides cleaner boundaries and more reliable publishing workflows.
* **Minimal Dependencies**: Rather than splitting functionality across numerous tiny packages that require a monorepo to manage, consider whether your design truly benefits from such granular separation.
* **Strategic Module Boundaries**: Create module boundaries only where they provide genuine benefits—at natural seams in your architecture rather than arbitrary divisions. Frequent cross-boundary linking indicates unnecessary boundaries.

The JavaScript ecosystem moves quickly, but we should be careful not to adopt complex solutions for problems that could be solved more elegantly with simpler approaches. Sometimes the answer isn’t more tooling or more packages—it’s thoughtful design and careful consideration of the downstream experience. Monorepos aren’t inherently bad, but they’re also not a silver bullet. Understanding when they help and when they hinder is key to using them effectively.
bret.io
March 9, 2025 at 10:27 PM
Getting porn bots on masto now
February 20, 2025 at 2:15 AM
Reposted by Bret Comnes
Breadcrum running on a Daylight DC-1
January 3, 2025 at 1:16 AM
Reposted by Bret Comnes
January 1, 2025 at 12:02 AM
Reposted by Bret Comnes
Like podcasts? Like video? Please check out breadcrum so I can make 100 users in 2024! Only one more signup to go!
December 31, 2024 at 3:57 AM
Reposted by Bret Comnes
Happy 10k bookmarks!
December 23, 2024 at 11:38 PM
Best macOS clipboard manager that works well on macOS 15.2?
December 19, 2024 at 12:29 AM
Does anyone have a good example of an API backed by a DB and other resources, with the following:

- strict schema-based deserialization/serialization on endpoints
- TS type checking between code and that schema layer
- DB types flexible enough to work with those schema types
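For what it’s worth, a dependency-free sketch of the first bullet (strict schema at the endpoint boundary), with hypothetical names; libraries like ajv, zod, or TypeBox do this for real, and can also derive the TS types the second bullet asks about:

```javascript
// Hand-rolled sketch of strict schema-based deserialization at an API
// boundary. Hypothetical schema shape: each field maps to a predicate.
const userSchema = {
  id: (v) => Number.isInteger(v),
  email: (v) => typeof v === 'string' && v.includes('@'),
};

function deserialize(schema, input) {
  const out = {};
  for (const [key, check] of Object.entries(schema)) {
    if (!check(input[key])) throw new Error(`invalid field: ${key}`);
    out[key] = input[key]; // strict: unknown fields are dropped
  }
  return out;
}
```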
December 8, 2024 at 5:47 PM
Reposted by Bret Comnes
Gearing up for the drive home?

Watch videos and audio from anywhere around the web in your favorite podcast app!

Still FREE while in early access!
November 29, 2024 at 5:09 PM
Any testimonies from someone who's tried switching personal 1Password use to the Apple Passwords app?
November 28, 2024 at 1:09 AM