Harry Brundage
@airhorns.bsky.social
Developer developer. CTO at https://gadget.dev working on bringing some speed back to makin' software. Previously at Shopify
Apollo 11 was 1969, where is software engineering’s claim to fame?
October 14, 2025 at 2:00 PM
Temporal is super neurotic about determinism, which creates a lot of friction for me, and these systems seem to have taken a much more pragmatic approach. I wonder if that'll be a win or a huge burn
June 9, 2025 at 2:00 PM
Nary a spanner in sight!

Only exception is etcd for k8s coordination, which, sadly, is high throughput at Gadget also
June 7, 2025 at 2:00 PM
For real though, Temporal, turbopuffer, bigtable, kvrocks, and Postgres/AlloyDB are our weapons of choice at Gadget -- all work this way.

If you want hella scalable performance, you shard horizontally and prevent the shards from having to co-ordinate on the data plane.
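A minimal sketch of what that looks like (hypothetical names, not Gadget's actual routing code): hash a stable tenant key to pick a shard, so every operation for that tenant lands on one shard and the shards never have to agree with each other on the data plane.

```js
const crypto = require("node:crypto");

const SHARD_COUNT = 16;

// Hypothetical router: a tenant id deterministically maps to one shard, so
// reads and writes for that tenant never span shards and there's no
// cross-shard coordination on the data plane.
function shardFor(tenantId) {
  const digest = crypto.createHash("sha256").update(String(tenantId)).digest();
  return digest.readUInt32BE(0) % SHARD_COUNT;
}

// Every operation for a tenant uses the same shard's client.
function clientFor(tenantId, shardClients) {
  return shardClients[shardFor(tenantId)];
}
```

Plain modulo means resharding moves keys around, so real setups usually layer consistent hashing or a shard directory on top, but the point stands: pick the shard from the key, not from a conversation between shards.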
June 7, 2025 at 2:00 PM
Stand by it still
June 6, 2025 at 1:57 PM
We do this second one in github.com/gadget-inc/... a lot and it sucks -- a lot of nasty string => code grossness but it really, really works.
GitHub - gadget-inc/mobx-quick-tree: A mirror of the mobx-state-tree API that supports creating a fast, non-reactive, read-only tree
June 5, 2025 at 2:00 PM
There’s only two ways I really know how to do that: write out type-specific versions of functions by hand, with individual variants that only ever work with the same types. Or, manually craft and eval those variants dynamically at runtime for a lightweight JIT of your own.
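Roughly, the two options look like this (a toy sketch, not mobx-quick-tree's actual code; compileGetter is made up for illustration):

```js
// Option 1: hand-written, type-specific variants. Each body only ever sees one
// type, so every expression and callsite inside it stays monomorphic.
function sumNumbers(values) {
  let total = 0;
  for (const v of values) total += v; // v is always a number
  return total;
}

function joinStrings(values) {
  let out = "";
  for (const v of values) out += v; // v is always a string
  return out;
}

// Option 2: build the specialized source as a string and eval it at runtime --
// the "string => code grossness". Each generated function gets its own type
// feedback, so its callsites stay monomorphic for the shape it was built for.
function compileGetter(propertyNames) {
  // assumes propertyNames are plain identifiers
  const body = propertyNames.map((name) => `out.${name} = obj.${name};`).join(" ");
  return new Function("obj", `const out = {}; ${body} return out;`);
}

const pickUserFields = compileGetter(["id", "name", "email"]);
pickUserFields({ id: 1, name: "x", email: "y", extra: true });
// => { id: 1, name: "x", email: "y" }
```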
June 5, 2025 at 2:00 PM
The fix is to make monomorphic code, which means each expression only ever takes on one type so the JIT can be aggressive. I use github.com/thlorenz/de... for profiling to find megamorphic callsites.
GitHub - thlorenz/deoptigate: ⏱️ Investigates v8/Node.js function deoptimizations.
June 5, 2025 at 2:00 PM
Different values of different types pass through them all the time, so V8’s JIT can’t make assumptions and optimize stuff away; it has to leave behind slower code that checks the types at runtime to make sure things are as it expects, and that can dispatch to many different callees
June 5, 2025 at 2:00 PM
The one line of code within debounce that calls the function you passed in is always the same lexical callsite, but if you've debounced more than one function, that callsite calls one of many of the functions you've passed it -- the type of the call is not static every time that expression is evaluated.
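Concretely (a simplified debounce, not lodash's implementation):

```js
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    // This one lexical callsite ends up calling every function that ever gets
    // debounced, so the inline cache here sees many targets and argument shapes.
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Two very different callees flow through that same callsite:
const saveDraft = debounce((doc) => console.log("saving", doc.id), 250);
const trackScroll = debounce((y) => console.log("scrolled to", y), 100);

saveDraft({ id: "abc", body: "..." }); // called with an object
trackScroll(420);                      // called with a number
```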
June 5, 2025 at 2:00 PM
But generic wrapper functions or utility functions like debounce, memoize, and pretty much all the other stuff in lodash don’t have consistent input types because they are used and reused in a wide variety of different contexts!
June 5, 2025 at 2:00 PM
Usually if you’re writing concrete functions, like leftPad or what have you, the inputs and intermediate expressions always have the same type when executed (String), which leads to good optimization
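For example (a toy leftPad, not the real package): everything flowing through it is a string, so each expression only ever sees one type.

```js
// Toy left-pad: str, padChar, and every intermediate value are always strings,
// so the operations inside stay monomorphic and V8 can specialize the hot loop.
function leftPad(str, length, padChar = " ") {
  let out = str;
  while (out.length < length) out = padChar + out;
  return out;
}

leftPad("42", 5, "0"); // => "00042"
```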
June 5, 2025 at 2:00 PM
In V8 at least, function callsite optimizations make a HUGE difference, but AFAIK the optimizations are applied to lexical callsites either entirely or not at all. Function call overhead can only be removed fully if the types of the variables flowing through a callsite are always the same.
June 5, 2025 at 2:00 PM