m-bernstorff.bsky.social
@m-bernstorff.bsky.social
Ah, totally, if interpreter startup time dominates then it's not a good match, unless you can scale the computation and subtract the constant term :-)
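A rough sketch of the scale-and-subtract idea, assuming a Python workload and a `python` executable on PATH (the workload here is a toy stand-in, not the benchmark from the post): time a fresh interpreter at two problem sizes, and the constant startup term cancels in the difference.

```python
import subprocess
import time


def timed_run(n: int) -> float:
    """Wall-clock time of a fresh interpreter computing a toy workload of size n."""
    start = time.perf_counter()
    subprocess.run(["python", "-c", f"sum(i * i for i in range({n}))"], check=True)
    return time.perf_counter() - start


# time(n) ~= startup + n * per_item, so differencing two sizes cancels the startup term.
small, large = 1_000_000, 10_000_000
per_item = (timed_run(large) - timed_run(small)) / (large - small)
print(f"~{per_item * 1e9:.1f} ns per item, startup constant subtracted")
```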
January 3, 2026 at 12:37 PM
Loved reading this, Per!

Re: "Expand the benchmark to be a true benchmark by e.g., averaging multiple calculations etc. As it stands, it just gives an idea of the performance."

I cannot recommend hyperfine highly enough: github.com/sharkdp/hype...
GitHub - sharkdp/hyperfine: A command-line benchmarking tool
January 1, 2026 at 5:39 PM
Extremely neat!
September 19, 2025 at 6:32 AM
Ah, I see!

> But I almost never refer to them

In the sense that the notes are more like "scratch notes"/temporary artifacts that you use for thinking, rather than a permanent 'library'?

**Absolutely** agree on the talking-to-friends/introspection/application point. No good ideas without good feedback.
September 9, 2025 at 6:20 AM
Reflecting on this, how do you best "reflect on your experiments"?

I use note-taking a lot to reflect on an experiment over time.
September 8, 2025 at 6:38 AM
But again, I strongly agree that the litmus test should be "how will this help you achieve what you want".
August 19, 2025 at 7:12 AM
Or, currently, as an SWE working on distributed systems: studying durable execution from e.g. Temporal, and memorizing that "effectively once" execution requires 1) a persisted spec, 2) retries, and 3) idempotency, has come in _super_ handy the few times I've needed it.
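Not Temporal's actual machinery, just a toy in-memory sketch of why all three are needed (`submit` and `run_pending` are hypothetical names; a real system would persist `pending` and `completed` durably):

```python
import uuid

pending = []        # stands in for the durably persisted spec of work to run
completed = set()   # stands in for durable records of what has already run


def submit(task_fn, *args):
    """1) Persist the intent to run the task under a stable idempotency key."""
    key = str(uuid.uuid4())
    pending.append((key, task_fn, args))
    return key


def run_pending(max_attempts: int = 3):
    for key, task_fn, args in pending:
        if key in completed:
            # 3) Idempotency: re-delivery of already-finished work is a no-op.
            continue
        for attempt in range(max_attempts):
            # 2) Retries: transient failures are re-attempted up to a budget.
            try:
                task_fn(*args)
                completed.add(key)
                break
            except Exception:
                if attempt == max_attempts - 1:
                    raise


submit(print, "charge customer 42")
run_pending()  # safe to call again after a crash: completed keys are skipped
```

Drop any one of the three and you either lose work after a crash (no persisted spec), give up on transient failures (no retries), or run the same task twice when a re-delivery follows a completed attempt (no idempotency).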
August 19, 2025 at 7:12 AM
E.g. when I was in acute care, being able to rattle off treatments for a given disease came in super handy, and using note-taking + spaced repetition (Matuschak's mnemonic medium) made memorisation and categorisation much easier.
August 19, 2025 at 7:12 AM
I strongly agree with everything you've written, but I think note-taking has utility in domains with a high need for declarative knowledge, especially under time pressure.
August 19, 2025 at 7:12 AM
But, I mean, they could, right?

Design a good training program using NDM-style methods, randomise students to either the "current" training program or the NDM-style one, and compare results using hypothesis testing (or insert your preferred statistical method here)?
June 8, 2025 at 8:32 AM
Thanks again for the thread! I've been thinking some more, and found this quote of yours:

> The problem with NDM style training methods is that it’s ethnographic in nature. [...] Also they don’t do null hypothesis statistical testing, so are locked out of mainstream journals.
Cedric Chin on X: "@sean_a_mcclure @justinskycak For the folks who want to attack Sean, this is not that crazy a stance to take. The researchers of Accelerated Expertise spend the entire lit review pointing out that much of learning research fails to generalise outside of specific classroom conditions. Very few DARPA grants https://t.co/mBkGvQaV4Q"
June 8, 2025 at 8:31 AM
Have bookmarked this and will give it deeper thought! Love your writings.

Sidenote: So incredibly happy you're posting on Bluesky as well! Much less noisy in my algorithm.
March 20, 2025 at 9:45 AM
This was one of my most formative experiences in my PhD!

Some examples:
* Polyrepo -> monorepo
* Static typing for configuration -> runtime configuration with Confection

Looking at some code, seeing it as a teachable moment for a junior, and seeing that _I_ wrote it, was very humbling.
January 13, 2025 at 2:55 PM
In the same vein, GitHub notification filtering is such a dumpster fire that most devs I know regularly miss important notifications because they are drowning in "dependabot merged this PR".
December 5, 2024 at 6:59 AM