Kevin Burns
burnskp.com
Tech generalist working in the HPC, AI, and Security space.

My other computer is a Cray.

#linux #kubernetes #devops #infosec #pinball
I do believe the big beautiful bill rolled back the section 174 changes, but I haven’t seen the part of the bill that does it.

Not sure it matters all that much given the other negative economic impacts happening.
August 9, 2025 at 9:53 PM
If you take the theory of constraints into account, a system always has a single bottleneck, and any improvements outside of that bottleneck are an illusion. IMO, AI will cause work in progress to pile up at the bottleneck and decrease the overall output of the system. Local optima can cause harm.
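A toy simulation of the argument above, under assumed illustrative rates: stage A (code generation) feeds stage B (review/test/deploy), which is the bottleneck. Tripling A's speed doesn't raise system throughput; it only grows the pile of work-in-progress in front of B.

```python
# Toy two-stage pipeline: A produces work, B (the bottleneck) consumes it.
# Rates are items per tick and are purely illustrative assumptions.

def simulate(a_rate, b_rate, ticks=100):
    queue = 0      # WIP waiting at the bottleneck
    shipped = 0    # items that made it through the whole system
    for _ in range(ticks):
        queue += a_rate            # A pushes work downstream
        done = min(queue, b_rate)  # B can only process so much per tick
        queue -= done
        shipped += done
    return shipped, queue

before = simulate(a_rate=2, b_rate=2)  # balanced pipeline
after = simulate(a_rate=6, b_rate=2)   # A sped up 3x by AI

print(before)  # (200, 0)   -- 200 shipped, no WIP
print(after)   # (200, 400) -- same 200 shipped, 400 items of WIP piled up
```

Throughput is identical in both runs; only the queue in front of the bottleneck changes.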
August 9, 2025 at 9:37 PM
That study is tiny and everything is a variable. I don't see how its conclusions can be worth anything. I'm hoping we see better studies soon. IMO it'll be a speed boost to the code generation aspect. However, the value pipeline is long and code generation is a small piece of it.
August 9, 2025 at 9:37 PM
Also, I suggest testing out an MoE model like one of the new Qwen3 releases. Ethernet vs Thunderbolt also has an interesting set of trade-offs, since Thunderbolt tends to add more latency. I'm unsure if your workload is more network latency sensitive or bandwidth restricted.
August 7, 2025 at 6:02 PM
Just watched it. Have you taken a look at vLLM? That's the industry standard for cluster/multi-GPU inference. There's also SGLang.
August 7, 2025 at 5:56 PM
Wonder if there are any studies on the rate of diabetes in people who eat gluten free.
August 7, 2025 at 5:04 PM
Always interesting how different things can impact your thinking and actions. How little control we actually have.

I'm gluten free and probably should be on a low histamine diet. I wouldn't recommend it to anyone without my condition. It's much harder to get nutrients and much easier to spike glucose.
August 7, 2025 at 5:01 PM
I have your video queued so I haven't watched it yet… that said, I'm debating taking a crack at writing an LLM performance benchmark. I feel that most of the benchmarks I see focus on small prompts and report those numbers, ignoring that throughput drops as token counts go up and that a lot of those setups aren't viable.
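A minimal skeleton of the benchmark idea above: measure tokens/sec at several prompt lengths instead of reporting one small-prompt number. The `generate` function here is a stand-in I made up for illustration; a real harness would call the actual inference server, and the per-token cost model is an assumption.

```python
import time

def generate(prompt_tokens, max_new_tokens=128):
    # Placeholder for a model call; pretend cost grows with context size.
    # (Assumed behavior for illustration, not a real model's timing.)
    time.sleep(0.000001 * prompt_tokens)
    return max_new_tokens

def bench(prompt_lengths, runs=3):
    results = {}
    for n in prompt_lengths:
        tokens = 0
        start = time.perf_counter()
        for _ in range(runs):
            tokens += generate(n)
        elapsed = time.perf_counter() - start
        results[n] = tokens / elapsed  # tokens/sec at this context size
    return results

for n, tps in bench([128, 2048, 16384]).items():
    print(f"{n:>6} prompt tokens: {tps:,.0f} tok/s")
```

The point is the shape of the report: one throughput number per context size, so the falloff at long prompts is visible instead of hidden behind a single headline figure.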
August 7, 2025 at 4:54 PM
Sounds like an advertisement for Molly*.

*I have no idea about the security of the Molly Signal fork beyond that it provides Signal database encryption, which may not matter depending on the compromise.
August 7, 2025 at 4:48 PM
I've never worked for a SV company. The places I've worked ranged from "don't talk to anyone outside your own team" to "we wish people would talk more, but our culture doesn't encourage it". At one place, anything potentially useful would require a lawyer and would probably end up a trade secret rather than being shared online.
August 4, 2025 at 2:53 PM
It does feel like we’re living on hopes and dreams.
The increasing pooling of money towards the top also creates weird incentives where businesses need to cater to the top percent to make a profit, because the rest of the people are trying to survive without disposable income. Doesn't seem sustainable.
August 1, 2025 at 4:57 PM