Alex McNamara
@amcnamara.me
amcnamara.me
I'm not convinced that existing complex codebases can be decomposed to the point where these tools become useful, though; sometimes the existing system is just too far gone.

Though it raises the question of whether we can develop an engineering discipline around simplification.
December 30, 2025 at 8:14 PM
/double_facepalm
July 16, 2025 at 2:32 PM
I even threw the conversation thread into the internet archive, so that some poor future anthropology student doesn't get their thesis rejected by a GitHub 404 page: archive.is/gN7qe
June 28, 2025 at 2:59 PM
The longer horizon is even more exciting, though all of the IMEC predictions must be hiding some pretty enormous error bars.
June 21, 2025 at 3:43 AM
I'm bummed that Intel bailed on 20A; it would have been a huge win for them to land RibbonFETs with Arrow Lake. But hopefully 18A will get over the line this year, and TSMC N2 should also be landing GAA FETs in consumer processors around the same time.
June 21, 2025 at 3:41 AM
emacs, once you get past the first decade.
June 12, 2025 at 9:23 PM
This would also be the case if you assume OpenAI trained GPT-4 specifically to fit into the DGX H200; it's just neat to come to a similar conclusion from a different starting point.
June 11, 2025 at 7:03 AM
So assuming ChatGPT is saturating 8x H200s, that gives us 1,128 GB of memory; if it's running 16-bit weights, that puts GPT-4 at 564B parameters.
June 11, 2025 at 6:54 AM
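A minimal Python sketch of that arithmetic, assuming 141 GB of HBM3e per H200 (which is where the 1,128 GB figure comes from) and 2 bytes per 16-bit weight; these are the post's assumptions, not confirmed specs for GPT-4:

H200_MEMORY_GB = 141      # HBM3e capacity per H200 GPU
GPUS = 8                  # one DGX H200 node
BYTES_PER_PARAM = 2       # fp16/bf16 weights

total_gb = H200_MEMORY_GB * GPUS               # 1128 GB across the node
params_billions = total_gb / BYTES_PER_PARAM   # 1 GB holds 0.5B two-byte params
print(f"{total_gb} GB -> ~{params_billions:.0f}B parameters")  # ~564B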
If we assume an average query takes 5 seconds and they process inference with a batch size of 32, we get 23k requests per hour; at 0.00034 kWh each, that's 7.8 kW. That's mighty close to a single DGX H200 server.
June 11, 2025 at 6:51 AM
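A sketch of the throughput-to-power estimate above, under the same assumed figures (5 s per query, batch size of 32, 0.00034 kWh per query); kWh consumed per hour is numerically the average draw in kW:

QUERY_SECONDS = 5
BATCH_SIZE = 32
KWH_PER_QUERY = 0.00034

queries_per_hour = BATCH_SIZE * 3600 / QUERY_SECONDS   # 23,040, i.e. ~23k
avg_kw = queries_per_hour * KWH_PER_QUERY              # kWh per hour == average kW
print(f"{queries_per_hour:,.0f} req/h -> {avg_kw:.1f} kW")  # ~7.8 kW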