One thing to consider is portability. Machine code is denser than source code, but I'd bet cross-compiling source code for 50 distros is far cheaper from a compute perspective.
I am an expert 😂 and while I trust LLMs for many things, most of my friends and I very much would not trust an LLM's machine-code output.
1) we don’t trust the LLM enough. We want to review the code.
2) high-level languages give you a higher density of expression per token, i.e. the same program takes fewer tokens, so you get faster answers
[1] chatgpt.com/share/674db3...
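To make the density point in 2) concrete, here's a toy illustration (my own sketch, not from the linked chat): the same "sum a list" task written once in Python and once as a hand-written x86-64-style assembly loop, with a crude whitespace-based token count. The assembly is illustrative pseudo-listing, not verified output of any compiler, and real LLM tokenizers would count somewhat differently.

```python
# Toy comparison: the same task in a high-level language vs. a sketch
# of equivalent assembly. Token counts are rough (whitespace-split),
# purely to illustrate density of expression per token.

python_src = "total = sum(values)"

# Hypothetical hand-written loop in x86-64-flavored assembly
# (rsi = array base, rdx = length, rax = running total).
asm_src = """
    xor rax, rax        ; total = 0
    xor rcx, rcx        ; index = 0
loop:
    cmp rcx, rdx        ; index < len?
    jge done
    add rax, [rsi + rcx*8]
    inc rcx
    jmp loop
done:
"""

def rough_tokens(src: str) -> int:
    # Strip assembly-style comments, then split on whitespace.
    lines = [line.split(";")[0] for line in src.splitlines()]
    return sum(len(line.split()) for line in lines)

print(rough_tokens(python_src))  # a handful of tokens
print(rough_tokens(asm_src))     # several times more for the same task
```

An LLM emitting the assembly version has to generate (and get right) many more tokens for the same behavior, which is the latency and review-burden argument in a nutshell.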
Do you run any benchmarks against your default prompt templates, and have you published them so others can compare different models or prompt/template tweaks?
But I’m a special case.
I think the real unlock is going to be agents. That promise still hasn’t been realized.