But honestly, most LLM papers are merely stating an *observation* and dressing it up as a hypothesis.
But that leads to the question: what *is* the optimal prompt?
You could jitter that point in latent space until you overfit the task, but I’m not sure that’s super informative either.
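As a toy illustration of what that jittering might look like, here is a hedged sketch: treat the prompt as a point (a plain list of floats standing in for an embedding), perturb it with Gaussian noise, and keep a perturbation only when a task score improves. The `score` function here is a made-up stand-in for task performance, not any real evaluation.

```python
import random

def jitter(vec, scale=0.1):
    # Add small Gaussian noise to each coordinate of the embedding.
    return [x + random.gauss(0.0, scale) for x in vec]

def score(vec, target):
    # Toy stand-in for task performance: closeness to a "good" region.
    return -sum((a - b) ** 2 for a, b in zip(vec, target))

def hill_climb(start, target, steps=200, scale=0.1, seed=0):
    # Greedy random search: accept a jittered candidate only if it scores higher.
    random.seed(seed)
    best, best_s = start, score(start, target)
    for _ in range(steps):
        cand = jitter(best, scale)
        s = score(cand, target)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s
```

With enough steps this will drive the score up on the toy objective, which is exactly the worry: you can always climb toward the task metric, but the resulting point tells you little about prompts in general.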
Ultimately what we need is deeper theoretical foundations.