Side note - I'd be interested in replicability here. Is the model still essentially taking a random walk, or is there a level of quality (or, more realistically, higher weights) that consistently gets to better and/or more consistent outcomes through reasoning?
January 22, 2025 at 3:44 AM
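A minimal sketch of how one could probe that replicability empirically: re-sample the same prompt many times and tally the final answers. A tight distribution suggests reasoning converges; a flat one looks more like a random walk. This assumes the OpenAI Python client; the model name and prompt are placeholders.

```python
# Sketch: measure reply consistency by re-sampling the same prompt.
# Assumes the OpenAI Python client; model name and prompt are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answers(prompt: str, n: int = 20, temperature: float = 1.0) -> Counter:
    """Ask the same question n times and tally the final answers."""
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        # Crude normalization: treat the last line of the reply as "the answer".
        answers[resp.choices[0].message.content.strip().splitlines()[-1]] += 1
    return answers

print(sample_answers("What is 17 * 24? Reason step by step, then answer."))
```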
This doesn't seem to be what's going on in your example now, though. It actually does seem to be plotting its own path forward to a response. Is there hidden prompting (or an initial instruction) directing this reasoning, or is it baked into the model somehow?
January 22, 2025 at 3:44 AM
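For illustration, a hidden instruction of the kind asked about could be as simple as a system message the user never sees; reasoning-tuned models (e.g. OpenAI's o1 line) instead have the behavior trained in, so no visible prompt is needed. The sketch below is hypothetical, not any vendor's actual hidden prompt.

```python
# Hypothetical example of a "hidden" instruction directing reasoning:
# a system message prepended before the user's question, which they never see.
# Reasoning-trained models may instead have this behavior baked in via training.
messages = [
    {"role": "system",
     "content": "Before answering, think through the problem step by step "
                "and show your intermediate reasoning."},
    {"role": "user", "content": "Why is the sky blue?"},
]
```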
I haven't been reading much on this lately, but ~12 months ago it seemed that asking an LLM to reason post-response would produce something convincing but obviously not the actual reasoning that led to the initial response. A mental model was: "here is a point at the base of a hill, now make up the path you took from the top."
January 22, 2025 at 3:44 AM
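The two prompt orders being contrasted can be made concrete. In the post-hoc version the explanation is requested after the answer already exists, so it cannot have influenced it; in the reasoning-first version the intermediate steps are generated before (and so can shape) the answer. Prompts and the worked answer below are illustrative only.

```python
# Sketch of the two prompt orders discussed above: post-hoc rationalization
# ("answer, then explain") vs. reasoning-first ("think, then answer").
# Illustrative only; any chat-style LLM client would consume these messages.

POST_HOC = [
    {"role": "user", "content": "What is 17 * 24? Answer with the number only."},
    # The answer already exists before any explanation is requested, so the
    # model is reconstructing a path "from the top of the hill".
    {"role": "assistant", "content": "408"},
    {"role": "user", "content": "Now explain how you got that."},
]

REASONING_FIRST = [
    # Here the intermediate steps come before the answer, so they can
    # actually influence which answer gets produced.
    {"role": "user",
     "content": "What is 17 * 24? Think step by step, then give the answer."},
]
```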
No, I definitely read the post. (Maybe don’t jump to conclusions?) The author is reporting on people not using AI for real-time information during an emergency. He then links that to a lack of trust in AI. He should be linking it to a lack of suitability of AI tools for that task.
January 17, 2025 at 11:57 PM
Just reminded me of a rare good tech April Fools’ joke, when Soundcloud “introduced” a new “feature”: a big arrow saying “Here’s the Drop”. www.reddit.com/r/DJs/commen...
I’ve been enjoying your Veo experiments. I’m interested in how useful these text-to-video models really are (and can’t come up with a good answer myself). Beyond playing around with them, where do you see the long-term value that justifies the expense of both building and running them?
January 12, 2025 at 8:09 AM