FWIW, doing CoT in this fashion is common and an officially documented pattern in OpenAI's case.
Are there other ways it may confuse the model?
That's true for some stuff like DMs and trade secrets, but in general, it's actually good to influence others.
So then the argument that we can create any explanation any other being could create does seem to require a bound on how complex the juicy part of an explanation can be, but I don't see why we should expect any such bound to exist.
If the explanation is simple, but deriving it requires processing a lot of data or proving a theorem with lots of tedious but explanation-irrelevant branches, that makes sense.