It's *structurally indifferent to truth*
"Hallucinations" are not the result of "flaws," they are literally inherent in & inextricable from what LLM systems do & are.
Whether an "AI" tells you something that matches reality or something that doesn't, *it is working as designed*