And, perhaps most importantly, AI should be capable of saying "I don't know" and explaining what it's missing, rather than hallucinating.
If AI makes decisions, the reasons should be readily apparent. There should be recourse when it makes mistakes.
If AI provides information, the sources should be known and verifiable. Errors should be correctable.