It would have trouble asking useful follow-up questions, but step 1 would be right.
It's possible to use that approach to identify some hallucinations. Not that either LLM "knows" anything like a "fact" for another to "agree" with.
You're multiplying two error rates together so the combined error rate gets smaller, and letting an educated human referee any disagreement.
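To make the arithmetic concrete: if each model is wrong on, say, 10% of questions and their errors are independent, the chance they both land on the same wrong answer is on the order of 0.1 × 0.1 = 1%. Here's a rough sketch of that workflow; the model-calling functions are hypothetical placeholders, not any particular API:

```python
# Minimal sketch of the cross-checking idea, assuming two independent models.
# ask_model_a / ask_model_b are hypothetical stand-ins for real LLM calls.

def ask_model_a(question: str) -> str:
    # Placeholder: in practice this would call the first LLM.
    return "42"

def ask_model_b(question: str) -> str:
    # Placeholder: in practice this would call the second LLM.
    return "42"

def cross_check(question: str) -> tuple[str, bool]:
    """Return (answer, needs_human_review)."""
    a = ask_model_a(question)
    b = ask_model_b(question)
    if a.strip().lower() == b.strip().lower():
        # Agreement: accept, but remember agreement is not truth --
        # both models can still share a correlated error.
        return a, False
    # Disagreement: flag for an educated human to referee.
    return f"A: {a} / B: {b}", True

if __name__ == "__main__":
    answer, needs_review = cross_check("What is 6 * 7?")
    print(answer, "-> human review needed" if needs_review else "-> accepted")
```

The independence assumption is doing the heavy lifting: two models trained on similar data can share the same hallucination, which is why agreement only screens out some errors rather than guaranteeing a fact.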
www.reuters.com/business/med...