Jeongkyu Shin
@inureyes.bsky.social
CEO / AI researcher @ Lablup Inc.
So people are finally showing up here too…
November 27, 2024 at 3:40 AM
Reposted by Jeongkyu Shin
A cool new paper about detecting "hallucinations" (or "bullshitting") of LLMs.

The idea is simple. Sample several potential answers, cluster them by mutual entailment, and then the entropy over those clusters tells us how "unsure" the model is, which is linked to confabulation.

www.nature.com/articles/s41...
Detecting hallucinations in large language models using semantic entropy - Nature
Hallucinations (confabulations) in large language model systems can be tackled by measuring uncertainty about the meanings of generated responses rather than the text itself to improve question-a...
www.nature.com
June 20, 2024 at 3:06 PM
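The clustering-then-entropy idea from the post can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `toy_entails` is a hypothetical stand-in (case-insensitive string match) for a real NLI entailment model, and the greedy single-representative clustering is a simplification.

```python
import math

def cluster_by_entailment(answers, entails):
    # Greedy clustering: an answer joins a cluster if it and the
    # cluster's representative bidirectionally entail each other
    # (i.e., they share the same meaning).
    clusters = []
    for a in answers:
        for c in clusters:
            rep = c[0]
            if entails(a, rep) and entails(rep, a):
                c.append(a)
                break
        else:
            clusters.append([a])
    return clusters

def semantic_entropy(answers, entails):
    # Entropy over meaning-clusters, not over raw strings:
    # many paraphrases of one answer count as one cluster.
    clusters = cluster_by_entailment(answers, entails)
    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Hypothetical stand-in for an NLI model: case-insensitive equality.
toy_entails = lambda x, y: x.lower() == y.lower()

samples = ["Paris", "paris", "Paris", "Lyon"]
print(round(semantic_entropy(samples, toy_entails), 3))  # → 0.562
```

With three samples meaning "Paris" and one meaning "Lyon", the cluster probabilities are 0.75 and 0.25, giving a modest entropy; if all four samples disagreed, the entropy would be maximal, flagging likely confabulation.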
print("Hello bluesky social!")
May 10, 2023 at 4:26 AM