Jing Yang
@jingyng.bsky.social
Postdoctoral researcher at BIFOLD and the XplaiNLP group at the Quality and Usability Lab, TU Berlin. Interested in: XAI, fact-checking, synthetic data generation and evaluation
Reposted by Jing Yang
Can LLMs generate explanations for datasets that lack such annotations? 🧠
We tested model explanations across 19 datasets (NLI, fact-checking, hallucination detection) to see how well LLMs self-rationalize on completely unseen data.
#LLMs #Explainability #ACL2025 #TACL
July 25, 2025 at 12:53 PM