Lingjun Zhao
@lingjunz.bsky.social
NLP PhD student @UMD. I study how to make vision-language models more trustworthy and useful for humans. Website: http://lingjunzhao.github.io
🚨 New #EMNLP2025 (main) paper!
LLMs often produce inconsistent explanations (62–86% of the time), undermining faithfulness and trust in explainable AI.
We introduce PEX consistency, a measure of how consistent a model's explanations are,
and show that optimizing it via DPO improves faithfulness by up to 9.7%.
October 14, 2025 at 4:28 PM
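The post doesn't include implementation details, but a rough sketch of the recipe it describes is: rank sampled explanations by a consistency score, then feed the best/worst pair into a standard DPO loss. `make_preference_pair`, the consistency scores, and all tensor names below are hypothetical placeholders, not the paper's code.

```python
import torch.nn.functional as F

def make_preference_pair(explanations, consistency_scores):
    """Pick the most and least consistent sampled explanations as the
    (chosen, rejected) pair for one DPO training example."""
    order = sorted(range(len(explanations)),
                   key=consistency_scores.__getitem__)
    return explanations[order[-1]], explanations[order[0]]

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: nudge the policy to prefer the more
    consistent explanation relative to a frozen reference model.
    Inputs are summed per-sequence token log-probabilities."""
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # maximize the margin between consistent and inconsistent explanations
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()
```

In this setup the consistency measure only enters through the pair construction; the DPO loss itself is the usual one.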
Excited to share our paper accepted at ACL: Can Hallucination Correction Improve Video-Language Alignment?
We introduce a simple yet effective strategy that improves video-language alignment by 18%: add hallucination correction to your training objective 👌
Link: arxiv.org/abs/2502.15079
May 20, 2025 at 9:12 PM
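As a rough illustration of what "add hallucination correction to your training objective" could look like, here is a standard contrastive video-text alignment loss plus a cross-entropy term for rewriting a hallucinated caption into its corrected version. The tensor names, shapes, and the `lam` weighting are assumptions for the sketch, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def alignment_with_correction_loss(video_emb, text_emb, correction_logits,
                                   corrected_token_ids, lam=1.0, temp=0.07):
    """InfoNCE-style video-text alignment plus a hallucination-correction
    term. Shapes (hypothetical): video_emb/text_emb (B, D);
    correction_logits (B, T, V) from decoding a corrected caption
    conditioned on a hallucinated one; corrected_token_ids (B, T)."""
    # symmetric contrastive alignment over the in-batch pairs
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.T / temp
    targets = torch.arange(len(logits), device=logits.device)
    align = (F.cross_entropy(logits, targets)
             + F.cross_entropy(logits.T, targets)) / 2
    # correction term: cross-entropy on generating the corrected caption
    correct = F.cross_entropy(correction_logits.flatten(0, 1),
                              corrected_token_ids.flatten())
    return align + lam * correct
```

The idea, as the post frames it, is that the extra correction term is a drop-in addition to whatever alignment loss is already being used.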