Submit your interactive “explainables” and “explorables” that visualize, interpret, and explain AI. #IEEEVIS
📆 Deadline: July 30, 2025
visxai.io
If you are excited about interpretability and human-AI alignment — let’s chat!
And come see Abstraction Alignment ⬇️ in the Explainable AI paper session on Monday at 4:20 JST
Models can learn the right concepts but still be wrong in how they relate them.
✨Abstraction Alignment✨ evaluates whether models learn human-aligned conceptual relationships.
It reveals misalignments in LLMs💬 and medical datasets🏥.
🔗 arxiv.org/abs/2407.12543