Angie Boggust
angieboggust.bsky.social
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
Reposted by Angie Boggust
#VISxAI IS BACK!! 🤖📊

Submit your interactive “explainables” and “explorables” that visualize, interpret, and explain AI. #IEEEVIS

📆 Deadline: July 30, 2025

visxai.io
Workshop on Visualization for AI Explainability
The role of visualization in artificial intelligence (AI) has gained significant attention in recent years. With the growing complexity of AI models, the critical need for understanding their inner-workin...
May 7, 2025 at 9:56 PM
I’ll be at #CHI2025 🌸

If you are excited about interpretability and human-AI alignment, let's chat!

And come see Abstraction Alignment ⬇️ in the Explainable AI paper session on Monday at 4:20 JST
#CHI2025 paper on human–AI alignment!🧵

Models can learn the right concepts but still be wrong in how they relate them.

✨Abstraction Alignment✨ evaluates whether models learn human-aligned conceptual relationships.

It reveals misalignments in LLMs💬 and medical datasets🏥.

🔗 arxiv.org/abs/2407.12543
April 24, 2025 at 1:05 PM
#CHI2025 paper on human–AI alignment!🧵

Models can learn the right concepts but still be wrong in how they relate them.

✨Abstraction Alignment✨ evaluates whether models learn human-aligned conceptual relationships.

It reveals misalignments in LLMs💬 and medical datasets🏥.

🔗 arxiv.org/abs/2407.12543
April 14, 2025 at 3:48 PM