Matthew Kowal
@matthewkowal.bsky.social
PhD @ York University / Research Intern @ Ubisoft LaForge / Technical Lead @ VectorInst / Previously @ Toyota Research Institute and @ NextAI

Interpretability and Computer Vision
Reposted by Matthew Kowal
Our method reveals model-specific features too: DinoV2 (left) shows specialized geometric concepts (depth, perspective), while SigLIP (right) captures unique text-aware visual concepts.

This opens new paths for understanding model differences!

(7/9)
February 7, 2025 at 3:15 PM
Reposted by Matthew Kowal
Discover how our new mechanistic interpretability work uncovers universal concepts. Check it out on arXiv!
February 7, 2025 at 5:51 PM
Reposted by Matthew Kowal
🌌🛰️🔭 Wanna know which features are universal vs. unique in your models, and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)
February 7, 2025 at 3:15 PM
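
For readers curious what a sparse autoencoder over vision-model features looks like in practice, here is a minimal single-model sketch, not the paper's Universal SAE implementation: an overcomplete dictionary of concepts with a TopK sparsity constraint trained to reconstruct ViT-style token features. All names and hyperparameters (SparseAutoencoder, feats, n_concepts, k) are illustrative assumptions; see the arXiv preprint for the actual cross-model alignment method.

```python
# Minimal sparse-autoencoder sketch for vision-model features.
# Illustrative only; names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """Overcomplete dictionary of 'concepts' with TopK sparsity."""

    def __init__(self, d_model: int, n_concepts: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_model, bias=False)

    def forward(self, x: torch.Tensor):
        z = F.relu(self.encoder(x))                 # concept activations
        # Keep only the k strongest concepts per feature vector.
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        x_hat = self.decoder(z_sparse)              # reconstruction
        return x_hat, z_sparse


# Toy usage: pretend `feats` are pooled ViT features from some backbone.
feats = torch.randn(4096, 768)                      # (n_tokens, d_model)
sae = SparseAutoencoder(d_model=768, n_concepts=8192, k=32)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

for step in range(100):
    batch = feats[torch.randint(0, feats.shape[0], (256,))]
    x_hat, z = sae(batch)
    loss = F.mse_loss(x_hat, batch)                 # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The universal variant described in the thread additionally aligns concept dictionaries across multiple backbones (e.g., DinoV2 and SigLIP); the sketch above covers only the single-model case for brevity.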