✨ Introducing “No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces”
Apparently, you can achieve 🚨state-of-the-art🚨 model merging results! 🔥
IARs (image autoregressive models) — like the #NeurIPS2024 Best Paper — now lead in AI image generation. But at what risk?
IARs:
🔍 Are more likely than diffusion models (DMs) to reveal training data
🖼️ Leak entire training images verbatim
🧵 1/
@bcywinski.bsky.social, @kdeja.bsky.social
tl;dr: use features learned by sparse autoencoders (SAEs) to remove unwanted concepts in text-to-image diffusion models
arxiv.org/abs/2501.18052