Mansi Sakarvadia
mansisakarvadia.bsky.social
UChicago CS PhD Student | Department of Energy Computational Science Graduate Fellow | https://mansisak.com/
✨ TL;DR: Making decentralized learning aware of network topology boosts performance & resilience—vital for federated learning, edge AI, and IoT. Check out the paper for technical details & results! (6/6) mansisak.com/topology_awa...
Topology-Aware Knowledge Propagation in Decentralized Learning
June 19, 2025 at 11:19 PM
📈 Experiments show significant improvements: knowledge propagates faster than with traditional, topology-agnostic approaches across a range of real-world network settings. (5/6)
🧠 The system prioritizes information from well-connected or "influential" nodes, ensuring that high-quality knowledge quickly reaches more isolated or less-informed devices. No one gets left behind! (4/6)
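The weighting idea above can be sketched as a toy aggregation rule. This is a minimal illustration assuming degree centrality as the "influence" measure; the function, the 50/50 self-blend, and all names are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def influence_weighted_average(adj, params, node, self_weight=0.5):
    """Blend a node's parameters with its neighbors', weighting each
    neighbor by its degree -- a simple proxy for 'influence'."""
    neighbors = adj[node]
    degrees = np.array([len(adj[n]) for n in neighbors], dtype=float)
    weights = degrees / degrees.sum()
    neighbor_avg = sum(w * params[n] for w, n in zip(weights, neighbors))
    return self_weight * params[node] + (1 - self_weight) * neighbor_avg

# Toy 4-node graph: node 0 is a well-connected hub, node 3 is peripheral.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
params = {n: np.full(2, float(n)) for n in adj}
updated = influence_weighted_average(adj, params, node=3)
```

Here the peripheral node 3 draws its entire neighbor contribution from the hub node 0, so knowledge at well-connected nodes reaches isolated devices in a single step.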
🔗 This work introduces topology-aware knowledge propagation, which tailors how models and information are shared based on each device’s place in the network, leading to more effective learning overall. (3/6)
🌐 Decentralized learning often involves many devices (like edge or mobile) collaborating without a central server. However, the way these devices connect—the network topology—can heavily impact learning quality. (2/6)
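A minimal topology-agnostic baseline for this setting is gossip averaging: each device mixes its model only with its direct neighbors, with no central server. A toy sketch on a ring of four devices (all names are illustrative assumptions):

```python
import numpy as np

def gossip_round(adj, params):
    """One synchronous gossip round: each node replaces its model with
    the plain average of its own parameters and its neighbors'."""
    new = {}
    for node, neighbors in adj.items():
        group = [params[node]] + [params[n] for n in neighbors]
        new[node] = np.mean(group, axis=0)
    return new

# Ring of 4 devices: each talks only to its two adjacent neighbors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
params = {n: np.full(2, float(n)) for n in adj}
for _ in range(20):
    params = gossip_round(adj, params)
```

Repeated rounds drive every node toward the global average, but how quickly that happens depends heavily on the topology, which is exactly the sensitivity the thread describes.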
7/ 🙏 Special Thanks:
A huge shoutout to my incredible co-authors from multiple institutions for their contributions to this work:
Aswathy Ajith, Arham Khan, @nathaniel-hudson.bsky.social, @calebgeniesse.bsky.social, Yaoqing Yang, @kylechard.bsky.social, @ianfoster42.bsky.social, Michael Mahoney
March 4, 2025 at 6:15 PM
6/ 🌍 Scalable Impact:
Our methods aren’t just for small models! We show that they scale effectively to larger LMs, providing robust memorization mitigation without compromising performance across model sizes. Exciting progress for real-world applications!
5/💡Best Approach:
Our proposed unlearning method, BalancedSubnet, outperforms others by effectively removing memorized info while maintaining high accuracy.
4/🧪 Key Findings:
Unlearning-based methods are faster and more effective than regularization or fine-tuning in mitigating memorization.
3/⚡Introducing TinyMem:
We created TinyMem, a suite of small, efficient models designed to help test and benchmark memorization mitigation techniques. TinyMem allows for quick experiments with lower computational costs.
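One common way a benchmark like this quantifies memorization is prompted exact-match regeneration: prompt with a prefix of a training sequence and check whether the model completes it verbatim. This is a generic sketch of that metric, not TinyMem's actual API; the `generate` interface and all names are assumptions:

```python
def memorization_rate(generate, sequences, prompt_len=4):
    """Fraction of training sequences a model completes verbatim when
    prompted with their first `prompt_len` tokens."""
    hits = 0
    for seq in sequences:
        prompt, target = seq[:prompt_len], seq[prompt_len:]
        hits += generate(prompt, len(target)) == target
    return hits / len(sequences)

# Toy stand-in "model": a lookup table that has memorized one sequence.
memorized = {(1, 2, 3, 4): (5, 6, 7, 8)}
def generate(prompt, n_tokens):
    return memorized.get(tuple(prompt), (0,) * n_tokens)

rate = memorization_rate(generate, [(1, 2, 3, 4, 5, 6, 7, 8),
                                    (9, 9, 9, 9, 9, 9, 9, 9)])
```

The toy model reproduces one of the two sequences exactly, giving a memorization rate of 0.5; a successful mitigation method should drive this rate down without hurting accuracy elsewhere.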
2/ 🚨 Main Methods:
We test 17 methods—regularization, fine-tuning, and unlearning—5 of which we propose. These methods aim to remove memorized info from LMs while preserving performance.
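As a deliberately simplified illustration of the unlearning family: the most basic recipe is gradient ascent on the examples to forget, balanced against gradient descent on data to retain. This toy linear-regression sketch (all names, the learning rate, and the equal balancing are assumptions, not one of the paper's 17 methods) shows the mechanic:

```python
import numpy as np

def unlearning_step(w, forget_xy, retain_xy, lr=0.05):
    """One toy unlearning step for a linear model w: gradient *ascent*
    on the forget example's squared error, gradient descent on the
    retain example's, so the model sheds the memorized point while
    keeping its other behavior."""
    xf, yf = forget_xy
    xr, yr = retain_xy
    grad_forget = 2 * (w @ xf - yf) * xf  # gradient of forget loss
    grad_retain = 2 * (w @ xr - yr) * xr  # gradient of retain loss
    return w + lr * grad_forget - lr * grad_retain

w = np.array([1.0, 0.0])
forget = (np.array([1.0, 0.0]), 5.0)  # memorized pair to remove
retain = (np.array([0.0, 1.0]), 2.0)  # behavior to preserve
for _ in range(5):
    w = unlearning_step(w, forget, retain)
```

After a few steps the error on the forget pair grows while the retain pair's error shrinks; in practice, naive ascent like this can degrade the whole model if run too long, which is part of why more careful methods are needed.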