lucazapp.bsky.social
@lucazapp.bsky.social
Check out our new work on conditioning pre-trained generative models via activation steering. LineAS has been accepted at NeurIPS 2025. Code and paper are online:

💻 github.com/apple/ml-lin...
📄 arxiv.org/abs/2503.10679
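For readers unfamiliar with the idea, here is a minimal sketch of generic activation steering, not the LineAS method itself (see the paper and repo above for that): shift a layer's hidden states along a concept direction. All names, shapes, and the strength parameter below are illustrative assumptions.

```python
import numpy as np

def steer(hidden, direction, alpha=4.0):
    """Shift a layer's activations along a unit steering direction.

    hidden:    (seq_len, d_model) activations at one layer
    direction: (d_model,) concept vector, e.g. the mean difference
               between activations on positive vs. negative prompts
    alpha:     steering strength
    """
    direction = direction / np.linalg.norm(direction)
    return hidden + alpha * direction

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 16))
v = rng.normal(size=16)
h_steered = steer(h, v, alpha=2.0)

# Each token's projection onto the direction increases by exactly alpha.
v_hat = v / np.linalg.norm(v)
proj_before = h @ v_hat
proj_after = h_steered @ v_hat
print(np.allclose(proj_after - proj_before, 2.0))  # True
```

In a real model this would be applied inside a forward hook at a chosen layer; the sketch only shows the arithmetic of the intervention.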
October 23, 2025 at 9:16 AM
New blog post that explains our work on Controlling Diffusion and LLMs using steering and optimal transport:
machinelearning.apple.com/research/tra...

This work will be presented at ICLR 2025 in Singapore. See you there!

April 17, 2025 at 3:49 PM
Reposted
Thrilled to share the latest work from our team at @Apple, where we achieve interpretable and fine-grained control of LLMs and diffusion models via Activation Transport 🔥

📄 arxiv.org/abs/2410.23054
🛠️ github.com/apple/ml-act

0/9 🧵
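As a rough illustration of transporting activations (a toy sketch under an assumed per-dimension Gaussian model, not the estimator from the linked paper): in 1-D, the optimal-transport map between two Gaussians is affine, so each activation coordinate can be shifted and rescaled to match a target distribution.

```python
import numpy as np

def gaussian_ot_map(src, dst):
    """Per-dimension affine OT map under a 1-D Gaussian assumption:
    T(x) = mu_dst + (std_dst / std_src) * (x - mu_src)."""
    mu_s, sd_s = src.mean(0), src.std(0)
    mu_d, sd_d = dst.mean(0), dst.std(0)
    return lambda x: mu_d + (sd_d / (sd_s + 1e-8)) * (x - mu_s)

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(1000, 8))  # e.g. "neutral" activations
dst = rng.normal(3.0, 2.0, size=(1000, 8))  # e.g. "styled" activations

T = gaussian_ot_map(src, dst)
moved = T(src)

# Transported samples match the target's first two moments.
print(np.allclose(moved.mean(0), dst.mean(0), atol=1e-6))  # True
print(np.allclose(moved.std(0), dst.std(0), atol=1e-4))
```

The appeal of an affine map like this is that it can be folded into a model's weights at inference time; the actual method and its guarantees are in the paper above.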
December 10, 2024 at 1:09 PM
Reposted
Paper🧵 (cross-posted at X): When does composition of diffusion models "work"? Intuitively, the reason dog+hat works and dog+horse doesn’t has something to do with independence between the concepts being composed. The tricky part is to formalize exactly what this means. 1/
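A toy picture of the composition in question (a standard product-of-experts-style setup, not this paper's formalization): compose two concepts by summing their score functions and following the combined score. The Gaussian "concepts" below are purely illustrative.

```python
import numpy as np

# Hypothetical per-concept score functions (gradients of log-density).
# For an isotropic Gaussian centered at mu, the score is (mu - x) / sigma^2.
def gaussian_score(mu, sigma=1.0):
    return lambda x: (mu - x) / sigma**2

score_dog = gaussian_score(np.array([1.0, 0.0]))
score_hat = gaussian_score(np.array([0.0, 1.0]))

def composed_score(x):
    # Composition by adding scores, i.e. multiplying the densities.
    return score_dog(x) + score_hat(x)

# Noise-free gradient ascent on the composed log-density.
x = np.zeros(2)
for _ in range(100):
    x = x + 0.1 * composed_score(x)
print(np.round(x, 2))  # converges to the joint mode [0.5, 0.5]
```

When the concepts are "independent enough" this combined mode is meaningful (dog+hat); when they conflict (dog+horse) the product density need not correspond to anything coherent, which is the question the thread's paper formalizes.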
February 11, 2025 at 5:59 AM
Reposted
🚨 One question that has always intrigued me is the role of different ways to increase a model's capacity: parameters, parallelizable compute, or sequential compute?

We explored this through the lens of MoEs:
January 28, 2025 at 6:26 AM
Reposted
The Apple Machine Learning Research (MLR) team in Paris has openings for both FTE roles and a short-term post-doc position to contribute to our team's research agenda. Researchers at Apple's MLR (led by Samy Bengio) target impactful publications in top-tier ML venues and open-source software releases.
December 18, 2024 at 5:05 PM