Marco Fumero
@marcofm.bsky.social
PostDoc @ISTAustria 🧑🏻‍💻 | Organizer of @unireps.bsky.social | Member @ellis.eu | Prev. PhD @SapienzaRoma @ELLISforEurope | @amazon AWS AI | @autodesk AI Lab | (he/him)
The latent vector field can be used to analyze the training dynamics of neural networks, giving insights on how representations form during the training process.
June 4, 2025 at 5:26 PM
Attractors can be recovered from noise, enabling us to probe the knowledge encoded in the weights of vision foundation models, without requiring any input data.
June 4, 2025 at 5:26 PM
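As a rough, data-free illustration of this idea (a toy stand-in, not the paper's implementation): iterate a latent map from pure noise and read off where trajectories settle. Here `f` plays the role of one encode–decode pass of a trained model, and its stable fixed points play the role of attractors encoded in the weights.

```python
import numpy as np

# Hypothetical sketch: probing attractors without any input data.
# f(z) = tanh(2z) is a stand-in for one encode(decode(.)) pass; it has
# two stable fixed points acting as "attractors in the weights".

def f(z):
    return np.tanh(2.0 * z)

rng = np.random.default_rng(0)
z = rng.normal(size=1000)     # start from pure noise, no data required
for _ in range(200):          # iterate the latent map to convergence
    z = f(z)

attractors = np.unique(np.round(z, 4))
print(attractors)             # two symmetric attractors recovered
```

Every noise sample flows to one of the two stable fixed points, so the set of distinct limit values recovers the attractor structure.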
Inductive biases in the training process cause the formation of attractors in the latent vector field, characterizing memorization and generalization regimes of the network.
June 4, 2025 at 5:26 PM
Neural networks implicitly define a latent vector field on the data manifold, via autoencoding iterations🌀

This representation retains properties of the model, revealing memorization and generalization regimes, and characterizing distribution shifts

📜: arxiv.org/abs/2505.22785
June 4, 2025 at 5:26 PM
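A minimal sketch of the construction, with stand-in linear `encode`/`decode` maps rather than a trained network: iterating z → encode(decode(z)) defines a discrete vector field v(z) = encode(decode(z)) − z, whose fixed points are attractors.

```python
import numpy as np

# Toy sketch of a latent vector field from autoencoding iterations.
# encode/decode are stand-in linear maps, not a trained model.

def encode(x):
    return 0.5 * x  # contractive stand-in encoder

def decode(z):
    return z        # identity stand-in decoder

def follow_field(z, steps=50):
    traj = [z]
    for _ in range(steps):
        z = encode(decode(z))  # one autoencoding iteration in latent space
        traj.append(z)
    return np.stack(traj)

traj = follow_field(np.array([4.0, -2.0]))
print(np.round(traj[-1], 6))  # trajectory converges to the attractor at the origin
```

With a real autoencoder, the same iteration traces trajectories over the data manifold instead of contracting to a single point.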
LFMs make it possible to stitch together neural network modules, allowing sample-efficient information transfer with minimal to no prior correspondence. Additionally, LFMs can be applied on top of any off-the-shelf parametric transformation method.

(4/N)
December 5, 2024 at 6:09 PM
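A hypothetical sketch of what stitching looks like once two latent spaces are aligned: fit a map on a few paired anchor latents, then feed encoder A's latents through it into decoder B. The orthogonal matrix `R` stands in for the unknown relation between the two models' spaces; the linear fit is a simplification of the functional-map machinery.

```python
import numpy as np

# Toy stitching sketch: align latent space A to latent space B with a
# linear map fitted on a handful of paired anchors, then reuse it on
# fresh latents. R simulates the unknown ground-truth relation.
rng = np.random.default_rng(0)
d = 8
R = np.linalg.qr(rng.normal(size=(d, d)))[0]

anchors_a = rng.normal(size=(20, d))         # few paired samples
anchors_b = anchors_a @ R                    # their images in B's space
T, *_ = np.linalg.lstsq(anchors_a, anchors_b, rcond=None)

z_a = rng.normal(size=(100, d))              # fresh latents from encoder A
z_b = z_a @ T                                # stitched input for decoder B
print(np.allclose(z_b, z_a @ R, atol=1e-8))  # alignment generalizes to new latents
```

The point of the sketch is sample efficiency: the map fitted on 20 anchors transfers correctly to all 100 unseen latents.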
LFMs find correspondences from very little prior knowledge: nearly isometric spaces can be aligned while reducing the number of known correspondences by a factor of 100 (from 300 to 3 in the example below).

(3/N)
December 5, 2024 at 6:09 PM
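A hypothetical numpy/scipy sketch of the functional-map idea behind this: build kNN graphs on two (here, exactly isometric) point sets, take low-frequency Laplacian eigenvectors as a function basis on each, fit the small spectral map C from only a few known correspondences, and recover a dense point-to-point match by nearest neighbor in the mapped spectral embedding. The data, graph construction, and basis size are all illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

# Toy functional-map alignment of two isometric latent spaces.
def laplacian_basis(X, k_eig=4, k_nn=8):
    _, idx = cKDTree(X).query(X, k=k_nn + 1)
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = 1.0             # kNN adjacency (skip self)
    W = np.maximum(W, W.T)                 # symmetrize the graph
    L = np.diag(W.sum(1)) - W              # unnormalized graph Laplacian
    _, phi = eigh(L)
    return phi[:, :k_eig]                  # low-frequency basis

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 5))
R = np.linalg.qr(rng.normal(size=(5, 5)))[0]
B = A @ R                                  # isometric copy of A

phi_a, phi_b = laplacian_basis(A), laplacian_basis(B)
anchors = [0, 1, 2, 3]                     # only 4 known correspondences
C = phi_b[anchors].T @ np.linalg.pinv(phi_a[anchors].T)

# Transfer every point's spectral embedding and match by nearest neighbor.
mapped = phi_a @ C.T
match = cKDTree(phi_b).query(mapped)[1]
print((match == np.arange(len(A))).mean())  # fraction of correct matches
```

Because C lives in the small spectral basis, a handful of anchors suffices to pin it down, which is what makes the 300-to-3 reduction plausible for nearly isometric spaces.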
LFMs allow us to (i) robustly compare latent representations and (ii) identify local distortions induced by the map, providing an interpretable measure of representational similarity.

(2/N)
December 5, 2024 at 6:09 PM
Excited to present "Latent Functional Maps" at #NeurIPS!

We show how neural models can be aligned by matching function spaces on representation manifolds, providing a unified framework for model comparison, matching, and information transfer.

📜: arxiv.org/abs/2406.14183

👇🧵
December 5, 2024 at 6:09 PM