Olivier Salvado
@oliviersalvado.bsky.social
Prof of AI at QUT, Australia
Interested in machine learning applied to health.
#selfsupervision #machinelearning #unsupervisedlearning #VAE #autoencoder
Yes. Every centred distribution in high dimension ends up concentrating on a hypersphere, most with a thicker shell than a multivariate Gaussian. That includes the latents of autoencoders and even images (more easily seen if normalised), assuming one wants a centred latent even for a reg AE.
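A quick way to see the thin-shell / hypersphere effect referred to here is to look at the norms of centred Gaussian samples as the dimension grows: they concentrate around sqrt(d), and the relative spread of the radius shrinks. This is an illustrative sketch, not from the original thread.

```python
import numpy as np

# Illustrative sketch: the "thin shell" effect for a centred multivariate
# Gaussian. In d dimensions the sample norms concentrate around sqrt(d),
# so most of the mass sits near a hypersphere of that radius.
rng = np.random.default_rng(0)

for d in (2, 10, 100, 1000):
    x = rng.standard_normal((10_000, d))   # 10k centred Gaussian samples
    r = np.linalg.norm(x, axis=1)          # distance of each sample from the origin
    # The relative spread of the radius shrinks as d grows: the shell gets thinner.
    print(f"d={d:5d}  mean radius={r.mean():8.2f}  "
          f"sqrt(d)={np.sqrt(d):8.2f}  std/mean={r.std() / r.mean():.3f}")
```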
August 31, 2025 at 11:55 PM
I know, but in high dimension the latent (and the data) lies on a hypersphere, and drawing a hyperplane tangent to a 2D smooth manifold is not what happens. I don't know how to represent or imagine it, but I reckon this simplification is misleading without a proper warning.
August 31, 2025 at 7:46 PM
The bottom figure makes sense only if the data are 3D and lie on a 2D manifold, and the AE latent is 2D. In that case an AE is overkill (might as well use regression). AEs work in high dimension (data and latent), in which case the bottom figure is wrong, or at least very misleading.
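To illustrate the "overkill" point: if 3D data lie on a 2D (here linear) manifold, a two-component PCA, essentially a linear projection, already reconstructs the data down to the noise level, so no nonlinear autoencoder is needed. A hypothetical sketch, not from the original thread:

```python
import numpy as np

# Hypothetical illustration: 3D data lying (noisily) on a 2D plane.
# A 2-component PCA, i.e. a plain linear projection, already recovers it,
# which is the sense in which a nonlinear AE would be overkill here.
rng = np.random.default_rng(0)

n = 5_000
uv = rng.standard_normal((n, 2))                     # 2D coordinates on the manifold
basis = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.5]])                 # embeds the plane in 3D
x = uv @ basis + 0.01 * rng.standard_normal((n, 3))  # 3D points + small noise

# PCA via SVD of the centred data.
xc = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(xc, full_matrices=False)
z = xc @ vt[:2].T            # "encode": project onto the top-2 components
x_hat = z @ vt[:2]           # "decode": map back to 3D
print("mean reconstruction error:", np.abs(x_hat - xc).mean())  # ~ noise level
```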
August 31, 2025 at 4:23 AM
Finally, great to see ChatGPT produce something useful.
April 28, 2025 at 9:30 AM
Great post
April 15, 2025 at 1:02 PM
I agree. The definition I like is “ability to solve a problem never seen before”
December 18, 2024 at 8:33 PM
The image version is actually useful because the zooming on the gif does not work well.
December 4, 2024 at 7:37 PM