Pierre-François DP
@pfdeplaen.bsky.social
Final-year PhD student in computer vision at KU Leuven, Belgium.
Minimizing entropy only to realize my level of surprise increased
gh.io/pf
It has unfortunately not even been discussed so far... I'm in favour of the motion!
March 11, 2025 at 9:37 AM
4/n
We investigate various applications:
- extending the PCA algorithm to non-linear decorrelation
- learning minimally redundant representations for SSL
- learning features that generalize beyond label supervision in supervised learning
February 21, 2025 at 11:19 AM
3/n
Our method employs an adversarial game: small networks identify dependencies among the feature dimensions, while the main network exploits this information to reduce them.
February 21, 2025 at 11:17 AM
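[Editor's note: the sketch below is NOT the paper's algorithm. It is a hypothetical PyTorch illustration of what such an adversarial game can look like; the network sizes, losses, and training schedule are assumptions made up for the example.]

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an adversarial dependence-reduction loop.
# All sizes, losses, and schedules are assumptions, not the paper's method.
D = 8                                            # feature dimensionality (assumed)
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, D))
# One small critic per dimension: it tries to predict z_i from the other D-1
# dimensions, i.e. it "identifies dependencies" among the feature dimensions.
critics = nn.ModuleList(
    [nn.Sequential(nn.Linear(D - 1, 16), nn.ReLU(), nn.Linear(16, 1)) for _ in range(D)]
)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_cri = torch.optim.Adam(critics.parameters(), lr=1e-3)

def prediction_errors(z):
    """Summed squared error of each critic predicting z_i from the remaining dims."""
    errors = []
    for i, critic in enumerate(critics):
        rest = torch.cat([z[:, :i], z[:, i + 1:]], dim=1)
        errors.append(((critic(rest).squeeze(-1) - z[:, i]) ** 2).mean())
    return torch.stack(errors).sum()

for step in range(1000):
    x = torch.randn(256, 32)                     # stand-in batch of inputs
    z = encoder(x)

    # 1) Critics learn to exploit whatever dependencies remain in z.
    critic_loss = prediction_errors(z.detach())
    opt_cri.zero_grad(); critic_loss.backward(); opt_cri.step()

    # 2) The encoder is pushed to make its dimensions unpredictable from each
    #    other (here by maximizing the critics' error; in practice this would be
    #    combined with a task loss or normalization to avoid degenerate scaling).
    encoder_loss = -prediction_errors(encoder(x))
    opt_enc.zero_grad(); encoder_loss.backward(); opt_enc.step()
```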
2/n
Currently, most ML techniques minimize the covariance between output feature dimensions to extract minimally redundant representations.
However, this is not sufficient: linearly uncorrelated variables can still exhibit nonlinear dependencies.
February 21, 2025 at 11:16 AM
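[Editor's note: a minimal numeric illustration of that point, assuming NumPy; the variables are synthetic and chosen only for the example.]

```python
import numpy as np

# Two variables with essentially zero covariance that are nevertheless fully
# dependent: y is a deterministic function of x.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)   # symmetric around zero
y = np.abs(x)                              # completely determined by x

print(np.cov(x, y)[0, 1])                  # ~0: covariance minimization is "satisfied"
# Yet knowing x fully determines y, so the pair is maximally redundant.
```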
Did you know that PCA decompositions and SSL decorrelation techniques (e.g. Barlow Twins) don't necessarily extract minimally redundant/dependent features?
Our paper explains why and introduces an algorithm for general dependence minimization.
🧵
February 21, 2025 at 11:14 AM
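[Editor's note: a small synthetic sketch of the PCA case, assuming NumPy and scikit-learn; the data and the quadratic check are made up for illustration, not taken from the paper.]

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA makes the output components uncorrelated, yet they can remain dependent.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(50_000, 1))
X = np.hstack([x, x ** 2])                 # second feature is a function of the first

Z = PCA(n_components=2, whiten=True).fit_transform(X)
print(np.corrcoef(Z.T)[0, 1])              # ~0: components are decorrelated
print(abs(np.corrcoef(Z[:, 0] ** 2, Z[:, 1])[0, 1]))  # ~1: nonlinear dependence survives
```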
🚨 A peer-reviewed publication from MM'24 copied our CVPR 2023 paper! #plagiarism
The authors rephrased our method, but their approach is no different from ours.
Surprisingly, they cited us for general observations, yet did everything they could to hide our contributions from readers and reviewers.
January 10, 2025 at 11:52 AM