Minimizing entropy only to realize my level of surprise increased
gh.io/pf
We investigate various applications:
- extending the PCA algorithm to non-linear decorrelation
- learning minimally redundant representations for SSL
- learning features that generalize beyond label supervision in supervised learning
Our method sets up an adversarial game: small predictor networks learn to detect dependencies among the feature dimensions, while the main network uses their feedback to remove those dependencies.
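One way to read that game, as a minimal sketch in a PyTorch-style setup: each small predictor tries to reconstruct one feature dimension from the others, and the encoder is trained to make that prediction fail. All names, architectures, and hyperparameters here are hypothetical illustrations, not the authors' actual implementation (that lives behind the link above).

```python
import torch
import torch.nn as nn

dim = 8
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, dim))

# One small predictor per feature dimension: predictor i tries to
# reconstruct dimension i from the remaining dim - 1 dimensions.
predictors = nn.ModuleList(
    nn.Sequential(nn.Linear(dim - 1, 32), nn.ReLU(), nn.Linear(32, 1))
    for _ in range(dim)
)

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_pred = torch.optim.Adam(predictors.parameters(), lr=1e-3)

def dependence_loss(z):
    # Mean error of predicting each dimension from the others.
    # Low error => the dimensions are redundant / dependent.
    losses = []
    for i, pred in enumerate(predictors):
        others = torch.cat([z[:, :i], z[:, i + 1:]], dim=1)
        losses.append(((pred(others).squeeze(1) - z[:, i]) ** 2).mean())
    return torch.stack(losses).mean()

for step in range(1000):
    x = torch.randn(256, 32)  # stand-in for a real data batch

    # Predictors move to *detect* dependencies (minimize prediction error).
    loss_pred = dependence_loss(encoder(x).detach())
    opt_pred.zero_grad(); loss_pred.backward(); opt_pred.step()

    # Encoder moves to *remove* them (maximize the predictors' error);
    # in practice this term would be added to the main task loss.
    loss_enc = -dependence_loss(encoder(x))
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
```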
Currently, most ML techniques extract minimally redundant representations by minimizing the covariance between output feature dimensions.
However, this is not sufficient: linearly uncorrelated variables can still exhibit nonlinear dependencies.
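A quick numerical check of this point (plain NumPy, illustrative only): take a variable x symmetric around zero and y = x². Then y is a deterministic function of x, yet their covariance is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# x is symmetric around zero; y = x**2 is fully determined by x.
x = rng.standard_normal(100_000)
y = x ** 2

# cov(x, y) = E[x**3] - E[x] * E[x**2] ~ 0 for symmetric x,
# so decorrelation alone cannot see this dependence.
print(np.cov(x, y)[0, 1])       # ~ 0
print(np.corrcoef(x, y)[0, 1])  # ~ 0
```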
Our paper explains why and introduces an algorithm for general dependence minimization.
🧵
The authors rephrased our method, but their approach is no different from ours.
Surprisingly, they cited us for general observations while doing everything they could to hide our contributions from readers and reviewers.