Yash Shah
@ynshah.bsky.social
PhD student at Stanford. Self-proclaimed computational neuroscientist and humanist. Incomplete bio at https://ynshah3.github.io/.
And of course, because this is my first-ever post, I forgot to include hashtags! #ICML2025
July 13, 2025 at 9:14 PM
Check out the paper if interested, and come talk to me during the poster session (Thursday, July 17, at 4:30pm) if you're in Vancouver! icml.cc/virtual/2025.... [11/n]
ICML Poster: Confounder-Free Continual Learning via Recursive Feature Normalization (ICML 2025) · icml.cc
July 13, 2025 at 9:09 PM
Finally, because R-MDN operates at the level of individual examples, it can be integrated into both convolutional neural networks and vision transformers, addressing one of the significant limitations of the MDN algorithm. [10/n]
July 13, 2025 at 9:08 PM
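For context, a simplified sketch of that contrast (an illustrative assumption about MDN's batch-level mechanics, not code from either paper): batch-level residualization needs a whole minibatch to solve a least-squares problem, whereas a per-example recursive update needs no batch statistics at all.

```python
import numpy as np

# Simplified, assumed illustration: MDN-style residualization solves least
# squares over an entire minibatch, so it depends on batch-level statistics.
def batch_residualize(F, C):
    """F: (batch, features); C: (batch, confounders). Needs the full batch."""
    beta, *_ = np.linalg.lstsq(C, F, rcond=None)  # batch least-squares fit
    return F - C @ beta

# A recursive per-example update (see the RLS sketch further down the thread)
# touches one example at a time, so no batch statistics are required, which
# is what lets it sit inside per-example computation in CNNs and ViTs alike.
```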
And R-MDN makes equitable predictions across population groups, such as boys and girls, when performing sex classification on the ABCD dataset (Casey et al., 2018) in the presence of pubertal development scores as the confounder. [9/n]
July 13, 2025 at 9:08 PM
R-MDN can also remove the influence of multiple confounding variables, as seen when testing on the ADNI dataset (Mueller et al., 2005). [8/n]
July 13, 2025 at 9:07 PM
Since R-MDN is a normalization layer, it can be tacked onto a variety of existing model architectures, as in the sketch below. [7/n]
July 13, 2025 at 9:07 PM
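To make "tacked onto" concrete, a minimal PyTorch sketch of where such a layer could sit in a model. RMDNLayer is an assumed name and interface, not the paper's published API, and its recursive state update is omitted here (see the RLS sketch after the [5/n] post below).

```python
import torch
import torch.nn as nn

class RMDNLayer(nn.Module):
    """Assumed stand-in for an R-MDN-style normalization layer: subtracts the
    confounders' current estimated linear contribution from the features.
    (The recursive least squares state update is omitted for brevity.)"""

    def __init__(self, num_features, num_confounders):
        super().__init__()
        # running regression coefficients: one row per confounder (+ intercept),
        # one column per feature
        self.register_buffer("beta", torch.zeros(num_confounders + 1, num_features))

    def forward(self, feats, confounders):
        x = torch.cat([confounders, torch.ones(len(feats), 1)], dim=1)
        return feats - x @ self.beta  # confounder-residualized features

class ConfounderFreeMLP(nn.Module):
    """Toy model: the layer slots in exactly where BatchNorm/LayerNorm would."""

    def __init__(self, in_dim, hidden, n_classes, n_confounders):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.rmdn = RMDNLayer(hidden, n_confounders)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x, confounders):
        h = torch.relu(self.fc1(x))
        h = self.rmdn(h, confounders)  # remove confounder influence here
        return self.fc2(h)
```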
R-MDN effectively removes confounder influence from learned DNN features, as rigorously verified both in synthetically controlled environments and on real-world datasets. [6/n]
July 13, 2025 at 9:07 PM
We propose Recursive Metadata Normalization (R-MDN), a normalization layer that leverages the statistical recursive least squares algorithm to iteratively update its internal parameters based on previously computed values whenever new data are received. [5/n]
July 13, 2025 at 9:06 PM
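For intuition, a minimal numpy sketch of such a recursive least squares (RLS) update for a single scalar feature; the class name, interface, and initialization constant are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

class RecursiveResidualizer:
    """One RLS residualization unit: regress a scalar feature on confounders,
    updating the coefficients one example at a time, and emit the residual."""

    def __init__(self, n_confounders, delta=1e3):
        d = n_confounders + 1          # +1 for an intercept term
        self.beta = np.zeros(d)        # running regression coefficients
        self.P = np.eye(d) * delta     # running inverse-covariance estimate

    def update(self, feature, confounders):
        x = np.append(confounders, 1.0)              # augment with intercept
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)                      # RLS gain vector
        self.beta += k * (feature - x @ self.beta)   # recursive coefficient update
        self.P -= np.outer(k, Px)                    # rank-one update of P
        return feature - x @ self.beta               # confounder-free residual
```

Feeding examples through update as they arrive keeps beta close to the full least-squares fit over all data seen so far, which is what makes this style of update a natural fit for sequentially arriving data.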
However, in continual learning, data become available sequentially, often over the span of several years, as in longitudinal studies. [4/n]
July 13, 2025 at 9:06 PM
Prior works such as BR-Net (Adeli et al., 2020), MDN (Lu et al., 2021), and P-MDN (Vento et al., 2022), which learn confounder-invariant representations in DNNs, operate within a static learning setting and assume that the algorithm has access to all data at the outset of training. [3/n]
July 13, 2025 at 9:05 PM
Confounders are variables that influence both the outcome (i.e., the output) and the exposure (i.e., the input) in a study, causing spurious associations. [2/n]
July 13, 2025 at 9:05 PM
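A quick toy illustration of that definition (an assumed example, not from the paper): when a shared cause drives both the input and the output, the two correlate even though neither influences the other, and residualizing against the confounder makes the association vanish.

```python
import numpy as np

# Assumed toy example: confounder c drives both exposure x and outcome y.
rng = np.random.default_rng(0)
n = 10_000
c = rng.normal(size=n)                      # confounder
x = c + rng.normal(scale=0.5, size=n)       # exposure: depends only on c
y = c + rng.normal(scale=0.5, size=n)       # outcome: depends only on c

print(np.corrcoef(x, y)[0, 1])              # ~0.8, despite no direct x -> y link

# Regressing c out of x removes the spurious association.
slope, intercept = np.polyfit(c, x, 1)
x_res = x - (slope * c + intercept)
print(np.corrcoef(x_res, y)[0, 1])          # ~0
```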