Martin Mundt
@martinmundt.bsky.social
Professor for Lifelong Machine Learning @ Uni Bremen | OWL-ML Lab: https://owl-ml.com | Board @ ContinualAI | QueerInAI | CoLLAs 2026 Program Chair | He/him 🏳️‍🌈🇪🇺
We have 2 variants of the dataset:
1. Confounders only ever appear disjointly
2. Confounders can reappear later in other “tasks”

We did observe that in disjoint scenarios, solutions like “blue + large + metallic” (a concatenation of the confounders) emerge.

Did you have something like this in mind?
May 4, 2025 at 7:09 AM
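A minimal sketch of the two variants as task streams, assuming three tasks each confounded by one CLEVR-style attribute; the attribute names and the Task structure are illustrative placeholders, not the actual ConCon specification. It only makes concrete why the “concatenation of confounders” shortcut is consistent with every task in the disjoint variant.

```python
# Toy illustration of the two ConCon-style variants as task streams.
# Attribute names and the dataclass are hypothetical, chosen for illustration;
# they are not the actual ConCon file format or task specification.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    confounder: str  # attribute spuriously correlated with the positive class


# Variant 1: each confounder appears in exactly one task (disjoint).
disjoint_stream = [
    Task("task_0", "blue"),
    Task("task_1", "large"),
    Task("task_2", "metallic"),
]

# Variant 2: confounders can reappear in later tasks.
reappearing_stream = [
    Task("task_0", "blue"),
    Task("task_1", "large"),
    Task("task_2", "blue"),  # "blue" shows up again in a later task
]

# In the disjoint stream, the union of all confounders seen so far
# ("blue + large + metallic") stays consistent with every task,
# which is the shortcut solution mentioned in the post above.
shortcut = sorted({t.confounder for t in disjoint_stream})
print(shortcut)  # ['blue', 'large', 'metallic']
```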
We believe ConCon is crucial for future continual learning studies. Why?

In the presence of confounders, we show that models succeed when trained jointly on all data, but sequential training on accumulated tasks fails without further action!

This calls the common CL "upper bound" with infinite memory into question.
May 2, 2025 at 9:48 AM
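For readers less familiar with the setup the post refers to: below is a minimal sketch (PyTorch, toy synthetic data, placeholder model and hyperparameters, not the ConCon experiments or the paper's protocol) contrasting joint training on all data with sequential training where the model keeps its weights and is trained on the accumulated union of all tasks seen so far, i.e. the memory-unconstrained regime usually treated as the continual learning upper bound.

```python
# Sketch of the two regimes compared in the post: joint training on all data
# vs. sequential training on accumulated tasks. Data, model, and
# hyperparameters are placeholders, not the ConCon setup.
import torch
from torch import nn


def make_task(seed: int, n: int = 256, dim: int = 16):
    """Toy binary classification task; stands in for one confounded ConCon task."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    y = (x[:, 0] > 0).long()  # placeholder labelling rule, not the ConCon ground truth
    return x, y


def train(model, x, y, epochs: int = 5):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model


tasks = [make_task(seed) for seed in range(3)]

# Joint training: one model sees all tasks at once.
joint_model = nn.Linear(16, 2)
x_all = torch.cat([x for x, _ in tasks])
y_all = torch.cat([y for _, y in tasks])
train(joint_model, x_all, y_all)

# Sequential training on accumulated tasks: the same model continues training
# after each task arrives, on the union of everything observed so far.
seq_model = nn.Linear(16, 2)
seen_x, seen_y = [], []
for x, y in tasks:
    seen_x.append(x)
    seen_y.append(y)
    train(seq_model, torch.cat(seen_x), torch.cat(seen_y))
```

The post's observation is that, with confounders present, the first regime can succeed while the second fails without further action, even though the sequential learner's final training set contains exactly the same data.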