2. Confounders can reappear later in other “tasks”
We did observe that, in disjoint scenarios, solutions like “blue + large + metallic” (a concatenation of confounders) emerge. Is this the kind of behavior you had in mind?
In the presence of confounders, we show that models succeed when trained jointly on all data, yet sequential training on the accumulated tasks fails without further intervention.
This calls into question the common CL "upper bound" of training with infinite memory.