the fancy ML I see these days is usually fine-tuning rather than pre-training models - the pretrained weights already encode strong priors, which makes even mediocre data pretty workable
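to make the point concrete, here's a minimal sketch of that workflow - start from pretrained ImageNet weights, freeze the backbone so the priors are preserved, and train only a fresh head. The 10-class task and the stand-in batch are hypothetical; the model and layer names are torchvision's:

```python
import torch
import torch.nn as nn
from torchvision import models

# load a backbone with pretrained ImageNet priors baked in
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# freeze every pretrained parameter so only the new head learns
for p in model.parameters():
    p.requires_grad = False

# swap in a fresh head for a hypothetical 10-class task
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# stand-in batch; real code would loop over a DataLoader
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 10, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```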
I might be biased, though - I've never been bitten too badly by catastrophic forgetting
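for what it's worth, one common hedge when forgetting does show up is discriminative learning rates: let the pretrained backbone move very slowly while the new head moves fast. A sketch under the same assumptions as above (hypothetical 10-class task, torchvision names):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)  # hypothetical 10-class head

# pretrained backbone gets a tiny learning rate so the priors barely drift;
# the fresh head gets a much larger one so it can actually fit the task
backbone = [p for name, p in model.named_parameters() if not name.startswith("fc")]
optimizer = torch.optim.AdamW([
    {"params": backbone, "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```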