Google DeepMind
Paris
2) Totally agree! I wasn't justifying the Kafkaesque task. My approach, in general, is more to look for a way around it than to protest.
All the emails I saw as an SAC were "help your ACs write new meta-reviews".
While I agree that general chairs, program chairs, local chairs, etc. do a lot of work (I sometimes get a glimpse of that myself with ALT/AISTATS), once you have a bit of money, it gets easier by using conference organising services.
But as a NeurIPS SAC I must say that the last definition should not exist - ACs are supposed to change the decisions and update their meta-review themselves.
arxiv.org/abs/2402.05468
Our end-to-end method captures a regression and a classification objective, as well as the autoencoder loss.
We see it as "building a bridge" between these different problems.
8/8
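To make the "bridge" concrete, here is a minimal sketch of what a combined objective could look like. The weights and the encode/decode functions are illustrative assumptions, not the paper's exact formulation: a classification loss on the encoded target, a regression loss on the decoded prediction, and the target-autoencoder loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical combined objective: classification + regression + autoencoder.
# `encode` maps a scalar target y to a k-way distribution; `decode` is mu.
def total_loss(logits, y, encode, decode, w_cls=1.0, w_reg=1.0, w_ae=1.0):
    p = F.softmax(logits, dim=-1)
    loss_cls = F.cross_entropy(logits, encode(y))   # soft (probability) targets
    loss_reg = F.mse_loss(decode(p), y)             # decoded prediction vs y
    loss_ae = F.mse_loss(decode(encode(y)), y)      # reconstruct the target itself
    return w_cls * loss_cls + w_reg * loss_reg + w_ae * loss_ae
```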
We compare different training methods, showing up to 25% improvement over the least-squares baseline error for our full end-to-end method, across 8 datasets.
7/8
At inference time, you use the same decoder μ to perform your prediction!
6/8
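A sketch of that inference step (shapes and architecture are placeholder assumptions): the network predicts a distribution over the k codes, and the same decoder μ used during training maps it to a scalar prediction.

```python
import torch
import torch.nn.functional as F

# Hypothetical inference pipeline: features -> logits -> softmax -> mu -> y_hat.
k = 64
centers = torch.linspace(0.0, 1.0, k)
model = torch.nn.Sequential(torch.nn.Linear(16, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, k))

def mu(p):
    # decoder: probability-weighted mean of the bin centers
    return p @ centers

x = torch.randn(32, 16)                      # a batch of 32 inputs, 16 features
y_hat = mu(F.softmax(model(x), dim=-1))      # scalar prediction per example
```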
The encoder and the associated decoder μ (in blue in the figure) can be trained with an autoencoder loss, e.g. using a softmax of the distances to k different centers.
5/8
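A minimal sketch of such a soft-binning encoder and its decoder μ, trained with a reconstruction (autoencoder) loss. The temperature and the placement of the centers are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

# Soft-binning target encoder + decoder mu (centers and temperature assumed).
k, tau = 64, 0.05
centers = torch.linspace(0.0, 1.0, k)

def encode(y):
    # soft assignment: softmax of negative distances to the k centers
    return F.softmax(-(y.unsqueeze(-1) - centers).abs() / tau, dim=-1)

def decode(p):
    # decoder mu: map a probability vector back to a scalar target
    return p @ centers

def autoencoder_loss(y):
    # reconstruction error of encode -> decode on the targets themselves
    return F.mse_loss(decode(encode(y)), y)
```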
- Soft-binning: encode the target as a probability, not just a one-hot.
- Learnt target encoders: instead of designing this transformation by hand, learn it from data (see the sketch below).
- Train everything jointly!
4/8
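On the second point, here is one way a learnt target encoder/decoder could look; the architecture here is purely an assumption for illustration: a small MLP maps the scalar target to a k-way distribution, and a learnt decoder μ maps that distribution back to a scalar, trained with the autoencoder loss.

```python
import torch
import torch.nn.functional as F

k = 64

class TargetEncoder(torch.nn.Module):
    # hypothetical learnt encoder: scalar target -> k-way distribution
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(),
                                       torch.nn.Linear(64, k))

    def forward(self, y):                      # y: (batch,)
        return F.softmax(self.net(y.unsqueeze(-1)), dim=-1)

class Decoder(torch.nn.Module):
    # learnt decoder mu: distribution -> scalar, via trainable centers
    def __init__(self):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.linspace(0.0, 1.0, k))

    def forward(self, p):
        return p @ self.centers

encoder, mu = TargetEncoder(), Decoder()
y = torch.rand(32)
ae_loss = F.mse_loss(mu(encoder(y)), y)        # trainable autoencoder loss
```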
It seems strange, but it's been shown to work well in many settings, even for RL applications.
3/8
In many tasks with a continuous target (price, rating, pitch...), instead of training on a regression objective with least squares [which seems super natural!], people have been training their models using classification!
2/8
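For context, a minimal sketch of that common recipe (not the paper's method, just the baseline practice it builds on, with a hypothetical bin layout): discretise the target into k bins, train with cross-entropy, and read a point prediction back out as the expected bin center.

```python
import torch
import torch.nn.functional as F

# Classic "regression as classification": bin the target, train with cross-entropy.
k = 64
centers = torch.linspace(0.0, 1.0, k)          # bin centers over a [0, 1] target

def target_to_class(y):
    # hard assignment: index of the nearest bin center
    return torch.argmin((y.unsqueeze(-1) - centers).abs(), dim=-1)

def classification_loss(logits, y):
    return F.cross_entropy(logits, target_to_class(y))

def point_prediction(logits):
    # expected bin center under the predicted distribution
    return F.softmax(logits, dim=-1) @ centers
```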
Link: arxiv.org/abs/2502.02996
1/8