The authors built L-GATr 🐊: a transformer that's equivariant to the Lorentz symmetry of special relativity. It performs remarkably well across different tasks in high-energy physics.
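The thread doesn't spell out L-GATr's internals, but one standard way to get Lorentz equivariance in attention is to compute the attention scores from Minkowski inner products, which are invariant under boosts and rotations. Here is a minimal PyTorch sketch of that general idea, not the actual L-GATr code; `minkowski_attention` is a hypothetical name:

```python
import torch

# Minkowski metric, signature (+, -, -, -).
eta = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))

def minkowski_attention(q, k, v):
    """q, k, v: (N, 4) four-vectors. Scores use the Minkowski inner
    product q^T eta k, which is a Lorentz scalar, so the attention
    weights do not change when the inputs are boosted or rotated."""
    scores = torch.einsum("ia,ab,jb->ij", q, eta, k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v  # output transforms like the values: equivariant

# A boost along z with rapidity 0.5 (satisfies L^T eta L = eta).
phi = torch.tensor(0.5)
L = torch.eye(4)
L[0, 0] = L[3, 3] = torch.cosh(phi)
L[0, 3] = L[3, 0] = torch.sinh(phi)

x = torch.randn(6, 4)
out = minkowski_attention(x, x, x)
out_boosted = minkowski_attention(x @ L.T, x @ L.T, x @ L.T)
# Equivariance check: boosting the inputs boosts the output the same way.
print(torch.allclose(out_boosted, out @ L.T, atol=1e-5))
```

Because the scores are Lorentz scalars, the softmax weights are frame-independent and the output inherits the transformation behavior of the values.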
Two general recipes for making a standard backbone equivariant:
(1) use a small network that equivariantly predicts local frames, and express inputs in these local frames (see the sketch after this list).
(2) add frame-to-frame transformations in the message passing (or attention) of your backbone architecture.
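To make option (1) concrete, here is a minimal PyTorch sketch, not the paper's implementation: the names (`gram_schmidt`, `local_frame_features`) are hypothetical, and building frames via Gram-Schmidt on two equivariant vectors is just one common choice. Offsets expressed in such frames are rotation-invariant, so any non-equivariant backbone can consume them.

```python
import torch

def gram_schmidt(a, b):
    """Orthonormal frame (rows e1, e2, e3) from two vectors, per point."""
    e1 = a / a.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    b = b - (b * e1).sum(-1, keepdim=True) * e1
    e2 = b / b.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    e3 = torch.cross(e1, e2, dim=-1)
    return torch.stack([e1, e2, e3], dim=-2)  # (N, 3, 3)

def local_frame_features(pos):
    """pos: (N, 3) points. Neighbor offsets expressed in per-point
    frames built from equivariant vectors, hence rotation-invariant."""
    v1 = pos - pos.mean(0, keepdim=True)        # offset from centroid
    d = torch.cdist(pos, pos).fill_diagonal_(float("inf"))
    v2 = pos[d.argmin(1)] - pos                 # offset to nearest neighbor
    frames = gram_schmidt(v1, v2)               # rotates with the input
    rel = pos.unsqueeze(0) - pos.unsqueeze(1)   # rel[i, j] = x_j - x_i
    # Express offsets in each point's local frame: F_i (x_j - x_i).
    return torch.einsum("iab,ijb->ija", frames, rel)

pos = torch.randn(8, 3)
R = torch.linalg.qr(torch.randn(3, 3)).Q
R = -R if torch.det(R) < 0 else R  # make it a proper rotation
# Invariance check: the features are unchanged under a global rotation.
print(torch.allclose(local_frame_features(pos @ R.T),
                     local_frame_features(pos), atol=1e-4))
```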
Appendix A covers a novel way to amplify likelihood training with classifier reweighting, aka DiscFormer. To avoid a classifier unweighting step after training, we reweight the training data to increase the difference between model and data, aka DiscFormation.
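The DiscFormer training loop isn't reproduced in this thread, but the classifier-reweighting idea it builds on is standard: train a classifier to separate data from model samples; for an optimal classifier D, the odds D/(1 - D) equal the likelihood ratio p_data/p_model, so exp(logit) gives a per-sample weight. A toy PyTorch sketch of that general mechanism, with 1-D Gaussian `data` and `gen` samples as placeholders:

```python
import torch
from torch import nn

# Toy 1-D stand-ins: "data" from the target, "gen" from the trained model.
data = torch.randn(4096, 1) * 1.0 + 0.2
gen = torch.randn(4096, 1) * 1.2

clf = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Train a discriminator: label data 1, model samples 0.
labels = torch.cat([torch.ones(len(data), 1), torch.zeros(len(gen), 1)])
for _ in range(500):
    opt.zero_grad()
    loss = bce(clf(torch.cat([data, gen])), labels)
    loss.backward()
    opt.step()

# The logit is log(D/(1-D)), so exp(logit) estimates p_data/p_model,
# a likelihood-ratio weight for each model sample.
with torch.no_grad():
    w = clf(gen).exp().squeeze(1)
print(w.mean().item(), w.std().item())
```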
Looking forward to exciting discussions at NeurIPS!