#Equivariance
“Dear mathematician, can you explain to me what mass is?”

“Very simple, dear: the mass of a dynamical system is the cohomology class of the Galilean group representing lack of equivariance of the moment map on the symplectic manifold that is the phase space of the system.”
November 10, 2025 at 1:40 PM
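(An aside for the curious, not from the post itself: the joke is a real theorem, Souriau's classical result. A moment map μ for a symplectic G-action can fail to be equivariant, and the failure is a 𝔤*-valued 1-cocycle on G whose cohomology class is the invariant in question; notation below is mine.)

```latex
% Standard symplectic-geometry facts (Souriau); notation is the editor's, not the post's.
% The moment map \mu : M \to \mathfrak{g}^* intertwines the G-action with the
% coadjoint action only up to a 1-cocycle \theta:
\[
  \mu(g \cdot x) \;=\; \mathrm{Ad}^*_g\, \mu(x) + \theta(g),
  \qquad
  \theta(gh) \;=\; \theta(g) + \mathrm{Ad}^*_g\, \theta(h).
\]
% The class [\theta] \in H^1(G, \mathfrak{g}^*) is the "lack of equivariance";
% for the Galilei group acting on the phase space of a free particle it is
% classified by a single real parameter: the mass m.
```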
But these last few days, two of my posts on Twitter (x.com/gro_tsen/sta... and x.com/gro_tsen/sta...) went strangely viral.

Not ✱hugely✱ viral, but ✵annoyingly✵ viral. The sort of viral where you get lots of uninteresting replies or quote-posts by idiots, enough to drown all other notifications.
November 11, 2025 at 10:36 AM
Want to read more about HIP? Do you have any burning questions, like:

Can we predict Hessians with correct equivariance?
Can we do this with any MLIP, like MACE or UMA?
Do we even need Hessians?

Head over to The Matter Blotter to find out now!

aspuru.substack.com/p/hip-hessia...

(Spoiler: Yes)
November 4, 2025 at 9:47 PM
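(A note on what "correct equivariance" means for the first question above; this is a standard consequence of rotation invariance, not something pulled from the blog post. For an invariant energy E on atomic coordinates x ∈ ℝ^{3N}, with a rotation R acting block-diagonally as diag(R, …, R), forces rotate as vectors and the Hessian as a rank-2 tensor.)

```latex
% Differentiating E(Rx) = E(x) once and twice with respect to x gives:
\[
  E(Rx) = E(x)
  \;\Longrightarrow\;
  \nabla E(Rx) = R\,\nabla E(x),
  \qquad
  H(Rx) = R\, H(x)\, R^{\top},
\]
% so a correctly equivariant Hessian predictor must transform as a rank-2
% tensor, not merely reproduce Hessian entries frame by frame.
```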
Given a vision architecture, we create a flopping-equivariant version by block-diagonalizing all linear layers (and 1x1 convs) and minimally modifying other layers to ensure equivariance. For instance, for pointwise non-linearities σ, we transform the features to the spatial domain and back.
February 10, 2025 at 7:35 AM
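(To make the recipe in the post above concrete, here is a minimal PyTorch sketch; the class names are hypothetical and this is not the authors' code. Features are split into a flip-invariant half and a sign-flipping half, linear maps are block-diagonal in that basis, and pointwise nonlinearities are applied after transforming to the "spatial" pair basis, where a flop just swaps the two halves.)

```python
# A minimal sketch, not the authors' implementation; class names are hypothetical.
# Features live in the "Fourier" basis: an even half e (invariant under a flop)
# and an odd half o (changes sign under a flop), so the flop acts as diag(+I, -I).
import torch
import torch.nn as nn

class FloppingLinear(nn.Module):
    """Block-diagonal linear layer: commutes with diag(+I, -I) by construction."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_even = nn.Linear(dim, dim)              # invariant -> invariant
        self.w_odd = nn.Linear(dim, dim, bias=False)   # a bias here would break odd parity

    def forward(self, e: torch.Tensor, o: torch.Tensor):
        return self.w_even(e), self.w_odd(o)

class FloppingGELU(nn.Module):
    """Pointwise nonlinearity applied in the spatial basis, where a flop swaps halves."""
    def forward(self, e: torch.Tensor, o: torch.Tensor):
        s = 2 ** -0.5
        l, r = s * (e + o), s * (e - o)   # to spatial basis: a flop sends (l, r) -> (r, l)
        l, r = torch.nn.functional.gelu(l), torch.nn.functional.gelu(r)  # commutes with the swap
        return s * (l + r), s * (l - r)   # back to the even/odd basis

# Sanity check: flipping the input (o -> -o) should flip the output the same way.
lin, act = FloppingLinear(8), FloppingGELU()
e, o = torch.randn(2, 8), torch.randn(2, 8)
e1, o1 = act(*lin(e, o))
e2, o2 = act(*lin(e, -o))
assert torch.allclose(e1, e2) and torch.allclose(o1, -o2)
```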
Zongzhao Li, Jiacheng Cen, Bing Su, Wenbing Huang, Tingyang Xu, Yu Rong, Deli Zhao: Large Language-Geometry Model: When LLM meets Equivariance https://arxiv.org/abs/2502.11149 https://arxiv.org/pdf/2502.11149 https://arxiv.org/html/2502.11149
February 18, 2025 at 10:32 AM
For some machine-learning magic, check out this work with many collaborators, where we reassess (and fix) all the equivariance constraints in predicting electronic density responses, arriving at a super efficient + accurate GPR framework! arxiv.org/abs/2501.11019
February 12, 2025 at 8:57 AM
3/ PooDLe addresses these challenges by unifying a dense flow-equivariance objective over global crops and a view-invariance objective over smaller subcrops that serve as pseudo-iconic views. Crops are sampled from pairs of video frames, with motion as a natural augmentation.
April 20, 2025 at 8:31 PM
Zachary Schlamowitz, Andrew Bennecke, Daniel J. Tward: Moment kernels: a simple and scalable approach for equivariance to rotations and reflections in deep convolutional networks https://arxiv.org/abs/2505.21736 https://arxiv.org/pdf/2505.21736 https://arxiv.org/html/2505.21736
May 29, 2025 at 6:01 AM
training protocol, e.g., with a specific loss and data augmentations (soft equivariance), or (2) to ignore equivariance and infer it only implicitly. However, both options have limitations: soft equivariance requires a priori knowledge about relevant [2/5 of https://arxiv.org/abs/2506.03914v1]
June 5, 2025 at 6:09 AM
This jokey set of posts made some people who worked on equivariance kind of upset with me (even though I myself have around 10 papers on the topic, though more on the methodological side than the science side). :))
May 19, 2025 at 11:34 PM
higher proportion of equivariance compared to other methods. Lastly, we demonstrate that GENEOnet is on average robust to perturbations arising from molecular dynamics. These results collectively serve as proof of the explainability, trustworthiness, [4/5 of https://arxiv.org/abs/2503.09199v1]
March 13, 2025 at 6:02 AM
Md Fahim Anjum
Advancing Diffusion Models: Alias-Free Resampling and Enhanced Rotational Equivariance
https://arxiv.org/abs/2411.09174
November 15, 2024 at 7:01 AM
William Cook: Beyond Coordinates: Meta-Equivariance in Statistical Inference https://arxiv.org/abs/2504.10667 https://arxiv.org/pdf/2504.10667 https://arxiv.org/html/2504.10667
April 16, 2025 at 6:06 AM
arXiv:2505.03176v1 Announce Type: new
Abstract: Current self-supervised algorithms mostly rely on transformations such as data augmentation and masking to learn visual representations. This is achieved by inducing invariance or equivariance with [1/8 of https://arxiv.org/abs/2505.03176v1]
May 7, 2025 at 5:58 AM
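(To make the invariance/equivariance distinction in the abstract above concrete, here is a minimal sketch; the function names are hypothetical and not from the paper. An invariance objective asks the representation to ignore an augmentation, while an equivariance objective asks it to transform along with the input by a known feature-space action.)

```python
# A minimal sketch of the two self-supervised objectives; names are hypothetical.
import torch

def invariance_loss(f, x, t):
    """|| f(t(x)) - f(x) ||^2 : the representation should ignore the transform t."""
    return ((f(t(x)) - f(x)) ** 2).mean()

def equivariance_loss(f, x, t, rho_t):
    """|| f(t(x)) - rho_t(f(x)) ||^2 : the representation should carry t along
    as a known action rho_t in feature space."""
    return ((f(t(x)) - rho_t(f(x))) ** 2).mean()

# Toy usage: f linear, t a sign flip, rho_t the same sign flip in feature space.
f = torch.nn.Linear(4, 4)
x = torch.randn(8, 4)
t = lambda z: -z
rho_t = lambda z: -z
print(invariance_loss(f, x, t), equivariance_loss(f, x, t, rho_t))
```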
We argue that downstream task evaluation cannot easily uncover these behaviors, and that equivariance, invariance, and disentanglement are critical components that enable a variety of real-world applications like retrieval, generation, and style transfer.
May 13, 2025 at 2:15 PM
Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim
FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection. (arXiv:2401.06159v1 [cs.CV])
http://arxiv.org/abs/2401.06159
January 15, 2024 at 3:02 AM
due to the mathematical complexity of methods used to exploit these symmetries, which often rely on representation theory, a bespoke concept in differential geometry and group theory. In this work, we show that the same equivariance can be achieved [3/6 of https://arxiv.org/abs/2505.21736v1]
May 29, 2025 at 6:01 AM
Daowei Wang, Mian Wu, Haojin Zhou
The Equivariance Criterion in a Linear Model for Fixed-X Cases
https://arxiv.org/abs/2204.10488
November 5, 2024 at 6:00 AM
Fascinating conversation! My knowledge of neuroscience is limited, but this discussion reminds me of a big curiosity of mine: learning symmetries (e.g. GDL).

Could the result on privileged axes help us understand whether symmetries are architectural (inbuilt equivariance?) or learned / distributed?
August 30, 2024 at 10:30 AM
Our theory is tailored to models that have strong locality biases, such as CNNs. However, we find that our theory (bottom rows) is still moderately predictive for a simple diffusion model *with* self-attention layers (top rows), which explicitly break equivariance/locality.
December 31, 2024 at 4:00 PM
We argue that hard-coding equivariance should be considered a way to make the network *faster*. Indeed, parametrizing equivariant linear layers in the Fourier domain turns them block-diagonal. For us, Fourier means having features that are invariant and features that change sign under flopping.
February 10, 2025 at 7:35 AM
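(Back-of-envelope arithmetic for why block-diagonal means faster; these are the editor's numbers, not the thread's benchmark. A dense d×d linear layer costs d² multiply-accumulates per token, while two independent (d/2)×(d/2) blocks cost 2·(d/2)² = d²/2, i.e., half the compute for the same feature width.)

```python
# Rough MAC count for the block-diagonal speedup; a sketch, not the paper's benchmark.
d = 512
dense_macs = d * d                   # one dense d x d layer
block_macs = 2 * (d // 2) ** 2       # two (d/2) x (d/2) blocks, one per parity sector
print(dense_macs, block_macs, dense_macs / block_macs)  # 262144 131072 2.0
```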
Check out the preprint on the arXiv! arxiv.org/abs/2502.05169 Comments and discussion welcome. Also, check out the equivariance-scaling work by Brehmer et al., where they studied total FLOPs over training in a rigid-body interaction task, which partially inspired this work. bsky.app/profile/joha...
You might not be surprised to hear that equivariance improves data efficiency.

But did you expect equivariant models to also be more *compute*-efficient? Learning symmetries from data costs FLOPs!

arxiv.org/abs/2410.23179
With Sönke Behrends, @pimdh.bsky.social, and @taco-cohen.bsky.social.

5/6
February 10, 2025 at 7:35 AM