Bastian Grossenbacher-Rieck
@pseudomanifold.topology.rocks
Dad · Geometry ∩ Topology ∩ Machine Learning
Professor at University of Fribourg

While geometry & topology may not save the world, they may well save something that is homotopy-equivalent to it.

🏠 https://bastian.rieck.me/
🏫 https://aidos.group
This is an exciting collaboration with @limbeckkat.bsky.social, Lydia Mezrag, and Guy Wolf, supported by @tum.de, @helmholtzmunich.bsky.social, @mila-quebec.bsky.social, @umontreal.ca, and @unifr.bsky.social.

🖖

🧵6/6
November 11, 2025 at 3:48 PM
Want to learn more?

🌟Check out our paper, code, blog post, and video!🌟

📜 Paper: arxiv.org/abs/2506.11700
🖥️ Code: github.com/aidos-lab/ma...
📄 Blog: aidos.group/blog/magedge/
📽️ Video: youtu.be/uQts_HR1uSA

🧵5/6
[Video: Geometry-Aware Edge Pooling for Graph Neural Networks (youtu.be)]
November 11, 2025 at 3:48 PM
And it just works!

Our pooling methods perform well across tasks and…

🏆 …reach top classification and regression performance.
🔥 …retain this robust performance across pooling ratios.
✨ …preserve graph structure and spectral properties.

🧵4/6
November 11, 2025 at 3:48 PM
But…how?

🔍 We contract the most redundant edges, i.e. those that contribute least to the graph’s structural diversity, as measured by the magnitude or the spread of the graph.
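For intuition, here is a minimal, deliberately naive Python sketch of that idea, assuming the standard definition of graph magnitude (the sum of the entries of Z⁻¹, where Z_ij = exp(−t·d(i,j)) for shortest-path distances d): score each edge by how little its contraction changes the magnitude, and contract the lowest-scoring one. The helper functions and the brute-force scoring loop are my illustration, not the paper's (far more efficient) MagEdgePool algorithm.

```python
# Naive sketch: rank edges by how little their contraction changes
# graph magnitude. Brute force; purely illustrative, NOT the paper's
# MagEdgePool algorithm.
import networkx as nx
import numpy as np

def magnitude(G, t=1.0):
    """Magnitude of G at scale t: sum of the entries of Z^{-1},
    where Z_ij = exp(-t * d(i, j)) for shortest-path distances d."""
    nodes = list(G.nodes)
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[lengths[u][v] for v in nodes] for u in nodes])
    Z = np.exp(-t * D)
    return float(np.linalg.inv(Z).sum())

def most_redundant_edge(G):
    """The edge whose contraction changes the magnitude the least."""
    base = magnitude(G)
    def score(edge):
        H = nx.contracted_edge(G, edge, self_loops=False)
        return abs(magnitude(H) - base)
    return min(G.edges, key=score)

G = nx.karate_club_graph()
print(most_redundant_edge(G))  # a good candidate for contraction
```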

🧵3/6
November 11, 2025 at 3:48 PM
Why do we need structure-aware pooling, anyway?

🔮Our methods, MagEdgePool and SpreadEdgePool, faithfully preserve the original graphs’ geometry.

Alternative pooling layers destroy graph structure to varying extents.

🧵2/6
November 11, 2025 at 3:48 PM
This is great because LID is comparatively cheap to monitor and fully agnostic with respect to datasets and models. What's not to love?

📜: arxiv.org/abs/2506.01034
💻: github.com/aidos-lab/To...

🧵5/n
[Paper: Less is More: Local Intrinsic Dimensions of Contextual Language Models (arxiv.org)]
November 3, 2025 at 3:32 PM
We also consider other scenarios like grokking or the exhaustion of training capabilities (for dialog state tracking) and find that monitoring LID is always a helpful indicator of what's going on.

🧵4/n
November 3, 2025 at 3:32 PM
As a rule of thumb, we observe that when the mean LID drops and stabilizes, performance is typically improving. By contrast, when the mean LID drops and then rises again, this is a warning sign of overfitting.
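Purely as an illustration, one could turn that rule of thumb into a crude automated check like the sketch below; the "drop then rebound" test and its threshold are my own invention, not taken from the paper.

```python
# Illustrative heuristic: flag a possible overfitting warning when a
# trace of mean-LID values drops and then rebounds noticeably.
import numpy as np

def lid_rebound_warning(mean_lids, rebound_frac=0.1):
    """True if mean LID rebounds by more than `rebound_frac` of its
    total drop after hitting its minimum (threshold is illustrative)."""
    lids = np.asarray(mean_lids, dtype=float)
    i_min = int(lids.argmin())
    drop = lids[0] - lids[i_min]
    if drop <= 0 or i_min == len(lids) - 1:
        return False  # no drop yet, or still falling
    rebound = lids[i_min:].max() - lids[i_min]
    return rebound > rebound_frac * drop

print(lid_rebound_warning([12.0, 9.5, 8.0, 8.1, 8.0]))   # False: stabilized
print(lid_rebound_warning([12.0, 9.5, 8.0, 9.0, 10.5]))  # True: rebound
```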

🧵3/n
November 3, 2025 at 3:32 PM
Our core idea is to calculate LID on contextual token embeddings. We then study how the mean LID shifts over training, giving us a way to summarize the learnt geometry without requiring any labels.
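For concreteness, here is a minimal sketch of one standard way to compute this, using the classic Levina–Bickel k-nearest-neighbour MLE for local intrinsic dimension on a matrix of token embeddings; whether the paper uses this exact estimator is an assumption on my part.

```python
# Sketch: mean local intrinsic dimension (LID) of token embeddings via
# the Levina-Bickel k-NN MLE. One plausible estimator; the paper may
# use a different one.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mean_lid(embeddings, k=20):
    """Mean over points of LID(x) = [(1/(k-1)) sum_j log(r_k / r_j)]^{-1},
    where r_1 <= ... <= r_k are distances to the k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)  # column 0 is the point itself
    r = dists[:, 1:]                      # k neighbour distances, ascending
    lids = (k - 1) / np.log(r[:, -1:] / r[:, :-1]).sum(axis=1)
    return float(lids.mean())

# Sanity check: points on a 2-D subspace of R^10 should give LID close to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(mean_lid(X))
```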

🧵2/n
November 3, 2025 at 3:32 PM
Excellent work with Ernst Röell, supported by @tum.de, @helmholtzmunich.bsky.social, and @unifr.bsky.social.

Thanks to the AC and the reviewers for helpful comments. The paper benefited a lot from this, and we included a "changelog" to show what we did!

5/5
October 29, 2025 at 8:19 AM
It's amazingly fast (< 𝟏𝐡 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐩𝐢𝐩𝐞𝐥𝐢𝐧𝐞, 𝐭𝐫𝐚𝐢𝐧𝐞𝐝 𝐨𝐧 𝐚 𝐠𝐚𝐦𝐢𝐧𝐠 𝐥𝐚𝐩𝐭𝐨𝐩) and high-quality.

Another Topological Deep Learning success story, coming soon to #NeurIPS2025!

🖥️ github.com/aidos-lab/in...
📜 arxiv.org/pdf/2410.18987

4/5
October 29, 2025 at 8:19 AM
What we gain is a 𝐬𝐭𝐚𝐛𝐥𝐞 and 𝐞𝐱𝐩𝐫𝐞𝐬𝐬𝐢𝐯𝐞 latent space that permits simple interpolation.

3/5
October 29, 2025 at 8:19 AM
Our 𝕀𝕟𝕟𝕖𝕣 ℙ𝕣𝕠𝕕𝕦𝕔𝕥 𝕋𝕣𝕒𝕟𝕤𝕗𝕠𝕣𝕞 turns 𝐬𝐞𝐭𝐬 into 𝐢𝐦𝐚𝐠𝐞𝐬 by evaluating a dataset from different directions and counting things. The cool thing: We can learn to 𝐫𝐞𝐯𝐞𝐫𝐬𝐞 the process using simple MLPs!
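Here is one plausible reading of "evaluating a dataset from different directions and counting things", sketched in Python: draw unit directions, project the point set onto each, and count how many points fall below each of a grid of thresholds, yielding a (directions × thresholds) image. Whether this matches the paper's Inner Product Transform exactly is an assumption on my part.

```python
# Sketch: turn a point set into an image by counting, for each random
# direction d and threshold t, the points x with <x, d> <= t. My guess
# at the construction, not the paper's exact definition.
import numpy as np

def inner_product_image(points, n_dirs=64, n_thresh=64, seed=0):
    """Return an (n_dirs, n_thresh) image of cumulative point counts."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, points.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    proj = points @ dirs.T                               # (n_points, n_dirs)
    thresh = np.linspace(proj.min(), proj.max(), n_thresh)
    # image[d, t] = #{x : <x, dir_d> <= thresh_t}
    return (proj[:, :, None] <= thresh[None, None, :]).sum(axis=0).astype(float)

cloud = np.random.default_rng(1).normal(size=(500, 3))
print(inner_product_image(cloud).shape)  # (64, 64)
```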

2/5
October 29, 2025 at 8:19 AM
You publish papers that can be criticized. I submit to arXiv under an anonymous handle. We are not the same.

(/s)
September 30, 2025 at 7:27 AM