Jacob Bamberger
@jacobbam.bsky.social
ML PhD student at the University of Oxford. Interested in Geometric Deep Learning.
Presenting at Poster Session 4 East:
📅Wednesday, July 16th
🕓4:30-7:00 PM
📈#E-2802
🔑 Takeaways:
✅ Long-range behavior can be formalized & measured
✅ Measuring it reveals new insights into models & datasets
🚀 Time to rethink evaluation: not just accuracy, but how models solve tasks
Why does this matter?

"Long-range" is often just a dataset intuition or model label.

We offer a measurable way to:
💡Understand models
🧪Test benchmarks
🦮Guide model design
🚀Go beyond performance gaps
We reassess LRGB, the go-to long-range benchmark, by checking whether each model's measured range correlates with its performance, as we would expect for truly long-range tasks (a sketch of this check follows the list below).

Surprisingly:
❌ Peptides-func: negative correlation, suggesting it is not long-range
✅ VOC: positive correlation, suggesting it is long-range
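As a minimal sketch of what this check could look like (illustrative numbers, not results from the paper; Spearman rank correlation is one reasonable choice of statistic):

```python
from scipy.stats import spearmanr

# Hypothetical per-model results on one LRGB task:
# measured range and test score for each trained model.
measured_range = [1.8, 2.4, 3.1, 4.0, 5.2]
test_score = [0.61, 0.64, 0.66, 0.70, 0.73]

# For a truly long-range task, models with larger measured range
# should tend to score higher, i.e. a positive rank correlation.
rho, p = spearmanr(measured_range, test_score)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```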
We validate our framework in three steps:

👷Construct synthetic tasks with analytically known range (see the sketch after this list)
💯Show that trained GNNs recover the true task range
🔬Use range as a proxy to evaluate real benchmarks
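To make the first step concrete, here is a minimal sketch (my own construction, not necessarily the paper's) of a synthetic node task whose range is k by design: each node's target depends only on nodes exactly k hops away.

```python
import torch

def hop_distances(adj: torch.Tensor) -> torch.Tensor:
    """All-pairs shortest-path hop distances for a boolean adjacency matrix."""
    n = adj.size(0)
    dist = torch.full((n, n), float("inf"))
    dist.fill_diagonal_(0.0)
    reached = torch.eye(n, dtype=torch.bool)
    cur = torch.eye(n, dtype=torch.bool)
    for d in range(1, n):
        cur = (cur.float() @ adj.float()) > 0  # pairs joined by a length-d walk
        new = cur & ~reached                   # first reached at step d => distance d
        if not new.any():
            break
        dist[new] = float(d)
        reached |= new
    return dist

def k_hop_target(x: torch.Tensor, adj: torch.Tensor, k: int) -> torch.Tensor:
    """Synthetic node task with range k by construction: y_v averages the
    features of nodes exactly k hops from v, and depends on nothing else."""
    mask = hop_distances(adj) == k                       # [n, n] boolean
    counts = mask.sum(dim=1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
    return (mask.float() @ x) / counts
```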
Our measure uses the model's Jacobian (for node-level tasks) and Hessian (for graph-level tasks) to quantify input-output influence. It works with any distance metric and supports analysis at every granularity: node, graph, and dataset.
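A rough sketch of the node-level case, assuming (my simplification, not the paper's exact definition) that the measure is a distance-weighted average of normalized Jacobian influence; `model` and its call signature are placeholders:

```python
import torch

def jacobian_range(model, x, adj, dist, v):
    """Hedged sketch of a Jacobian-based range for node v: the average
    distance from v to the input nodes that influence its output,
    weighted by influence. `dist` can come from any distance metric;
    assumes a connected graph so all distances are finite."""
    x = x.clone().requires_grad_(True)
    out = model(x, adj)                              # [n, d_out]
    grad = torch.autograd.grad(out[v].sum(), x)[0]   # [n, d_in] sensitivities
    influence = grad.abs().sum(dim=1)                # collapse feature dims
    weights = influence / influence.sum()            # normalize to a distribution
    return (weights * dist[v]).sum()                 # expected distance of influence
```

For graph-level tasks, the Hessian would play the analogous role; only the node-level case is sketched here.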
We propose a formal range measure for any graph operator, derived from natural axioms (locality, additivity, homogeneity), and show it is the unique measure satisfying them.

This measure applies to both node- and graph-level tasks, and across architectures.
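In notation of my own (not necessarily the paper's), one candidate measure consistent with these properties looks like this:

```latex
% Hedged sketch: a candidate range measure for a graph operator F with
% Jacobian blocks J_{vu} = \partial F(X)_v / \partial X_u.
\[
  \rho_v(F) \;=\; \sum_{u} d(u, v)\,
    \frac{\lVert J_{vu} \rVert}{\sum_{w} \lVert J_{vw} \rVert}
\]
% Locality: if F only mixes nodes within radius r of v, then \rho_v(F) <= r.
% Homogeneity: \rho_v(cF) = \rho_v(F), since the scalar c cancels.
% The paper derives the unique measure satisfying its full axiom set;
% this display is only meant to show the shape such a measure can take.
```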
"Long-range tasks" are a central yet vague challenge in graph learning.

What makes a task long-range? How can we tell whether a model actually captures long-range interactions?