🔹 Theoretical analysis on why GCON works.
🔹 Ablations on efficiency and generalizability — GCON is surprisingly transferable 🚀
Check out the paper for more, or even better, catch us at our @logconference.bsky.social oral at 14:30 EST/19:30 GMT! 🧵[10/10]
🔹 GCON beats other GNNs & GFlowNet-based solvers
🔹 Outperforms the (time-budgeted) Gurobi optimizer on Max Cut by 45+ edges!
🔹 Much faster inference than GFlowNet & Gurobi
GCON is both versatile & powerful. 🧵[9/10]
The resulting per-node probability vector p is then used for:
(a) Self-supervised loss computation
(b) Task-specific decoding to satisfy task constraints
(both sketched below) 🧵[8/10]
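A minimal sketch of (a) and (b), assuming a Max Cut setup where p is a per-node probability tensor. The loss below is the standard differentiable Max Cut relaxation, not necessarily GCON's exact objective, and the function names are hypothetical:

```python
import torch

def maxcut_loss(p, edge_index):
    # (a) Self-supervised loss: negative expected cut size, fully
    # differentiable in p, so no labels are needed.
    src, dst = edge_index  # [2, num_edges]
    cut_prob = p[src] * (1 - p[dst]) + (1 - p[src]) * p[dst]
    return -cut_prob.sum()

def decode_maxcut(p):
    # (b) Task-specific decoding: threshold p into a hard side
    # assignment; Max Cut is unconstrained, so thresholding suffices.
    return (p > 0.5).long()
```

For constraint-heavy tasks like Max Clique, the decoding step would instead add nodes greedily in order of p while checking feasibility.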
[L]: High-frequency features sharply localize the true clique.
[R]: Low-pass filters blur the boundary, diffusing signal onto nodes outside the clique. 🧵[7/10]
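To see the effect concretely, here is a toy reproduction of the figure's point, using a hypothetical 5-clique planted on a 20-node ring and a lazy random-walk diffusion operator (a sketch, not the paper's setup):

```python
import torch

# Toy example (hypothetical): a 5-clique planted on a 20-node ring.
N, K = 20, 5
A = torch.zeros(N, N)
for i in range(N):                       # ring edges
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
for i in range(K):                       # clique edges among nodes 0..4
    for j in range(i + 1, K):
        A[i, j] = A[j, i] = 1.0

deg = A.sum(dim=1)
P = 0.5 * (torch.eye(N) + A / deg)       # lazy random-walk diffusion
x = torch.zeros(N, 1)
x[:K] = 1.0                              # indicator of the planted clique

low_pass = torch.linalg.matrix_power(P, 8) @ x       # heavy smoothing
band_pass = (torch.linalg.matrix_power(P, 2)
             - torch.linalg.matrix_power(P, 4)) @ x  # wavelet-style filter

# low_pass leaks mass well past node 4 (the blurred boundary in [R]);
# band_pass stays concentrated around the clique, as in [L].
print(low_pass.squeeze())
print(band_pass.squeeze())
```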
We then apply GCON blocks, which pair multi-scale wavelet filters derived from geometric scattering with conventional (low-pass) GNN aggregation.
Why the multi-scale filters? 🧵[6/10]
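For reference, a rough sketch of the wavelet construction in the geometric-scattering style, assuming the lazy random-walk operator P = ½(I + AD⁻¹) and wavelets Ψⱼ = P^(2^(j−1)) − P^(2^j); this illustrates the filter family rather than GCON's exact implementation:

```python
import torch

def scattering_wavelets(A, x, num_scales=4):
    """Band-pass node features via lazy random-walk diffusion wavelets."""
    N = A.size(0)
    deg = A.sum(dim=1).clamp(min=1)
    P = 0.5 * (torch.eye(N) + A / deg)   # P = (I + A D^-1) / 2
    diffused = [x]
    for _ in range(2 ** num_scales):     # powers P^t x up to t = 2^J
        diffused.append(P @ diffused[-1])
    # Wavelet at scale j: Psi_j x = P^(2^(j-1)) x - P^(2^j) x
    return [diffused[2 ** (j - 1)] - diffused[2 ** j]
            for j in range(1, num_scales + 1)]
```

Each Ψⱼ is band-pass: it responds to variation at scale 2ʲ, precisely the structure that pure low-pass aggregation smooths away.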
1️⃣ Hybrid filter bank: Combines GNN aggregation with wavelet filters to capture intricate graph geometry.
2️⃣ Localized attention: Dynamically weights the filters per node for flexibility (sketch below).
3️⃣ Self-supervised losses: Circumvent the need for labels. 🧵[5/10]
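And a minimal sketch of item 2️⃣, per-node attention over a filter bank (the module name and tensor shapes are assumptions for illustration, not the paper's API):

```python
import torch
import torch.nn as nn

class LocalizedFilterAttention(nn.Module):
    """Per-node softmax attention over the outputs of K filters."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each filter's output per node

    def forward(self, filter_outputs):
        # filter_outputs: [K, N, dim] -- one [N, dim] slice per filter.
        scores = self.score(filter_outputs).squeeze(-1)  # [K, N]
        weights = torch.softmax(scores, dim=0)           # normalize over filters, per node
        return (weights.unsqueeze(-1) * filter_outputs).sum(dim=0)  # [N, dim]
```

The softmax runs over the filter axis independently for each node, which is what makes the weighting "localized": two nodes can rely on entirely different filters.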
2️⃣ CO graphs usually lack informative node features.
3️⃣ NP-hardness limits labeled data availability, making supervised learning infeasible. 🧵[4/10]