Xingyue Huang
@hxyscott.bsky.social
Machine Learning PhD at the University of Oxford, working on Graph Representation Learning, Knowledge Graphs, Foundation Models, and Beyond
7/🧵

In conclusion, MOTIF’s ability to integrate arbitrary motifs elevates KGFMs, achieving superior performance in practice! Our rigorous theoretical expressiveness study paves the way for designing even more advanced KGFMs (coming soon)! 🚀🔍✨
February 24, 2025 at 7:32 PM
6/🧵

Moreover, we plot the similarity matrices for different MOTIF instances and observe that richer motifs indeed yield more distinguishable relation embeddings, significantly boosting link prediction performance 📈
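
For readers who want to make this kind of figure themselves, here is a minimal sketch (not the authors' code): it plots the cosine-similarity matrix of a set of relation embeddings, with a random matrix standing in for the relation representations a MOTIF instance would produce.

```python
# Minimal sketch: plot the pairwise cosine similarities of relation embeddings.
# The random matrix below is only a stand-in for the relation representations
# produced by a trained KGFM.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
num_relations, dim = 12, 64
rel_emb = rng.normal(size=(num_relations, dim))  # hypothetical embeddings

# Cosine similarity: L2-normalise the rows, then take the Gram matrix.
normed = rel_emb / np.linalg.norm(rel_emb, axis=1, keepdims=True)
sim = normed @ normed.T

fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(sim, vmin=-1.0, vmax=1.0, cmap="coolwarm")
ax.set_xlabel("relation index")
ax.set_ylabel("relation index")
ax.set_title("Relation-embedding similarity")
fig.colorbar(im, ax=ax, label="cosine similarity")
plt.show()
```
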
5/🧵

Empirically, we conduct synthetic experiments to validate the expressive-power hierarchy of MOTIF! 🚀

We show that simply adding 3-ary patterns boosts zero-shot performance across 54 KGs! 📊
4/🧵

Theoretically, we show that MOTIF admits a hierarchy of provably more expressive instances, obtained by adding further (higher-order) motifs!

For example, MOTIF with 2-path motifs (e.g., ULTRA) cannot distinguish between r₃(u, v₁) and r₃(u, v₂), but when equipped with 3-path motifs, it can!
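
A toy way to see the gap (my own illustration with made-up relation names, not the construction from the paper): compare relations by the relation sequences that can follow them along directed paths. In the tiny KG below, ra and rb look identical through 2-paths but are separated once 3-paths are considered.

```python
# Toy sketch, not the paper's construction: characterise each relation by the
# relation sequences that can follow it along directed paths of length k.
from collections import defaultdict

# Hypothetical KG as (head, relation, tail) triples.
triples = [
    ("a1", "ra", "a2"), ("a2", "rc", "a3"), ("a3", "rd", "a4"),
    ("b1", "rb", "b2"), ("b2", "rc", "b3"), ("b3", "re", "b4"),
]

out_edges = defaultdict(list)  # node -> [(relation, successor node), ...]
for h, r, t in triples:
    out_edges[h].append((r, t))

def suffix_profile(rel, k):
    """Relation sequences of length k-1 that can follow a `rel` edge."""
    profile = set()

    def walk(node, seq):
        if len(seq) == k - 1:
            profile.add(tuple(seq))
            return
        for nxt_rel, nxt_node in out_edges[node]:
            walk(nxt_node, seq + [nxt_rel])

    for h, r, t in triples:
        if r == rel:
            walk(t, [])
    return profile

for k in (2, 3):
    print(k, suffix_profile("ra", k), suffix_profile("rb", k))
# k=2: both profiles are {('rc',)}          -> ra and rb look the same
# k=3: {('rc', 'rd')} vs {('rc', 're')}     -> 3-path motifs tell them apart
```
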
3/🧵

We introduce MOTIF, a general framework for KGFMs capable of integrating arbitrary graph motifs and capturing existing KGFMs such as ULTRA and InGram.
2/🧵

Most existing KGFMs limit themselves to binary motifs (i.e., interactions between only two relations), ignoring higher-order interactions among, e.g., three relations, which leads to a loss of expressive power.
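
To make this concrete, here is a small self-contained check (a toy example of mine, not from the paper): two KGs whose relations share nodes pairwise in exactly the same ways, yet only one of them contains a triangle motif over three relations, so any model that only looks at binary interactions cannot tell them apart.

```python
# Toy illustration: two KGs with identical pairwise (binary) relation
# interactions, only one of which contains a triangle motif over r1, r2, r3.
from collections import defaultdict

kg_triangle = [("u", "r1", "x"), ("x", "r2", "v"), ("u", "r3", "v")]
kg_scattered = [
    ("a", "r1", "b"), ("b", "r2", "c"),  # r1, r2 share node b (tail/head)
    ("d", "r1", "e"), ("d", "r3", "f"),  # r1, r3 share node d (head/head)
    ("g", "r2", "h"), ("i", "r3", "h"),  # r2, r3 share node h (tail/tail)
]

def binary_view(triples):
    """Unordered relation pairs sharing a node, with the roles the node plays."""
    roles = defaultdict(set)  # node -> {(relation, 'head' or 'tail'), ...}
    for h, r, t in triples:
        roles[h].add((r, "head"))
        roles[t].add((r, "tail"))
    pairs = set()
    for node_roles in roles.values():
        for rel_a, pos_a in node_roles:
            for rel_b, pos_b in node_roles:
                if rel_a < rel_b:
                    pairs.add((rel_a, pos_a, rel_b, pos_b))
    return pairs

def has_triangle(triples, ra, rb, rc):
    """Is there a triangle x --ra--> y --rb--> z together with x --rc--> z?"""
    edges = set(triples)
    return any(
        (x, rc, z) in edges
        for x, rel1, y in triples if rel1 == ra
        for y2, rel2, z in triples if rel2 == rb and y2 == y
    )

print(binary_view(kg_triangle) == binary_view(kg_scattered))  # True
print(has_triangle(kg_triangle, "r1", "r2", "r3"),
      has_triangle(kg_scattered, "r1", "r2", "r3"))           # True False
```
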
1/🧵

🔗 www.arxiv.org/abs/2502.13339

Pre-trained KGFMs predict missing links on any KG, even one with entirely new entities and relations! This is achieved by learning over shared patterns (aka motifs) across different types of relations. The choice of motifs defines the model's expressivity.
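
To make the "any new entities/relations" point concrete, here is a minimal sketch (my own simplification, not the paper's model): relation representations are derived from the structure of the test KG itself rather than looked up in a fixed embedding table, so unseen relation names pose no problem. An actual KGFM such as ULTRA or MOTIF obtains its relation representations by message passing over a motif-based relation graph; simple degree statistics stand in for that here.

```python
# Minimal sketch: structural relation features computed on the fly from any KG.
# Simple degree statistics stand in for the motif-based relation representations
# an actual KGFM would compute; the point is that nothing depends on a fixed
# entity or relation vocabulary.
from collections import defaultdict

def relation_features(triples):
    """Map each relation to (num_triples, num_distinct_heads, num_distinct_tails)."""
    count = defaultdict(int)
    heads = defaultdict(set)
    tails = defaultdict(set)
    for h, r, t in triples:
        count[r] += 1
        heads[r].add(h)
        tails[r].add(t)
    return {r: (count[r], len(heads[r]), len(tails[r])) for r in count}

# A brand-new KG with entity and relation names never seen during pre-training:
new_kg = [
    ("alice", "works_at", "acme"),
    ("bob", "works_at", "acme"),
    ("acme", "located_in", "london"),
]
print(relation_features(new_kg))
# {'works_at': (2, 2, 1), 'located_in': (1, 1, 1)}
```
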