Emile van Krieken
emilevankrieken.com
@emilevankrieken.com
Post-doc @ VU Amsterdam, prev University of Edinburgh.
Neurosymbolic machine learning, generative models, commonsense reasoning

🍇 Happy to share GRAPES 🍇

Make GNNs work on large graphs by learning an expressive and adaptive sampler 🚀

Excellent work led by Taraneh Younesian, now in TMLR!
openreview.net/forum?id=QI0...
July 23, 2025 at 11:36 AM
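As a rough illustration of what a learned, adaptive sampler does (a toy sketch with made-up names, not the GRAPES architecture): score each neighbour and keep a small weighted sample instead of the full neighbourhood.

```python
import math
import random

def sample_neighbors(neighbors, scores, k, rng):
    # Gumbel-top-k: adding Gumbel noise to each score and keeping the top k
    # draws a weighted sample without replacement.
    def key(n):
        gumbel = -math.log(-math.log(rng.random()))
        return scores[n] + gumbel
    return sorted(neighbors, key=key, reverse=True)[:k]

rng = random.Random(0)
neighbors = list(range(10))
scores = {n: float(n) for n in neighbors}  # higher score => more likely kept
kept = sample_neighbors(neighbors, scores, k=3, rng=rng)
assert len(kept) == 3 and set(kept) <= set(neighbors)
```

In a learned sampler the scores would come from a trainable model rather than being fixed, so the sampling distribution adapts during training.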
Hmm, this rule seems to have been changed very recently? web.archive.org/web/20250514...
This is what it said just before the deadline, so people submitting to NeurIPS couldn't have known about the mandatory physical presence 🤔
July 17, 2025 at 2:16 PM
This is a new experience: people using AI to overhype your paper 🫢

@neuralnoise.com
May 22, 2025 at 6:30 AM
On benchmarks for visual path planning and problems affected by reasoning shortcuts, we achieve state-of-the-art performance while showing scalability and strong calibration!
May 21, 2025 at 10:57 AM
We derive a principled loss function using a new result that may be of broader interest to the masked diffusion community.

Our loss function reconstructs the output dimensions separately, decomposing the problem!
May 21, 2025 at 10:57 AM
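As a sketch of the decomposition above (a hypothetical toy loss, not the paper's exact objective): reconstructing each output dimension separately turns the loss into a sum of per-dimension cross-entropy terms.

```python
import math

def per_dim_loss(probs, target):
    # probs[d][v] is the model's probability that output dimension d equals v;
    # reconstructing each dimension separately gives a sum of per-dimension
    # cross-entropy terms, one per output dimension.
    return sum(-math.log(probs[d][target[d]]) for d in range(len(target)))

probs = [{0: 0.9, 1: 0.1}, {0: 0.2, 1: 0.8}]
target = [0, 1]
loss = per_dim_loss(probs, target)
assert abs(loss - (-math.log(0.9) - math.log(0.8))) < 1e-9
```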
How to go beyond independence? This is where powerful discrete diffusion models come in!

Every unmasking step is an independent distribution… Neurosymbolic Diffusion Models (NeSyDMs) exploit this for efficient probabilistic reasoning while still capturing global dependencies 🚀
May 21, 2025 at 10:57 AM
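A minimal sketch of how chaining independent unmasking steps can still yield a correlated joint distribution (toy model with illustrative names only, not the NeSyDM architecture):

```python
import random

MASK = "?"

def model(x):
    # Hypothetical toy "denoiser": independent per-position distributions,
    # conditioned on the tokens already revealed in x. Position 0 is uniform;
    # position 1 copies position 0 once it is visible.
    p0 = {0: 0.5, 1: 0.5}
    p1 = {0: 0.5, 1: 0.5} if x[0] == MASK else {x[0]: 1.0, 1 - x[0]: 0.0}
    return [p0, p1]

def unmask_one(x, pos):
    # Fill a single masked position by sampling from its per-position
    # distribution (each step factorises over positions).
    vals, ws = zip(*model(x)[pos].items())
    x = list(x)
    x[pos] = random.choices(vals, ws)[0]
    return x

random.seed(0)
samples = []
for _ in range(1000):
    x = [MASK, MASK]
    x = unmask_one(x, 0)  # step 1: reveal position 0
    x = unmask_one(x, 1)  # step 2: reveal position 1, conditioned on position 0
    samples.append(tuple(x))

# Each step is independent per position, yet the chained process produces a
# correlated joint: the two tokens always agree, which no single factorised
# step could represent.
assert all(a == b for a, b in samples)
```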
Classical NeSy methods assume conditional independence over concepts! But in arxiv.org/abs/2404.08458 (ICML’24) we show many issues with this assumption, from challenging optimisation to poor calibration 👇

x.com/EmilevanKrie...
May 21, 2025 at 10:57 AM
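A toy sketch of why the conditional-independence assumption above is limiting (a hypothetical example, not from the paper): two perfectly anti-correlated concepts cannot be represented by any factorised distribution.

```python
import itertools

# Target distribution over two binary concepts: perfectly anti-correlated.
target = {(0, 1): 0.5, (1, 0): 0.5, (0, 0): 0.0, (1, 1): 0.0}

# A conditionally independent model factorises p(c1, c2 | x) = p(c1|x) p(c2|x).
# Matching the marginals forces p(c1=1|x) = p(c2=1|x) = 0.5 ...
p1 = p2 = 0.5
indep = {(a, b): (p1 if a else 1 - p1) * (p2 if b else 1 - p2)
         for a, b in itertools.product([0, 1], repeat=2)}

# ... which puts probability 0.25 on the impossible outcomes (0,0) and (1,1):
# no independent model can capture the anti-correlation.
assert indep[(0, 0)] == 0.25 and indep[(1, 1)] == 0.25
```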
Neurosymbolic (NeSy) predictors use neural networks to extract high-level, symbolic concepts from an input, and then use a symbolic program to reason over these concepts.

The concept extraction phase is completely unsupervised: No access to true concepts during training!
May 21, 2025 at 10:57 AM
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀

Read more 👇
May 21, 2025 at 10:57 AM
Looks like they made it more lenient!
- Better matching (I hope!)
- 100 -> 400 papers to bid on
May 18, 2025 at 3:37 PM
@neuripsconf.bsky.social This search functionality also seems disabled
May 17, 2025 at 1:10 PM
The cover of the Daily Star is even better IMO!

@rohit-saxena.bsky.social @aryopg.bsky.social @neuralnoise.com
March 17, 2025 at 5:10 PM
Hahahaha they even have some kind of case for attaching it to a phone 🤣 Just asking to be sherlocked by Apple/Google.
January 10, 2025 at 10:11 AM
This is a nice example of LLM slop collapse haha
December 27, 2024 at 4:27 AM
🦕 Neurosymbolic heuristic search goes brrr 🚀

But let's make it more efficient than $3200 per query 🙃

@fchollet.bsky.social
December 21, 2024 at 8:54 AM
Today @jumelet.bsky.social is defending his PhD 🎉
He's also organising a nice workshop on the intersection of LLMs and linguistics today. He just kicked it off with this work with, among others, @wzuidema.bsky.social on using LLMs to improve our understanding of adjective order
December 10, 2024 at 9:04 AM
Wish I had known about this much earlier: in VS Code (or Cursor 😉) you can install the 'Data Wrangler' extension to inspect the values of your PyTorch tensors in a nice UI with summary statistics.

Just right-click on a tensor in your variables view!
December 2, 2024 at 2:21 PM
I don't think it got the memo
November 30, 2024 at 9:30 AM
This has now been corrected: the municipality would ban it if it weren't cancelled.
November 13, 2024 at 3:30 PM
Indeed! I treat my Obsidian vault as a knowledge graph with typed links for the metadata; the template is created by my BibTeX importer :)
November 10, 2024 at 11:22 AM