Emile van Krieken
emilevankrieken.com
@emilevankrieken.com
Post-doc @ VU Amsterdam, prev University of Edinburgh.
Neurosymbolic Machine Learning, Generative Models, commonsense reasoning

https://www.emilevankrieken.com/
Pinned
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀

Read more 👇
Reposted by Emile van Krieken
X is hiring a creative writing specialist at $40 an hour to make Grok better at writing and a true LOL at the qualifications
January 30, 2026 at 8:14 PM
Reposted by Emile van Krieken
New open source: cuthbert 🐛

State space models with all the hotness: (temporally) parallelisable, JAX, Kalman, SMC
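The post doesn't show cuthbert's API, so as a rough illustration of the Kalman-filter primitive it mentions, here is a generic NumPy sketch of filtering in a linear-Gaussian state space model (all names and signatures are illustrative, not cuthbert's):

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    """Kalman filter for a linear-Gaussian state space model:
        x_t = A x_{t-1} + w_t,   w_t ~ N(0, Q)   (transition)
        y_t = C x_t + v_t,       v_t ~ N(0, R)   (emission)
    Returns the filtered means and covariances at each time step."""
    mu, P = mu0, P0
    means, covs = [], []
    for y in ys:
        # Predict one step ahead through the dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update with the new observation.
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        means.append(mu)
        covs.append(P)
    return np.stack(means), np.stack(covs)
```

cuthbert advertises temporally parallelisable versions of this recursion (via associative scans in JAX) plus SMC; the sequential loop above is only the textbook baseline.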
January 30, 2026 at 4:26 PM
Reposted by Emile van Krieken
Best conference with the best people and in the best place 😎 😜

Also the submission deadline is conveniently one month later than #ICML2026, just in case you needed it 😅
Submissions are now open for UAI 2026 in Amsterdam. Deadline is February 25th, 2026 (23:59 AoE). Call for papers here: www.auai.org/uai2026/call...
Uncertainty in Artificial Intelligence
www.auai.org
January 27, 2026 at 1:20 PM
Reposted by Emile van Krieken
🦕The 20th conference on Neurosymbolic AI will be in Lisbon, Portugal, September 1-4, 2026!

The CFP is out: 2026.nesyconf.org/call-for-pap... with two phases:
🚨 Deadline 1: Feb 24 (abstract), Mar 3 (full)
🚨 Deadline 2: Jun 9 (abstract), Jun 16 (full)

#neurosymbolic #NeSy2026
Call for Papers
NeSy AI is the association for neurosymbolic Artificial Intelligence. It runs NeSy, the premier international conference on neural-symbolic learning and reasoning, yearly since 2005, with a focus on n...
2026.nesyconf.org
January 20, 2026 at 3:35 PM
Reposted by Emile van Krieken
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
January 7, 2026 at 5:28 PM
Good call! I maintain a list of Neurosymbolic folks on Bsky, see here 🦕
go.bsky.app/RMJ8q3i
Seems like a bunch of people are joining Bluesky recently after the whole X situation, so maybe it's worth highlighting this again. Also, if you feel like you should be on this list, send me a message!
I made a starter pack for Bayesian ML and stats (mostly to see how this starter pack business works).

Let me know whom I missed!

go.bsky.app/2Bqtn6T
January 13, 2026 at 9:36 AM
Reposted by Emile van Krieken
I am recruiting 1 PhD student (4-year position) and 2 postdocs (3-year positions) to work on logic and machine learning at the University of Helsinki:
- PhD 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 1: jobs.helsinki.fi/job/Helsinki...
- Postdoc 2: jobs.helsinki.fi/job/Helsinki...
January 10, 2026 at 3:01 PM
Reposted by Emile van Krieken
#XAI, #neurosymbolic methods (#nesy), and #causal #representation #learning (#CRL) all care about learning #interpretable #concepts, but in different ways.

We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other 🦾🔥💥

Submission deadline: 30 Jan 2026
📣 Announcing the Workshop on **Unifying Concept Representation Learning** (UCRL) at ICLR’26 (@iclr-conf.bsky.social ).

When? 26 or 27 April 2026
Where? Rio de Janeiro, Brazil

Call for papers, schedule, invited speakers & more:

ucrl-iclr26.github.io

Looking forward to your submissions!
Home
Workshop on Unifying Concept Representation Learning (ICLR 2026)
ucrl-iclr26.github.io
December 22, 2025 at 4:41 PM
Reposted by Emile van Krieken
Emile will present our work on Knowledge Graph Embeddings at EurIPS' Salon des Refusés on Friday!
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (see the LLM literature) to break the limitations at a low parameter cost. 🔨
And finally #3

🔨 Rank bottlenecks in KGEs:

At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social 's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
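For intuition on the fix described above: a single softmax over a bilinear score caps the rank of the log-probability matrix at roughly the embedding dimension, and a Mixture of Softmaxes (Yang et al.'s technique from the language modelling literature) mixes several softmax heads in probability space to lift that cap. A minimal NumPy sketch — shapes, names, and the `tanh` projections are illustrative, not the paper's actual model:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, E, W_pi, W_k):
    """Score a query embedding h (d,) against entity embeddings E (n, d)
    with K softmax heads. A single softmax over h @ E.T is rank-limited
    by d; mixing K softmaxes in probability space breaks that limit at
    the cost of only K extra projection matrices."""
    pi = softmax(h @ W_pi)                        # (K,) mixture weights
    hk = np.tanh(np.einsum('d,kde->ke', h, W_k))  # (K, d) head-specific queries
    comps = softmax(hk @ E.T, axis=-1)            # (K, n) per-head distributions
    return pi @ comps                             # (n,) mixed distribution over entities
```

Because the mixture happens after each head's softmax, the result is still a valid probability distribution, but its log is no longer a low-rank bilinear form.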
December 3, 2025 at 4:12 PM
Reposted by Emile van Krieken
Recordings of the NeSy 2025 keynotes are now available! 🎥

Check out insightful talks from @guyvdb.bsky.social, @tkipf.bsky.social and D McGuinness on our new Youtube channel www.youtube.com/@NeSyconfere...

Topics include symbolic reasoning for LLMs and object-centric representations!
NeSy conference
The NeSy conference studies the integration of deep learning and symbolic AI, combining neural network-based statistical machine learning with knowledge representation and reasoning from symbolic appr...
www.youtube.com
November 29, 2025 at 8:21 AM
Reposted by Emile van Krieken
🚨 New paper alert!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.
November 30, 2025 at 1:32 AM
Almost off to @euripsconf.bsky.social in Copenhagen 🇩🇰 🇪🇺! I'll present 3 posters:

🧠 Neurosymbolic Diffusion Models: Thursday's poster session.

Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138
November 28, 2025 at 5:31 PM
Reposted by Emile van Krieken
The simplex algorithm is super efficient. 80 years of experience says it runs in linear time. Nobody can explain _why_ it is so fast.

We invented a new algorithm analysis framework to find out.
Beyond Smoothed Analysis: Analyzing the Simplex Method by the Book
Narrowing the gap between theory and practice is a longstanding goal of the algorithm analysis community. To further progress our understanding of how algorithms work in practice, we propose a new alg...
arxiv.org
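The linked paper is theoretical, but as a concrete reminder of what the simplex method actually does, here is a minimal textbook tableau implementation (Dantzig's most-negative-coefficient pivot rule) — a sketch for small, well-behaved problems, not the analysis framework from the paper:

```python
import numpy as np

def simplex(c, A, b):
    """Minimal tableau simplex for: maximise c @ x  s.t.  A @ x <= b, x >= 0,
    assuming b >= 0 (so the slack variables give an initial basic feasible
    point) and that the problem is bounded. No anti-cycling safeguards."""
    m, n = A.shape
    # Tableau: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))
    while True:
        j = int(np.argmin(T[-1, :-1]))      # entering column: most negative cost
        if T[-1, j] >= -1e-12:
            break                           # no improving direction: optimal
        ratios = np.where(T[:m, j] > 1e-12, T[:m, -1] / T[:m, j], np.inf)
        i = int(np.argmin(ratios))          # leaving row: minimum ratio test
        T[i] /= T[i, j]                     # pivot so column j becomes a unit vector
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]
```

Each iteration moves between vertices of the feasible polytope; the paper's question is why, in practice, the number of such pivots grows only about linearly in the problem size.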
October 27, 2025 at 1:43 AM
Reposted by Emile van Krieken
Want to use your favourite #NeSy model but afraid of the reasoning shortcuts?🫣

Fear not💪🏻In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks and the reasoning shortcuts will be a problem of the past!
November 6, 2025 at 10:40 AM
Reposted by Emile van Krieken
I'm in Suzhou to present our work on MultiBLiMP, Friday @ 11:45 in the Multilinguality session (A301)!

Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still use for syntactic structure!)

arxiv.org/abs/2504.02768
November 6, 2025 at 7:08 AM
Reposted by Emile van Krieken
🌍Introducing BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data!

LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data

We extend this effort to 45 new languages!
October 15, 2025 at 10:53 AM
Reposted by Emile van Krieken
Unfortunately, our submission to #NeurIPS didn’t go through with (5,4,4,3). But because I think it’s an excellent paper, I decided to share it anyway.

We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!

📝 arxiv.org/abs/2412.06014
Post-hoc Probabilistic Vision-Language Models
Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descripti...
arxiv.org
September 18, 2025 at 8:34 PM
Accepted to NeurIPS! 😁

We will present Neurosymbolic Diffusion Models in San Diego 🇺🇸 and Copenhagen 🇩🇰 thanks to @euripsconf.bsky.social 🇪🇺
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀

Read more 👇
September 19, 2025 at 9:01 AM
Reposted by Emile van Krieken
A really neat paper thinking through what "Identifiability" means, how we can determine it, and implications for modeling.

Arxiv: arxiv.org/abs/2508.18853

#statssky #mlsky
September 16, 2025 at 1:49 AM
Reposted by Emile van Krieken
The most expensive part of training is the data, not the compute.
Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data at $3.80/hour.
Well, several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
🤖📈🧠
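A back-of-the-envelope version of the argument above. Every number here is an illustrative assumption, not a figure from Kandpal & Raffel's analysis — only the $3.80/hour wage comes from the post:

```python
# All constants below are hypothetical, chosen only to show the shape of
# the calculation: pay humans to write a pretraining corpus from scratch,
# then compare with an assumed compute budget.
TOKENS = 10e12              # assumed pretraining corpus size: 10T tokens
WORDS_PER_TOKEN = 0.75      # rough tokens -> words conversion
WORDS_PER_HOUR = 500        # assumed human writing throughput
WAGE = 3.80                 # $/hour, the rate quoted in the post

hours = TOKENS * WORDS_PER_TOKEN / WORDS_PER_HOUR
data_cost = hours * WAGE

COMPUTE_COST = 100e6        # assumed training-compute budget: $100M

print(f"data:    ${data_cost:,.0f}")
print(f"compute: ${COMPUTE_COST:,.0f}")
print(f"ratio:   {data_cost / COMPUTE_COST:,.0f}x")
```

Even with these made-up inputs the data side comes out orders of magnitude above the compute side, which is the post's point.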
September 12, 2025 at 2:20 PM
Organising NeSy 2025 together with some of my favourite people (@e-giunchiglia.bsky.social @lgilpin.bsky.social @pascalhitzler.bsky.social) really was a dream. Super proud of what we've achieved! See you next year 😘
🦕NeSy 2025 is officially closed! Thanks again to everyone for attending this successful edition 😊

We will see you 1-4 September in another beautiful place: Lisbon! 🇵🇹
nesy-ai.org/conferences/...
September 11, 2025 at 3:47 AM
Reposted by Emile van Krieken
Our second keynote speaker is @tkipf.bsky.social, who will discuss object-centric representation learning!

Do objects need special treatment in generative AI and world models? 🤔 We'll find out on Monday!
September 6, 2025 at 8:51 PM
Reposted by Emile van Krieken
It is almost time to welcome you all in Santa Cruz! 🦕

We will start with an exciting and timely keynote by
@guyvdb.bsky.social
on "Symbolic Reasoning in the Age of Large Language Models" 👀

📆 Full conference schedule: 2025.nesyconf.org/schedule/
September 5, 2025 at 1:28 PM
Reposted by Emile van Krieken
Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.

We got lost in latent space. Join us 👇
September 3, 2025 at 1:40 PM
Reposted by Emile van Krieken
We are immensely excited to announce 4 leading researchers from our community as the #EurIPS keynote speakers🎉

@ulrikeluxburg.bsky.social

Michael Jordan

Emtiyaz Khan

Amnon Shashua

More details to come as we get closer to December, so stay tuned
August 29, 2025 at 7:31 AM