Neurosymbolic Machine Learning, Generative Models, commonsense reasoning
https://www.emilevankrieken.com/
Read more 👇
We are organizing this #ICLR2026 workshop to bring these three communities together and learn from each other 🦾🔥💥
Submission deadline: 30 Jan 2026
When? 26 or 27 April 2026
Where? Rio de Janeiro, Brazil
Call for papers, schedule, invited speakers & more:
ucrl-iclr26.github.io
Looking forward to your submissions!
🔨 Rank bottlenecks in KGEs:
At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social's new work on how rank bottlenecks limit knowledge graph embeddings
arxiv.org/abs/2506.22271
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (see the LLM literature) to break the limitation at a low parameter cost. 🔨
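Not the paper's actual model: a minimal NumPy sketch of the Mixture of Softmaxes idea borrowed from the LLM literature, with all names and shapes (`W_components`, `W_prior`, the entity matrix `E`) made up for illustration. The point it demonstrates: a single softmax over linear scores `E @ h` is rank-limited by the embedding dimension, while a convex mixture of K softmaxes can break that bottleneck with only K extra projections.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_of_softmaxes(h, W_components, W_prior, E):
    """Score all entities for a query embedding h.

    h: (d,) query embedding; E: (n, d) entity embeddings.
    W_components: (K, d, d) per-component projections (hypothetical).
    W_prior: (K, d) projections producing the mixture weights.
    Each component's softmax is rank-limited by d; their convex
    combination need not be, which is the rank-bottleneck fix.
    """
    pi = softmax(W_prior @ h)  # (K,) mixture weights, sum to 1
    component_probs = np.stack(
        [softmax(E @ (W_k @ h)) for W_k in W_components]
    )  # (K, n): one distribution over entities per component
    return pi @ component_probs  # (n,) mixed distribution

rng = np.random.default_rng(0)
d, n, K = 8, 100, 3
h = rng.normal(size=d)
E = rng.normal(size=(n, d))
W_c = rng.normal(size=(K, d, d))
W_p = rng.normal(size=(K, d))
p = mixture_of_softmaxes(h, W_c, W_p, E)
```

Since each component is a proper distribution and the mixture weights sum to one, `p` is again a valid distribution over the n entities, at a parameter cost of only K extra d×d projections.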
Check out insightful talks from @guyvdb.bsky.social, @tkipf.bsky.social and D. McGuinness on our new YouTube channel www.youtube.com/@NeSyconfere...
Topics include using symbolic reasoning for LLMs, and object-centric representations!
We introduce Vision-Language Programs (VLP), a neuro-symbolic framework that combines the perceptual power of VLMs with program synthesis for robust visual reasoning.
🧠 Neurosymbolic Diffusion Models: Thursday's poster session.
Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138
We invented a new algorithm analysis framework to find out.
Fear not 💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a problem of the past!
Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still a use for syntactic structure!)
arxiv.org/abs/2504.02768
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data
We extend this effort to 45 new languages!
We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!
📝 arxiv.org/abs/2412.06014
We will present Neurosymbolic Diffusion Models in San Diego 🇺🇸 and Copenhagen 🇩🇰 thanks to @euripsconf.bsky.social 🇪🇺
Read more 👇
Arxiv: arxiv.org/abs/2508.18853
#statssky #mlsky
Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data, at $3.80/hour.
Well, several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
🤖📈🧠
We will see you 1-4 September in another beautiful place: Lisbon! 🇵🇹
nesy-ai.org/conferences/...
Do objects need a special treatment for generative AI and world models? 🤔 We will hear on Monday!
We will start with an exciting and timely keynote by
@guyvdb.bsky.social
on "Symbolic Reasoning in the Age of Large Language Models" 👀
📆 Full conference schedule: 2025.nesyconf.org/schedule/
We got lost in latent space. Join us 👇
@ulrikeluxburg.bsky.social
Michael Jordan
Emtiyaz Khan
Amnon Shashua
More details to come as we get closer to December, so stay tuned
And so I keep paying more attention to the few people who still write their own original thoughts (without LLMs; you can tell how repetitive it gets with them)
Consider becoming a sponsor and support us in making this inaugural event a success! Sponsorship packages are available and can be further customized if necessary.
Reach out if you have any questions ❔
Info: eurips.cc/become-spons...
Also, the take "there is nothing new with deep learning, neural nets were around 50 years ago" is like "nothing new with humans, amino acids were around 4.4 billion years ago".
That said, this is a tiny improvement (~1%) over o1-preview, which was released almost one year ago. Have long-context models hit a wall?
Accuracy of human readers is >97%... Long way to go!
It still became a polarization machine.
Then we tried six interventions to fix social media.
The results were… not what we expected.
arxiv.org/abs/2508.03385