Neurosymbolic Machine Learning, Generative Models, commonsense reasoning
https://www.emilevankrieken.com/
Read more 👇
We invented a new algorithm analysis framework to find out.
Fear not 💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a problem of the past!
Come check it out if you're interested in multilingual linguistic evaluation of LLMs (there will be parse trees on the slides! There's still a use for syntactic structure!)
arxiv.org/abs/2504.02768
LLMs learn from vastly more data than humans ever experience. BabyLM challenges this paradigm by focusing on developmentally plausible data.
We extend this effort to 45 new languages!
We show how to efficiently apply Bayesian learning in VLMs, improve calibration, and do active learning. Cool stuff!
📝 arxiv.org/abs/2412.06014
We will present Neurosymbolic Diffusion Models in San Diego 🇺🇸 and Copenhagen 🇩🇰 thanks to @euripsconf.bsky.social 🇪🇺
Read more 👇
Arxiv: arxiv.org/abs/2508.18853
#statssky #mlsky
Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data at $3.80/h.
Well, several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
🤖📈🧠
We will see you 1-4 September in another beautiful place: Lisbon! 🇵🇹
nesy-ai.org/conferences/...
Do objects need a special treatment for generative AI and world models? 🤔 We will hear on Monday!
We will start with an exciting and timely keynote by
@guyvdb.bsky.social
on "Symbolic Reasoning in the Age of Large Language Models" 👀
📆 Full conference schedule: 2025.nesyconf.org/schedule/
We got lost in latent space. Join us 👇
@ulrikeluxburg.bsky.social
Michael Jordan
Emtiyaz Khan
Amnon Shashua
More details to come as we get closer to December, so stay tuned
And so I keep paying more attention to the fewer people who still write their original thoughts (without LLMs - you can tell how repetitive it gets with them)
Consider becoming a sponsor and support us in making this inaugural event a success! Sponsorship packages are available and can be further customized if necessary.
Reach out if you have any questions ❔
Info: eurips.cc/become-spons...
Also, the take "there is nothing new with deep learning, neural nets were around 50 years ago" is like saying "there's nothing new with humans, amino acids were around 4.4 billion years ago".
That said, this is a tiny improvement (~1%) over o1-preview, which was released almost one year ago. Have long-context models hit a wall?
Accuracy of human readers is >97%... Long way to go!
It still became a polarization machine.
Then we tried six interventions to fix social media.
The results were… not what we expected.
arxiv.org/abs/2508.03385
Not only does it contain most of my work, but there is plenty of brand-new content:
publikationen.sulb.uni-saarland.de/handle/20.50...
🧵1/4
Check it out for
⭐️ gorgeous figures (with new additions!) on topology, algebra, and geometry in the field
⭐️ broken-down tables for easy reading
⭐️ accessible text, additional refs, and more
iopscience.iop.org/article/10.1...