Luigi Acerbi
@lacerbi.bsky.social
Assoc. Prof. of Machine & Human Intelligence | Univ. Helsinki & Finnish Centre for AI (FCAI) | Bayesian ML & probabilistic modeling | https://lacerbi.github.io/
Very interesting work on emerging object-binding representations in vision transformers by @kordinglab.bsky.social arxiv.org/abs/2510.24709

This might seem an oddly specific property, but the good ol' binding problem reflects a fundamental primitive of cognition, epistemology, you name it.
Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?
Object binding, the brain's ability to bind the many features that collectively represent an object into a coherent whole, is central to human cognition. It groups low-level perceptual features into h...
arxiv.org
November 3, 2025 at 10:21 AM
I answered in the thread, but it might be of wider interest.

My current advice is to use the stderr of the median (bootstrapped), with a separate metric for reliability if needed.

The latter is less standard - we have been reporting e.g. the 80% performance quantile and its stderr (bootstrapped).
What's with machine learning researchers always reporting standard deviation instead of standard error? My understanding is that the error bars are typically used to back up inferential claims about significant differences between sample means (although statistical tests are rare, another problem).
October 26, 2025 at 12:52 PM
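(A concrete footnote: here is a minimal numpy sketch of the kind of bootstrap I mean above. The 80% quantile and the toy scores are just illustrative choices, not a fixed recipe.)

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(x, stat=np.median, n_boot=10_000):
    """Bootstrapped standard error of a statistic such as the median."""
    x = np.asarray(x)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))  # resample with replacement
    return stat(x[idx], axis=1).std(ddof=1)

q80 = lambda s, axis: np.quantile(s, 0.80, axis=axis)  # reliability-style metric

scores = rng.lognormal(size=200)  # stand-in for per-seed performance scores
print(f"median       = {np.median(scores):.3f} +/- {bootstrap_se(scores):.3f}")
print(f"80% quantile = {np.quantile(scores, 0.80):.3f} +/- {bootstrap_se(scores, stat=q80):.3f}")
```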
Reposted by Luigi Acerbi
Claude, GPT-5, Gemini, and Kimi: "write me a horror story done entirely in the dedications to six books (you can give me the title and author of each book as well)"

ChatGPT and Claude did pretty well in different ways. Kimi did the usual (sounds good but meaning falls apart).
October 26, 2025 at 2:56 AM
**Submission deadline extended to Oct 25!**

You still have time to submit a short abstract on amortized and simulation-based inference (SBI), neural processes, prior-fitted networks (PFNs), amortized experimental design, foundation models, and related applications.

Call: sites.google.com/view/amortiz...
October 17, 2025 at 11:02 AM
Reminder that submissions to the Amortized Inference workshop at the ELLIS UnConference are still open until **Oct 16, 2025**.

Only a short abstract (½ page), so go ahead!

Workshop: Dec 2, 2025, co-located with EurIPS.
Website: sites.google.com/view/amortiz...
October 14, 2025 at 10:16 AM
Do you like to train neural networks to solve all your nasty probabilistic inference and sequential design problems?
Do you love letter salads such as NPs, PFNs, NPE, SBI, BED?

Then no place is better than the Amortized ProbML workshop we are organizing at #ELLIS UnConference.
October 1, 2025 at 1:57 PM
Reposted by Luigi Acerbi
And new paper out: Pleias 1.0: the First Family of Language Models Trained on Fully Open Data

How we train an open-everything model in a new pretraining environment with releasable data (Common Corpus) and an open-source framework (Nanotron from HuggingFace).

www.sciencedirect.com/science/arti...
September 27, 2025 at 11:44 AM
Reposted by Luigi Acerbi
@gershbrain.bsky.social and I have a new paper in PLOS Comp Bio!

We study how two cognitive constraints—action consideration set size & policy complexity—interact in context-dependent decision making, and how humans exploit their synergy to reduce behavioral suboptimality.

osf.io/preprints/ps...
OSF
osf.io
August 19, 2025 at 3:56 AM
Reposted by Luigi Acerbi
Posterior predictive checking of binary, categorical and many ordinal models with bar graphs is useless. Even the simplest models without covariates usually have such intercept terms that category-specific probabilities are learned perfectly. Can you guess which model, 1 or 2, is misspecified? 1/4
August 13, 2025 at 2:34 PM
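(My own toy illustration of the point above, not code from the thread: an intercept-only Bernoulli model is blatantly misspecified, yet its marginal posterior-predictive "bar graph" quantities match the data essentially perfectly.)

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process: covariate-dependent Bernoulli (logistic in x).
n = 1000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 2.0 * x))))

# Misspecified model: intercept-only Bernoulli that ignores x completely.
# Its fitted P(y=1) is simply the observed frequency.
p_hat = y.mean()
y_rep = rng.binomial(1, p_hat, size=(4000, n))  # predictive replicates

# The marginal frequencies match almost exactly, so a bar graph looks fine:
print(f"observed   freq(y=1) = {y.mean():.3f}")
print(f"replicated freq(y=1) = {y_rep.mean():.3f} +/- {y_rep.mean(axis=1).std():.3f}")
# A PPC that conditions on x (e.g., frequencies within bins of x) would
# expose the misspecification immediately.
```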
Remember to follow the official rebuttal guide.
July 27, 2025 at 7:08 AM
Superlative initiative! Massive kudos to the organising team for getting this going in such a short time!
EurIPS is coming! 📣 Mark your calendar for Dec. 2-7, 2025 in Copenhagen 📅

EurIPS is a community-organized conference where you can present accepted NeurIPS 2025 papers. It is endorsed by @neuripsconf.bsky.social and @nordicair.bsky.social, and co-developed by @ellis.eu

eurips.cc
July 17, 2025 at 5:13 AM
Reposted by Luigi Acerbi
Holy shit
Proposed NOAA budget zeros out ALL climate laboratories and cooperative institutes.

GFDL, NSSL, GML, etc.

This appears to also end the US greenhouse gas sampling network, including at Mauna Loa, the oldest continuous carbon dioxide monitoring site on Earth.

www.commerce.gov/sites/defaul...
June 30, 2025 at 11:56 PM
Reposted by Luigi Acerbi
The singularity is awesome
June 28, 2025 at 5:19 PM
New blog post!

You can train a neural network to *just do things* -- such as *predict the optimum of a function*. But how do you get a big training dataset of "functions with known optima"?

Read the blog post to find out! (link 👇)
June 26, 2025 at 2:29 PM
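(For the curious, a minimal sketch of one way to manufacture such a dataset. This is just an illustration of the general trick of *planting* the optimum; the blog post may well use a different or smarter construction, e.g., sampling smooth functions conditioned on their optimum. All names below are made up for the example.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_function_with_known_optimum(dim=1, n_features=16):
    """Draw a random smooth function f with a *planted* global maximum.

    Trick: sample a random smooth g (random Fourier features), pick an
    optimum location x_star and value f_star, and define
        f(x) = f_star - |g(x) - g(x_star)|,
    so f(x) <= f_star everywhere, with equality at x = x_star
    (x_star attains the max, though it may not be the unique maximizer).
    """
    W = rng.normal(size=(n_features, dim))                  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)      # random phases
    a = rng.normal(size=n_features) / np.sqrt(n_features)   # random amplitudes
    g = lambda x: np.cos(x @ W.T + b) @ a

    x_star = rng.uniform(-1.0, 1.0, size=dim)  # planted maximizer
    f_star = rng.normal()                       # planted maximum value
    g_star = g(x_star[None, :])[0]
    f = lambda x: f_star - np.abs(g(x) - g_star)
    return f, x_star, f_star

# One training example: observed function values plus the known-optimum label.
f, x_star, f_star = sample_function_with_known_optimum(dim=1)
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = f(X)
print(f"planted optimum {f_star:.3f} at {x_star}, max observed {y.max():.3f}")
```

One (X, y) set plus the (x_star, f_star) label is a single training example; repeat millions of times and you have your dataset. Caveat: the |·| construction puts a kink at the optimum, so smoother recipes are preferable in practice.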
Reposted by Luigi Acerbi
If you’re not attending Trump’s military exploitation birthday party in DC you might want to consider joining one of the “No Kings” events happening across the country on June 14. My guest today @ezralevin.bsky.social from Indivisible explains how you can get involved.
youtu.be/iJC455bmkbo
THAT PARADE COSTS HOW MUCH?
YouTube video by Jim Acosta
youtu.be
May 28, 2025 at 2:38 AM
Reposted by Luigi Acerbi
In anti-authoritarian struggles, criticizing the regime for their blatant corruption is often one of the most important ways to mobilize the public and break through to regime supporters.

The Trump regime is the most corrupt in US history—it’s worth repeating ad nauseam
May 28, 2025 at 1:26 AM
That's a pity. I am not a fan of this place either due to the lack of interesting AI/ML discussions, and that's the only reason I still set foot in Mordor.

A shiver runs down my spine at the thought that LinkedIn might be a viable alternative.
After consideration, I will post occasionally, but heavily censor what I share compared to other sites.

I tried making the transition, but talking about AI here is just really fraught in ways that are tough to mitigate & make it hard to have good discussions (the point of social!). Maybe it changes
May 26, 2025 at 8:22 AM
Reposted by Luigi Acerbi
sometimes you need to exorcise images from your mind
May 22, 2025 at 1:21 PM
Reposted by Luigi Acerbi
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning 🚀

Read more 👇
May 21, 2025 at 10:57 AM
Reposted by Luigi Acerbi
🔥 DENIRO AT CANNES: “In my country we’re fighting like hell for the democracy we once took for granted… Art looks for truth, embraces diversity — that’s why we are a threat to autocrats and fascists.” 🇺🇸
May 13, 2025 at 11:40 PM
Reposted by Luigi Acerbi
If you add "also Cthulhu-y" to the prompt, the results are pretty great.
May 9, 2025 at 4:58 AM
Reposted by Luigi Acerbi
I had to do it
April 18, 2025 at 6:54 PM
TIL so many missed opportunities for biology and neuroscience textbooks.

He should have had a guest appearance in the League of Extraordinary Gentlemen.

(The man on the right is Santiago Ramón y Cajal.)
That he was an old-timey bodybuilder is the absolute top of my favourite neuroscience-related facts; him flexing is the greatest scientist photo I've ever seen.

Related: Oliver Sacks repping a 600 lb squat... Incredible.
May 8, 2025 at 11:51 AM
@huangdaolang.bsky.social will be presenting our work tomorrow on pretrained/amortized transformers for all sorts of inference tasks. Go say hi!
#AISTATS2025

Poster session 1
Place: Hall A-E
Time: Sat 3 May, 3–6 p.m.
Poster number: 23
1/ Introducing ACE (Amortized Conditioning Engine)! Our new AISTATS 2025 paper presents a transformer framework that unifies tasks from image completion to BayesOpt & simulator-based inference under *one* probabilistic conditioning approach. It's Bayes all the way down!
May 2, 2025 at 6:27 AM
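(For anyone new to this area, here is my own toy sketch of the *general* amortized/neural-process idea this family of methods builds on. To be clear, this is NOT the ACE architecture or code; the model, the sinusoid task family, and all hyperparameters are invented for illustration. The point: train a transformer across many simulated tasks to map context observations directly to predictive distributions at query points.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyAmortizedRegressor(nn.Module):
    """Toy neural-process-style predictor: encode (x, y) context tokens and
    x-only query tokens with a transformer, output a Gaussian per query.
    (Full self-attention here for simplicity; real implementations mask.)"""
    def __init__(self, d=64):
        super().__init__()
        self.embed_ctx = nn.Linear(2, d)   # (x, y) context tokens
        self.embed_qry = nn.Linear(1, d)   # x-only query tokens
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 2)        # predictive mean and log-variance

    def forward(self, xc, yc, xq):
        ctx = self.embed_ctx(torch.cat([xc, yc], -1))
        qry = self.embed_qry(xq)
        h = self.encoder(torch.cat([ctx, qry], 1))[:, xc.shape[1]:]
        mu, logvar = self.head(h).chunk(2, -1)
        return mu, logvar

def sample_tasks(batch=32, n=32):
    """Random sinusoid tasks: the 'simulator' generating training data."""
    x = torch.rand(batch, n, 1) * 4 - 2
    a, p = torch.rand(batch, 1, 1) + 0.5, torch.rand(batch, 1, 1) * 3.14
    y = a * torch.sin(x + p) + 0.05 * torch.randn_like(x)
    return x[:, :16], y[:, :16], x[:, 16:], y[:, 16:]  # context / query split

model = TinyAmortizedRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):  # quick demo; real training runs far longer
    xc, yc, xq, yq = sample_tasks()
    mu, logvar = model(xc, yc, xq)
    # Gaussian negative log-likelihood (up to a constant)
    loss = (0.5 * (logvar + (yq - mu) ** 2 / logvar.exp())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```

After training, inference on a new task is a single forward pass — that is the "amortized" part.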
Reposted by Luigi Acerbi
AskHistorians has joined with thirty other research- and academia-focused subreddits to issue this statement condemning recent and ongoing efforts to undermine scholarly research and intellectual freedom in the United States.

Please share widely!

www.reddit.com/r/AskHistori...
From the AskHistorians community on Reddit
Explore this post and more from the AskHistorians community
www.reddit.com
April 29, 2025 at 1:41 PM