Nicola Branchini
@nicolabranchini.bsky.social
🇮🇹 Stats PhD @ University of Edinburgh 🏴󠁧󠁢󠁳󠁣󠁴󠁿

@ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮

🤔💭 Monte Carlo, probabilistic ML.

Interested in many things related to probML; keen to learn about applications in climate/science.

https://www.branchini.fun/about
I didn't even mention the concept of "compiti delle vacanze", holiday homework ...
November 3, 2025 at 10:54 AM
If the reviewer / meta-reviewer does a bad job or is not a good fit for the paper, no structure will fix that, of course.
And the pool of reviewers will (I guess) overlap significantly with NeurIPS/ICML/etc.
Still, it's worth trying to have healthier incentives and structure, as in TMLR
October 21, 2025 at 2:19 PM
and forces the reviewer to focus on requested changes.
Of course, I also like TMLR for its spirit, which for me is about correctness and details rather than significance or importance; those are typically in the eye of the (powerful) beholder.
October 21, 2025 at 9:45 AM
Reposted by Nicola Branchini
24. arxiv.org/abs/2510.00389
'Zero variance self-normalized importance sampling via estimating equations'
- Art B. Owen

Even with optimal proposals, achieving zero variance with SNIS-type estimators requires some innovative thinking. This work explains how an optimisation formulation can apply.
October 4, 2025 at 4:03 PM
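For context on the post above: self-normalized importance sampling (SNIS) estimates an expectation under a target known only up to a normalizing constant, by reweighting samples from a proposal and dividing by the sum of weights. A minimal sketch (illustrative only; the Gaussian target/proposal, `f(x) = x**2`, and the helper names are my assumptions, not the paper's estimating-equations construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate mu = E_p[f(X)] with p = N(0,1) and f(x) = x**2 (true mu = 1),
# given samples from a proposal q = N(0, 2) and only an *unnormalized*
# target density -- the setting where SNIS is needed.

def p_unnorm(x):
    # unnormalized N(0,1) density (normalizing constant unknown to the method)
    return np.exp(-0.5 * x**2)

def q_pdf(x):
    # proposal density: N(0, sd=2)
    return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))

def snis(n):
    x = rng.normal(0.0, 2.0, size=n)
    w = p_unnorm(x) / q_pdf(x)            # unnormalized importance weights
    return np.sum(w * x**2) / np.sum(w)   # self-normalized estimator

est = snis(100_000)
# est is close to 1.0 for large n
```

Unlike ordinary IS, where the proposal proportional to p·|f − μ| yields zero variance, the ratio form of SNIS means no single proposal makes its variance vanish, which is the obstacle the linked paper addresses via an optimisation formulation.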
"Conditional Causal Discovery"

(don't be fooled by the title :D )

openreview.net/forum?id=6IY...
October 4, 2025 at 4:01 PM
"Estimating the Probabilities of Rare Outputs in Language Models"

arxiv.org/abs/2410.13211
October 4, 2025 at 4:01 PM
"Stochastic Optimization with Optimal Importance Sampling"

arxiv.org/abs/2504.03560
October 4, 2025 at 4:01 PM
"I esteem more the finding of a truth, even about a slight matter, than long disputing about the greatest questions without attaining any truth at all."
August 8, 2025 at 5:20 PM