Yuli Slavutsky
@yulislavutsky.bsky.social
Stats Postdoc at Columbia, @bleilab.bsky.social
Statistical ML, Generalization, Uncertainty, Empirical Bayes
https://yulisl.github.io/
Uncertainty estimation fails under distribution shifts. Why? Partly because in statistics, even Bayesian statistics, we treat x as given. But intuitively, different data make different models plausible. For reliable uncertainty, we need to account for this explicitly. Come chat with me about it tomorrow at my poster!
NeurIPS Poster: Quantifying Uncertainty in the Presence of Distribution Shifts (NeurIPS 2025)
neurips.cc
December 3, 2025 at 1:00 AM
Reposted by Yuli Slavutsky
Hello!

We will be presenting Estimating the Hallucination Rate of Generative AI at NeurIPS. Come if you'd like to chat about epistemic uncertainty for In-Context Learning, or uncertainty more generally. :)

Location: East Exhibit Hall A-C #2703
Time: Friday @ 4:30
Paper: arxiv.org/abs/2406.07457
December 12, 2024 at 6:13 PM
Reposted by Yuli Slavutsky
The circuit hypothesis proposes that LLM capabilities emerge from small subnetworks within the model. But how can we actually test this? 🤔

joint work with @velezbeltran.bsky.social, @maggiemakar.bsky.social, @anndvision.bsky.social, @bleilab.bsky.social, Adria @far.ai, Achille, and Caro
December 10, 2024 at 6:36 PM
I'm on my way to #NeurIPS2024. On Friday I'm going to present my latest paper with Yuval Benjamini. The gist is in the comments, and come chat with me to hear more!
December 10, 2024 at 10:07 PM
Reposted by Yuli Slavutsky
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
December 2, 2024 at 9:48 PM
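The core idea in the Treeffuser post above, combining gradient-boosted trees with diffusion models, can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the actual package API (see the linked repo for that): a single tree ensemble is trained to predict the noise added by a forward diffusion over y given (x, noisy y, t), and ancestral sampling of the reverse process then yields draws from the predictive distribution p(y | x). The data, noise schedule, and hyperparameters here are all toy choices.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy heteroscedastic data: the noise level depends on x, so a point
# prediction alone would miss the varying uncertainty.
n = 2000
X = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(X[:, 0]) + (0.1 + 0.3 * (X[:, 0] > 0)) * rng.standard_normal(n)

# Variance-preserving forward process: y_t = sqrt(abar_t) * y + sqrt(1 - abar_t) * eps.
T = 20
alphas = np.linspace(0.999, 0.9, T)
abar = np.cumprod(alphas)

# Train one gradient-boosted ensemble to predict eps from (x, y_t, t).
t_idx = rng.integers(0, T, size=n)
eps = rng.standard_normal(n)
y_t = np.sqrt(abar[t_idx]) * y + np.sqrt(1 - abar[t_idx]) * eps
feats = np.column_stack([X, y_t, t_idx / T])
denoiser = GradientBoostingRegressor(n_estimators=200, max_depth=3)
denoiser.fit(feats, eps)

def sample(x, n_samples=200):
    """Ancestral sampling of the reverse diffusion for a single scalar input x."""
    yt = rng.standard_normal(n_samples)
    for t in reversed(range(T)):
        f = np.column_stack([
            np.full((n_samples, 1), x),
            yt,
            np.full(n_samples, t / T),
        ])
        eps_hat = denoiser.predict(f)
        # Standard DDPM posterior mean, then add noise except at the last step.
        yt = (yt - (1 - alphas[t]) / np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            yt += np.sqrt(1 - alphas[t]) * rng.standard_normal(n_samples)
    return yt

draws = sample(1.0)                         # predictive samples at x = 1.0
lo, hi = np.quantile(draws, [0.05, 0.95])   # a 90% predictive interval
```

Because sampling returns full predictive draws rather than a single point estimate, any functional of p(y | x) (quantiles, tail probabilities, intervals) falls out directly, which is what makes the tree-plus-diffusion combination attractive for calibrated uncertainty.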