Diana Cai
@dianarycai.bsky.social
Machine learning & statistics researcher @ Flatiron Institute. Posts on probabilistic ML, Bayesian statistics, decision making, and AI/ML for science.
www.dianacai.com
Check out my poster today (Thurs) during the 11am–2pm session, Exhibit Hall C/D/E, poster #602.

"Fisher meets Feynman: score-based variational inference with a product of experts" (NeurIPS spotlight)

with Robert Gower, David Blei, and Lawrence Saul
@flatironinstitute.org #NeurIPS2025
December 4, 2025 at 3:19 PM
🔬 Data acquisition for expensive scientific workflows

I design methods that guide what to measure next in costly experiments and simulations. Applications include materials design, where physics-aware active search algorithms accelerate stability predictions, and genomics.
November 7, 2025 at 2:47 PM
🧠 Black-box inference

I develop black-box probabilistic inference algorithms, including:
* Fast, flexible variational inference via score matching
* MCMC including for costly scientific simulations
* Simulation-based inference in misspecified & hierarchical settings
November 7, 2025 at 2:47 PM
I'm on the academic job market!

I design and analyze probabilistic machine-learning methods, motivated by real-world scientific constraints and developed in collaboration with scientists in biology, chemistry, and physics.

A few highlights of my research areas are:
November 7, 2025 at 2:47 PM
Enter the Feynman identity, originally developed for loop integrals in quantum field theory:

It expresses a product of multiple fractions as an integral over the simplex.

➡️ The PoE becomes a continuous mixture of t distributions, which gives us a way to estimate Z and to sample from the PoE
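For concreteness, here is a numerical sanity check of the two-factor case of the Feynman identity, 1/(AB) = ∫₀¹ du / (uA + (1−u)B)², using simple midpoint quadrature. This is a generic illustration of the identity, not the paper's estimator:

```python
# Numerical check of the two-factor Feynman identity:
#   1/(A*B) = ∫_0^1 du / (u*A + (1-u)*B)^2
# For n factors, 1/(A_1 ... A_n) becomes an (n-1)!-weighted
# integral over the probability simplex.

def feynman_two_factor(A, B, n_grid=100_000):
    """Midpoint-rule estimate of the integral over u in [0, 1]."""
    h = 1.0 / n_grid
    total = 0.0
    for i in range(n_grid):
        u = (i + 0.5) * h
        total += h / (u * A + (1.0 - u) * B) ** 2
    return total

A, B = 2.0, 3.0
print(feynman_two_factor(A, B), 1.0 / (A * B))  # both ≈ 1/6
```

For positive A and B the integrand is a single denominator raised to a power, which is what makes the product tractable once the integral is treated as a mixture.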
October 27, 2025 at 12:51 PM
We construct a variational family that's a weighted product of multivariate t-experts. It captures skew, heavy tails, and multi-modality.

Products of experts (PoEs) are powerful -- but the normalizing constant Z is usually intractable and sampling is hard.
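As a toy illustration of the normalization problem (my sketch, not the paper's construction): an unnormalized product of two 1D Student-t experts, with Z computed by brute-force quadrature, which is only feasible because the example is one-dimensional:

```python
import math

def t_pdf(x, df, loc=0.0):
    """Student-t density with `df` degrees of freedom, shifted by `loc`."""
    z = x - loc
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + z * z / df) ** (-(df + 1) / 2)

def poe_unnorm(x):
    """Unnormalized product of two t-experts centered at -1 and +1."""
    return t_pdf(x, df=3, loc=-1.0) * t_pdf(x, df=3, loc=1.0)

# Brute-force normalizing constant Z on a wide grid (trapezoid rule);
# in higher dimensions this integral has no such cheap approximation.
xs = [-50.0 + i * 0.01 for i in range(10_001)]
ys = [poe_unnorm(x) for x in xs]
Z = 0.01 * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
print(Z)
```

Note that the product of two normalized densities is not itself normalized: Z here is well below 1, and in general it depends on all the experts' parameters at once.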
October 27, 2025 at 12:51 PM
Fisher meets Feynman! 🤝

We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.

To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)
October 27, 2025 at 12:51 PM

Saw a talk yesterday where they mentioned the automatic statistician. What a fun throwback 😊
October 24, 2025 at 3:27 PM
Come check out our #AISTATS2025 poster on Sunday, Hall A–E, poster #79.

Previously we showed how to fit a full-covariance Gaussian approximation via "batch and (score) match." Here we show how to make the update cheaper using a "patch" step that projects the update onto a low-rank plus diagonal form.
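To give a feel for the target structure (this is a generic truncated-eigendecomposition sketch, not the paper's actual patch projection), here is one way to approximate a dense symmetric matrix as diagonal plus low rank:

```python
import numpy as np

def diag_plus_low_rank(S, rank):
    """Generic sketch: approximate symmetric S as D + U @ U.T with D
    diagonal and U of width `rank` (an illustration of the low-rank +
    diagonal structure, NOT the paper's patch step)."""
    w, V = np.linalg.eigh(S)
    top = np.argsort(w)[-rank:]           # indices of the largest eigenvalues
    U = V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
    D = np.diag(np.diag(S - U @ U.T))     # absorb the remainder's diagonal
    return D, U

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
S = A @ A.T + np.eye(6)                   # a dense SPD "covariance"
D, U = diag_plus_low_rank(S, rank=3)
approx = D + U @ U.T
```

Storing D and U costs O(d + d·rank) instead of O(d²), which is the point of the cheaper update.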
May 2, 2025 at 1:41 PM
Come by our #NeurIPS2024 spotlight poster on EigenVI: score-based variational inference with orthogonal function expansions!

Friday 7:30pm, East Exhibit Hall A-C #3900

Link: arxiv.org/abs/2410.24054
December 11, 2024 at 8:08 PM
Heading to #NeurIPS2024 next week to present EigenVI: score-based variational inference with orthogonal function expansions.
Link: arxiv.org/abs/2410.24054
Poster: Friday 7:30pm, East Exhibit Hall A-C #3900
December 6, 2024 at 4:41 PM