Stefan T. Radev
@stefanradev.bsky.social
Assistant Professor at Rensselaer Polytechnic Institute (RPI)

Bayesian | Computational guy | Name dropper | Deep learner | Book lover

Opinions are my own.
Reposted by Stefan T. Radev
🧠 Check out the classic examples from Bayesian Cognitive Modeling: A Practical Course (Lee & Wagenmakers, 2013), translated into step-by-step tutorials with BayesFlow!

Interactive version: kucharssim.github.io/bayesflow-co...

PDF: osf.io/preprints/ps...
Introduction – Amortized Bayesian Cognitive Modeling
kucharssim.github.io
May 30, 2025 at 2:28 PM
Reposted by Stefan T. Radev
Finite mixture models are useful when data come from multiple latent processes.

BayesFlow allows:
• Approximating the joint posterior of model parameters and mixture indicators
• Inference for independent and dependent mixtures
• Amortization for fast and accurate estimation

📄 Preprint
💻 Code
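As a rough, library-agnostic illustration of the latent-process idea behind mixtures (plain NumPy, not the BayesFlow API; the helper `simulate_mixture` is hypothetical), a finite Gaussian mixture simulator that exposes both the observations and the latent indicators can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mixture(n, weights, means, sds):
    """Draw n observations from a finite Gaussian mixture.

    Returns the observations together with the latent component
    indicators -- the two quantities whose joint posterior an
    amortized approximator would target.
    """
    weights = np.asarray(weights)
    z = rng.choice(len(weights), size=n, p=weights)  # latent indicators
    x = rng.normal(loc=np.asarray(means)[z], scale=np.asarray(sds)[z])
    return x, z

x, z = simulate_mixture(1000, weights=[0.3, 0.7],
                        means=[-2.0, 2.0], sds=[1.0, 1.0])
print(round(z.mean(), 2))  # empirical share of component 1
```

In an amortized setting, many such `(x, z, parameters)` triples would be simulated up front to train a neural approximator once, after which posterior draws for new datasets are nearly free.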
February 11, 2025 at 8:48 AM
Reposted by Stefan T. Radev
A reminder of our talk this Thursday (30th Jan) at 11am GMT. Paul Bürkner (TU Dortmund University) will talk about "Amortized Mixture and Multilevel Models". Sign up at listserv.csv.warwick... to receive the link.
January 27, 2025 at 9:04 AM
Reposted by Stefan T. Radev
4/ Amortized Bayesian Workflow (Extended Abstract)

jointly led by @marvinschmitt.com and @chengkunli.bsky.social, and with @avehtari.bsky.social @paulbuerkner.com @stefanradev.bsky.social

MCMC + amortized methods for the best of both worlds (speed & guarantees!)

arxiv.org/abs/2409.04332
Amortized Bayesian Workflow (Extended Abstract)
Bayesian inference often faces a trade-off between computational speed and sampling accuracy. We propose an adaptive workflow that integrates rapid amortized inference with gold-standard MCMC techniqu...
arxiv.org
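The "best of both worlds" idea can be illustrated with a toy importance-sampling check (a minimal sketch of the underlying diagnostic logic, not the paper's adaptive workflow): re-weight cheap amortized draws toward the exact posterior and fall back to MCMC when the effective sample size collapses. Here the "amortized" approximation is just a too-wide normal standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Stand-in "exact" log posterior: standard normal (unnormalized)
    return -0.5 * theta**2

def ess_fraction(draws, log_q):
    """Effective-sample-size fraction after importance re-weighting
    amortized draws toward the exact posterior."""
    log_w = log_target(draws) - log_q
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    return 1.0 / (len(w) * np.sum(w**2))

# Amortized approximation: a slightly too-wide normal proposal
sd_q = 1.3
draws = rng.normal(0.0, sd_q, size=4000)
log_q = -0.5 * (draws / sd_q) ** 2 - np.log(sd_q)

frac = ess_fraction(draws, log_q)
print(round(frac, 2))
# A low fraction would flag this dataset for gold-standard MCMC instead
```

The actual paper uses more refined diagnostics (e.g., Pareto-smoothed importance weights), but the decision structure is the same: trust the fast amortized draws only where a check like this passes.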
December 7, 2024 at 8:50 AM
Reposted by Stefan T. Radev
Any single analysis hides an iceberg of uncertainty.

Sensitivity-aware amortized inference explores the iceberg:
⋅ Test alternative priors, likelihoods, and data perturbations
⋅ Deep ensembles flag misspecification issues
⋅ No model refits required during inference

🔗 openreview.net/forum?id=Kxt...
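A closed-form toy analogue of the first bullet (not the paper's neural method): in a conjugate normal-normal model, re-evaluating the posterior under alternative priors shows directly how much a conclusion depends on the prior choice. The helper `posterior_mean` below is illustrative:

```python
import numpy as np

def posterior_mean(y, prior_mu, prior_sd, noise_sd=1.0):
    """Conjugate normal-normal posterior mean for the location parameter."""
    n = len(y)
    prec = 1.0 / prior_sd**2 + n / noise_sd**2
    return (prior_mu / prior_sd**2 + y.sum() / noise_sd**2) / prec

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=20)

# Sensitivity analysis: same data, alternative priors
for prior_sd in (0.1, 1.0, 10.0):
    print(prior_sd, round(posterior_mean(y, prior_mu=0.0, prior_sd=prior_sd), 3))
```

With amortization, the analogous comparison for a complex model needs no refits: the trained network simply conditions on the perturbed prior settings.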
November 25, 2024 at 10:52 AM
Reposted by Stefan T. Radev
Prior specification is one of the hardest tasks in Bayesian modeling.

In our new paper, we (Florence Bockting, @stefanradev.bsky.social, and I) develop a method for expert prior elicitation using generative neural networks and simulation-based learning.

arxiv.org/abs/2411.15826
Expert-elicitation method for non-parametric joint priors using normalizing flows
We propose an expert-elicitation method for learning non-parametric joint prior distributions using normalizing flows. Normalizing flows are a class of generative models that enable exact, single-step...
arxiv.org
November 26, 2024 at 7:11 AM