This highlights the need for better model design for parameter estimation. 🚀
Open-sourced code: github.com/sarthmit/par...
For posterior estimation, we approximate p(θ | dataset) with:
🔹 Gaussian approx.
🔹 Normalizing Flows
🔹 Advanced models: Diffusion, Flow-Matching, Iterated Denoising Energy Matching
Surprising insight next! (Minimal sketch of the Gaussian head below.)
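As a rough illustration of the simplest option above, here's a minimal PyTorch sketch (my own toy setup, not the paper's code; the summary network, dimensions, and simulator are all assumptions) of a Gaussian posterior head trained by amortized maximum likelihood:

```python
import torch
import torch.nn as nn

# Hypothetical toy task: infer the mean theta of a Gaussian from a dataset.
# A permutation-invariant summary (mean-pooled MLP features) conditions a
# Gaussian head that outputs the mean and log-std of q(theta | dataset).
class GaussianPosterior(nn.Module):
    def __init__(self, x_dim=2, hidden=64):
        super().__init__()
        self.summary_net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, 2 * x_dim)  # mu and log-sigma per dim

    def forward(self, x):                    # x: (batch, n_points, x_dim)
        s = self.summary_net(x).mean(dim=1)  # pool over dataset points
        mu, log_sigma = self.head(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_sigma.exp())

model = GaussianPosterior()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    theta = torch.randn(32, 2)                       # sample true parameters
    x = theta[:, None, :] + torch.randn(32, 50, 2)   # simulate datasets
    loss = -model(x).log_prob(theta).sum(-1).mean()  # amortized NLL
    opt.zero_grad(); loss.backward(); opt.step()
```

Swapping the Gaussian head for a normalizing flow (or a diffusion / flow-matching sampler) changes only the conditional density model, not the training recipe.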
For point estimation, we use:
🔹 Maximum Likelihood (MLE)
🔹 Maximum-a-Posteriori (MAP)
(Quick sketch of the two objectives below.)
But what about posterior estimation?
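For concreteness, here's a minimal sketch of how the two point objectives differ, assuming a toy Gaussian likelihood with unit variance and a standard-normal prior on θ (my illustration):

```python
import torch

x = torch.randn(100, 2) + 3.0               # toy data with unknown mean theta
theta = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.1)

for _ in range(200):
    nll = 0.5 * ((x - theta) ** 2).sum()    # -log p(x | theta), unit variance
    log_prior = -0.5 * (theta ** 2).sum()   # standard-normal prior over theta
    loss = nll                              # MLE: maximize likelihood alone
    # loss = nll - log_prior                # MAP: add the prior (uncomment)
    opt.zero_grad(); loss.backward(); opt.step()

print(theta)  # MLE recovers the sample mean; MAP would shrink it toward 0
```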
Through extensive in- and out-of-distribution evaluations, we compare point estimation vs. full posterior estimation.
Two approaches:
📌 Point Estimation (MLE/MAP) – Optimizes for a single parameter value
📊 Full Posterior Estimation – Approximates the full distribution (MCMC, VI)
Which is best for amortized inference? We find out! 👇
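In symbols (my paraphrase, writing D for a dataset and θ for its parameters):

$$
\hat{\theta}(\mathcal{D}) = \arg\max_{\theta}\; \log p(\mathcal{D} \mid \theta)\,[{}+ \log p(\theta)]
\qquad \text{vs.} \qquad
q_{\phi}(\theta \mid \mathcal{D}) \approx p(\theta \mid \mathcal{D}),
$$

where the bracketed log-prior term turns MLE into MAP.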
Code: github.com/sarthmit/par...
Posterior over parameters for new datasets provided in-context, obtained through a single forward pass instead of MCMC, etc.
Fun connections to learned optimizers, meta-learning, etc.
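To make the "just inference" point concrete, here's a usage sketch continuing the hypothetical GaussianPosterior from earlier (my illustration, not the repo's API):

```python
# A new dataset arrives in-context; the posterior falls out of one forward
# pass. No per-dataset MCMC chains or VI optimization loops at test time.
new_x = torch.randn(1, 50, 2) + torch.tensor([1.0, -2.0])  # unseen dataset
with torch.no_grad():
    posterior = model(new_x)             # single pass through the trained net
    samples = posterior.sample((1000,))  # 1000 posterior draws over theta
print(samples.mean(0), samples.std(0))   # should concentrate near (1, -2)
```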
ICL in LLMs: p(ans | question, examples) for different examples
Multi-task (RL or otherwise): p(next action | environment) for different environments
Key insight: Train across diverse contexts using a shared language.
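One way to read that shared language (my paraphrase): all three fit the same amortized objective,

$$
\max_{\phi}\; \mathbb{E}_{(\text{context},\,\text{target})}\big[\log q_{\phi}(\text{target} \mid \text{context})\big],
$$

with (context, target) instantiated as (dataset, parameters), (question + examples, answer), or (environment history, next action).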