Juan L. Gamella, Ozan Sener, Jens Behrmann, Guillermo Sapiro, Jörn Jacobsen, Marco Cuturi.
We hope RoPE helps reframe model misspecification as a learning problem that requires real-world data to solve.
We hope RoPE pushes SBI toward:
✅ Embracing real-world constraints
✅ Blending domain knowledge + data
✅ Treating robust inference as a learning problem whose objective is tied to the downstream use of the inferred posterior
1️⃣ Uses a small calibration set of real (x, θ) pairs
2️⃣ Learns a correction from simulated to real observations using optimal transport
3️⃣ Enables simulation-based inference you can actually trust
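The optimal-transport correction in step 2️⃣ can be illustrated with a minimal entropic OT (Sinkhorn) sketch in numpy. This is an assumption-laden toy, not the paper's implementation: the data, cost, and barycentric-projection step are all stand-ins for illustration.

```python
# Illustrative sketch (NOT the RoPE implementation): couple a batch of
# simulated observations with a small "real" calibration set via
# entropic optimal transport, then map simulated points toward the
# real data with a barycentric projection.
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iters=200):
    """Entropic OT plan between uniform marginals for a given cost matrix."""
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    K = np.exp(-cost / reg)                  # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                 # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]       # transport plan

rng = np.random.default_rng(0)
x_sim = rng.normal(0.0, 1.0, size=(50, 2))   # simulated observations
x_real = x_sim[:10] + 0.5                     # toy "real" calibration set (shifted)

# Squared-Euclidean cost between every simulated and real observation
cost = ((x_sim[:, None, :] - x_real[None, :, :]) ** 2).sum(-1)
plan = sinkhorn_plan(cost)

# Barycentric projection: each simulated point moves toward the real data
x_corrected = (plan @ x_real) / plan.sum(axis=1, keepdims=True)
```

In practice one would compute the cost between learned summary statistics rather than raw observations, but the coupling-then-project structure is the same.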
RoPE reframes misspecification as a problem of posterior inaccuracy, not simulator/data mismatch, in contrast to how model misspecification is often defined in the literature.
Neural SBI doesn’t — unless we teach it how.
🔑 Insight: To make SBI robust, show it real-world data.
And use labeled data to validate the newly learned inference pipeline.
🧠 Neural SBI often overfits to quirks in simulators.
🤔 Simpler methods (like ABC with handcrafted stats) often perform better when simulators are slightly wrong.
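For readers unfamiliar with the baseline, here is a hedged toy sketch of ABC rejection sampling with one handcrafted summary statistic (the sample mean). The simulator, prior, and tolerance are all illustrative choices, not from the paper.

```python
# Toy ABC rejection sampler with a handcrafted summary statistic.
# All settings (simulator, prior, tolerance) are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=100):
    """Toy simulator: Gaussian observations with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    """Handcrafted summary statistic: the sample mean."""
    return x.mean()

x_obs = simulator(2.0)        # stand-in for the real observation
s_obs = summary(x_obs)

accepted = []
for _ in range(5000):
    theta = rng.uniform(-5, 5)                         # draw from the prior
    if abs(summary(simulator(theta)) - s_obs) < 0.1:   # tolerance epsilon
        accepted.append(theta)

posterior_mean = np.mean(accepted)
```

Because the summary compresses the data to one robust statistic, a slightly wrong simulator perturbs the accepted set less than it would perturb a neural posterior trained on full simulated observations.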
Real-world practitioners always ask:
“But what if the simulator is off?”
I used to think this was a problem with the simulator, not with SBI. Now I believe it is the central issue with existing SBI algorithms.