Arik Reuter
arikreuter.bsky.social
University of Cambridge and
Max Planck Institute for Intelligent Systems

I'm interested in amortized inference/PFNs/in-context learning for challenging probabilistic and causal problems.

https://arikreuter.github.io/
We are excited to see so many diverse ideas around this concept being actively researched and believe that this paradigm could be the “new wave” of causal ML. We look forward to many interesting discussions and collaborations in the future!

[7/7]
September 25, 2025 at 9:25 AM
as well as Vahid Balazadeh and Hamidreza Kamkari (“CausalPFN: Amortized Causal Effect Estimation via In-Context Learning”) on their strong concurrent works applying PFNs to causal inference, which were also accepted at NeurIPS.

[6/7]
We’d also like to congratulate Anish Dhir and Cristiana Diaconu (“Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning”),

[5/7]
Do-PFN can be used to answer interventional questions such as: “What is the effect of a certain medication?” We demonstrate through extensive experiments that Do-PFN is a highly effective method whose working principles could transform the whole field of causal machine learning.

[4/7]
Do-PFN is a Prior-Data-Fitted network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN is trained on millions of synthetic datasets and learns to predict causal effects from real-world observational studies.

[3/7]
This is joint work with Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard Schölkopf from the University of Freiburg, the MPI for Intelligent Systems, the University of Cambridge, ELLIS Institute Tübingen and Prior Labs. We’d like to thank our amazing team for making this project possible.

[2/7]
This is joint work with Jake Robertson @jakemrobertson.bsky.social (shared), Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard Schölkopf.

Check out the paper at: arxiv.org/abs/2506.06039

[8/8]
Do-PFN: In-Context Learning for Causal Effect Estimation
June 10, 2025 at 9:33 AM
How does it relate to TabPFN?

Do-PFN is based on the same principles as TabPFN and thus directly inherits its strengths. While TabPFN is state-of-the-art for making predictions, Do-PFN excels at inferring causal effects. [7/8]
Why is it different?

Do-PFN is a radical new approach to causal inference, replacing standard assumptions of a ground-truth causal model (Pearl) or structural assumptions (Rubin) with a prior over SCMs—our modeling assumptions lie in our synthetic data-generating process. [6/8]
How does it work?

Pre-trained on synthetic datasets drawn from structural causal models (SCMs), Do-PFN learns across millions of causal structures. For each structure, it learns to predict the effects of causal interventions from simulated interventional data. [5/8]
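The pre-training recipe described in this post can be illustrated with a toy synthetic data generator. Everything below (a linear-Gaussian SCM family, 0.5 edge probability, 5 variables, hard interventions) is an illustrative assumption for exposition, not the actual prior Do-PFN is trained on:

```python
import numpy as np

def sample_linear_scm(d, rng):
    """Sample a random linear SCM over d variables: a random DAG with
    Gaussian edge weights (upper-triangular = topological order)."""
    W = np.triu(rng.normal(size=(d, d)), k=1)   # candidate edge weights
    W *= rng.random((d, d)) < 0.5               # keep each edge w.p. 0.5
    return W

def simulate(W, n, rng, do=None):
    """Draw n samples; do=(j, v) clamps variable j to v, i.e. a hard
    intervention that cuts its incoming edges."""
    d = W.shape[0]
    X = np.zeros((n, d))
    for j in range(d):  # columns are already in topological order
        if do is not None and do[0] == j:
            X[:, j] = do[1]
        else:
            X[:, j] = X @ W[:, j] + rng.normal(size=n)
    return X

rng = np.random.default_rng(0)
W = sample_linear_scm(d=5, rng=rng)
obs = simulate(W, n=1000, rng=rng)                # observational "context" set
intv = simulate(W, n=1000, rng=rng, do=(0, 1.0))  # outcomes under do(X_0 = 1)
# Repeating this over millions of sampled SCMs yields (observational data,
# intervention) -> interventional-outcome pairs for in-context training.
```
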
What is our approach?

Do-PFN is a prior-data-fitted network (PFN) for causal effect estimation. Based on TabPFN, Do-PFN relies solely on observational data and does not require exact knowledge of how the variables in a causal problem interact. [4/8]
The challenge:

However, due to confounding factors and small sample sizes, causal information is difficult to extract from observational data without strong additional assumptions, such as a known, fixed causal graph or unconfoundedness. [3/8]
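A minimal numerical illustration of the confounding problem (the SCM and all coefficients below are invented for this example): an unobserved common cause Z drives both treatment T and outcome Y, so regressing Y on T alone overstates the true effect of 2.0, while adjusting for Z, which is only possible if Z were observed, recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounded linear SCM: Z -> T, Z -> Y, and T -> Y with true effect 2.0.
z = rng.normal(size=n)                       # unobserved confounder
t = 1.5 * z + rng.normal(size=n)             # treatment depends on Z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)   # outcome depends on T and Z

# Naive observational estimate: slope of Y on T alone (biased upward here).
naive = np.cov(t, y)[0, 1] / np.var(t, ddof=1)

# Adjusted estimate: regress Y on T and Z jointly (needs Z to be observed).
adjusted = np.linalg.lstsq(np.column_stack([t, z]), y, rcond=None)[0][0]

print(f"true: 2.0  naive: {naive:.2f}  adjusted: {adjusted:.2f}")
```
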
The premise:

Causal questions, such as “What will be the effect of a medication?”, are typically addressed in carefully conducted experiments. While controlled experiments can be expensive or even impossible, passively observed data is often readily available. [2/8]
Great paper!
February 7, 2025 at 10:00 AM