Christoph Molnar
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books

Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Pinned
Interested in machine learning in science?

Timo and I recently published a book, and even if you are not a scientist, you'll find useful overviews of topics like causality and robustness.

The best part is that you can read it for free: ml-science-book.com
Using feature importance to interpret your models?

This paper might be of interest to you. Papers by @gunnark.bsky.social are always worth checking out.
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal such cooperation, since they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
July 8, 2025 at 7:13 AM
My stock portfolio is deep in the red, and tariffs by the Trump admin might be the cause. Could an LLM have been used to calculate them? It made me rethink how LLMs shape decisions, from big, global-economy-wrecking ones to everyday ones.

Read more in my latest blog post: mindfulmodeler.substack.com/p/whos-reall...
Who’s Really Making the Decisions?
LLMs, tariffs, and the silent takeover of decisions
mindfulmodeler.substack.com
April 8, 2025 at 12:56 PM
SHAP interpretations depend on background data — change the data, change the explanation. A critical but often overlooked issue in model interpretability.
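A minimal sketch of the effect, assuming the shap package's KernelExplainer and a placeholder model and dataset (none of this is from the post itself):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data and model, purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)

x = X[:1]  # the instance to explain

# Two different background (reference) datasets
bg_full = X[:100]            # a sample of the full data
bg_shifted = X[X[:, 0] > 1]  # a shifted subpopulation

phi_full = shap.KernelExplainer(model.predict, bg_full).shap_values(x)
phi_shifted = shap.KernelExplainer(model.predict, bg_shifted).shap_values(x)

# Same model, same instance, different attributions: SHAP values are
# measured relative to the background data's average prediction.
print(phi_full)
print(phi_shifted)
```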

Read more:
SHAP Interpretations Depend on Background Data — Here’s Why
Or why height doesn't matter in the NBA
mindfulmodeler.substack.com
April 1, 2025 at 2:52 PM
I recently joined The AI Fundamentalists with my co-author Timo Freiesleben to discuss our book Supervised Machine Learning for Science. We explored how scientists can leverage ML while maintaining rigor and embedding domain knowledge.
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1 - The AI Fundamentalists
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why...
www.buzzsprout.com
March 28, 2025 at 9:24 AM
The 3rd edition of Interpretable Machine Learning is out! 🎉 Major cleanup, better examples, and new chapters on Data & Models, Interpretability Goals, Ceteris Paribus, and LOFO Importance.

The book remains free to read for everyone, but you can also buy the ebook or the paperback.
March 13, 2025 at 12:09 PM
Has anyone seen counterfactual explanations for machine learning models in the wild?

They are often discussed in research papers, but I have yet to see them used in an actual process or product.
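For anyone who hasn't run into them: a counterfactual explanation is a minimal change to an input that flips the model's decision. A toy sketch of the idea (a greedy single-feature search on a placeholder model, not any particular method from the literature):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy classifier, purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, feature=0, step=0.05, max_steps=200):
    """Nudge one feature until the predicted class flips."""
    original = model.predict([x])[0]
    cf = np.array(x, dtype=float)
    for _ in range(max_steps):
        cf[feature] += step if original == 0 else -step
        if model.predict([cf])[0] != original:
            return cf  # "had this feature been a bit higher/lower ..."
    return None

print(counterfactual(np.array([-0.5, -0.2])))
```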
March 12, 2025 at 1:59 PM
I've found the Quarto documentation to be really good, especially the search: quarto.org
Quarto
An open source technical publishing system for creating beautiful articles, websites, blogs, books, slides, and more. Supports Python, R, Julia, and JavaScript.
quarto.org
March 12, 2025 at 10:11 AM
It's still hard for me to predict when it fails. For example, I told it to simply check the placement of citations in a markdown file, which should be doable with a regex, and Claude failed. But a similar task worked out the other day.
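For the simple case, a regex along these lines would do it (assuming Pandoc/Quarto-style [@key] citations; the exact convention in my file is beside the point):

```python
import re

# Matches Pandoc/Quarto citations like [@molnar2022] or [@a; @b]
CITATION = re.compile(r"\[@[A-Za-z0-9_:.-]+(?:;\s*@[A-Za-z0-9_:.-]+)*\]")

def citation_positions(markdown_text):
    """Return (line_number, citation) pairs for placement checks."""
    return [
        (lineno, m.group())
        for lineno, line in enumerate(markdown_text.splitlines(), start=1)
        for m in CITATION.finditer(line)
    ]
```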
March 12, 2025 at 10:10 AM
Trying Claude Code for some tasks. Paradoxically, it's most expensive when it doesn't work: it fails, retries a couple of times, and burns through tokens.

So sometimes it's 20 cents for saving you 20 minutes of work.

Other times it's $1 for wasting 10 minutes.
March 12, 2025 at 9:06 AM
I'm only waiting for the print proof; if it looks good, I'll publish the third edition of Interpretable Machine Learning next week.

As always, it was more work than anticipated, especially moving the entire book project from Bookdown to Quarto.
March 7, 2025 at 10:26 AM
Can an office game outperform machine learning?

My most recent post on Mindful Modeler dives into the wisdom of the crowds and prediction markets.

Read the full story here:
Can an office game outperform machine learning?
Wisdom of the crowds, prediction markets, and more fun in the workplace.
buff.ly
February 25, 2025 at 2:42 PM
5/ It was stressful, but I don’t regret it. I learned a lot and definitely feel validated in my skills again.

Full story & solution details: https://buff.ly/4gHZYHD
How to win an ML competition beyond predictive performance
A dive into the challenges and winning solution
mindfulmodeler.substack.com
February 4, 2025 at 1:18 PM
4/ Writing Supervised ML for Science at the same time was a huge plus—competition & book writing fed into each other (e.g., uncertainty quantification).
February 4, 2025 at 1:18 PM
3/ One key insight: SHAP’s reference data matters! I used historical forecasts for interpretability. Also combined SHAP with ceteris paribus profiles for sensitivity analysis.
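A ceteris paribus profile is cheap to compute: sweep one feature over a grid while holding the rest of the instance fixed. A generic sketch with a placeholder model (not the competition code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder model, purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)

def ceteris_paribus(model, x, feature, grid):
    """Predictions for one instance as a single feature sweeps a grid."""
    X_grid = np.tile(x, (len(grid), 1))
    X_grid[:, feature] = grid
    return model.predict(X_grid)

profile = ceteris_paribus(model, X[0], feature=0, grid=np.linspace(-2, 2, 50))
```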
February 4, 2025 at 1:18 PM
2/ My approach:
✅ XGBoost ensemble, quantile loss
✅ SHAP for explainability + custom waterfall plots + ceteris paribus plots
✅ Conformal prediction to fix interval coverage (see the sketch after this list)
✅ Auto-generated reports with Quarto
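Not the competition code, but a minimal sketch of the conformal step (split conformal on quantile regressors, CQR-style; scikit-learn's quantile gradient boosting stands in for the XGBoost ensemble):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, noise=20.0, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

# Quantile models for a nominal 80% interval
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X_tr, y_tr)

# Conformity score: how far each calibration point falls outside the interval
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))

# Widen both bounds by the empirical quantile of the scores,
# so the interval achieves the target coverage on new data
alpha = 0.2
n = len(scores)
q = np.quantile(scores, np.ceil((1 - alpha) * (n + 1)) / n)

def predict_interval(X_new):
    return lo.predict(X_new) - q, hi.predict(X_new) + q
```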
February 4, 2025 at 1:18 PM
1/ Years ago, I went full-time into writing & cut back on ML practice. At some point, I felt like an impostor, writing about ML but no longer practicing it. This competition about water supply forecasting on DrivenData ($500k prize pool) was a way back in.
February 4, 2025 at 1:18 PM
A year ago, I took a risk & spent quite some time on an ML competition. It paid off—I won 4th place overall & 1st in explainability!

Here's a summary of the journey, challenges, & key insights from my winning solution (water supply forecasting).
February 4, 2025 at 1:18 PM
OpenAI right now
January 29, 2025 at 9:51 AM
Deprecated was maybe the wrong word: it's no longer the default in the shap package, and there are faster alternatives.
January 22, 2025 at 12:39 PM
The connection between SHAP and LIME only holds when we represent the features differently for LIME and use a different weight function.
My take is that, while interesting, the connection can be misleading, since SHAP and the original LIME are very different, as you also say.
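For reference, the weight function in question is the Shapley kernel from the SHAP paper: fitting a weighted linear model on binary coalition vectors z' in {0,1}^M with these weights recovers the Shapley values.

```latex
\pi_{x}(z') = \frac{M - 1}{\binom{M}{|z'|}\,|z'|\,(M - |z'|)}
```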
January 22, 2025 at 10:32 AM
The original SHAP paper has been cited over 30k times.

The paper showed that attribution methods, like LIME and LRP, compute Shapley values (with some adaptations).

The paper also introduced estimation methods for Shapley values, like KernelSHAP, which today is deprecated.
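To make that last point concrete ("no longer the default", as the reply above puts it): the model-agnostic KernelExplainer still exists in today's shap package, but model-specific explainers such as TreeExplainer are the faster route. A sketch with a placeholder model:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy model, purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(scale=0.1, size=300)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-agnostic KernelSHAP: sampling-based and slow
phi_kernel = shap.KernelExplainer(model.predict, X[:50]).shap_values(X[:5])

# Tree-specific TreeSHAP: polynomial time, exact for tree ensembles
phi_tree = shap.TreeExplainer(model).shap_values(X[:5])
```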
January 22, 2025 at 8:15 AM
Not planned so far
January 21, 2025 at 4:20 PM
To this day, the Interpretable Machine Learning book is still my most impactful project. But as time went on, I dreaded working on it. Fortunately, I found the motivation again and I'm working on the 3rd edition. 😁

Read more here:
Why I almost stopped working on Interpretable Machine Learning
7 years ago I started writing the book Interpretable Machine Learning.
buff.ly
January 21, 2025 at 2:38 PM