Christoph Molnar
@christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books
Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Pinned
Interested in machine learning in science?
Timo and I recently published a book, and even if you are not a scientist, you'll find useful overviews of topics like causality and robustness.
The best part is that you can read it for free: ml-science-book.com
Using feature importance to interpret your models?
This paper might be of interest to you. Papers by @gunnark.bsky.social are always worth checking out.
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal such cooperation, since they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
July 8, 2025 at 7:13 AM
My stock portfolio is deep in the red, and tariffs by the Trump admin might be the cause. Could an LLM have been used to calculate them? It made me rethink how LLMs shape decisions, from big global-economy-wrecking ones to everyday decisions.
Who’s Really Making the Decisions?
LLMs, tariffs, and the silent takeover of decisions
mindfulmodeler.substack.com
April 8, 2025 at 12:56 PM
SHAP interpretations depend on background data — change the data, change the explanation. A critical but often overlooked issue in model interpretability.
Read more:
SHAP Interpretations Depend on Background Data — Here’s Why
Or why height doesn't matter in the NBA
mindfulmodeler.substack.com
April 1, 2025 at 2:52 PM
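To make the point concrete, here's a minimal sketch (my own toy numbers, not from the post) echoing the NBA-height example: the same model and the same player get different SHAP attributions depending on the background data. For an additive model the interventional Shapley value has a closed form, which keeps the demo short.

```python
import numpy as np

# Toy additive model: "scoring ability" from height (cm) and skill.
# All coefficients and data are made up for illustration.
def model(X):
    return 0.1 * X[:, 0] + 2.0 * X[:, 1]

def shap_additive(x, background):
    # For an additive model, the interventional Shapley value of a feature
    # is its contribution minus its average contribution in the background.
    contrib = np.array([0.1 * x[0], 2.0 * x[1]])
    baseline = np.array([0.1 * background[:, 0].mean(),
                         2.0 * background[:, 1].mean()])
    return contrib - baseline

x = np.array([200.0, 5.0])                             # a 200 cm player
general_pop = np.array([[170.0, 3.0], [175.0, 4.0]])   # average-height background
nba_players = np.array([[198.0, 6.0], [202.0, 7.0]])   # all-tall background

phi_pop = shap_additive(x, general_pop)   # height gets a large attribution
phi_nba = shap_additive(x, nba_players)   # height gets ~zero attribution
```

Against a general-population background, height explains a lot of the prediction; against other NBA players, the same height explains almost nothing.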
I recently joined The AI Fundamentalists with my co-author Timo Freiesleben to discuss our book Supervised Machine Learning for Science. We explored how scientists can leverage ML while maintaining rigor and embedding domain knowledge.
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1 - The AI Fundamentalists
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why...
www.buzzsprout.com
March 28, 2025 at 9:24 AM
The 3rd edition of Interpretable Machine Learning is out! 🎉 Major cleanup, better examples, and new chapters on Data & Models, Interpretability Goals, Ceteris Paribus, and LOFO Importance.
The book remains free to read for everyone. But you can also buy ebook or paperback.
March 13, 2025 at 12:09 PM
Has anyone seen Counterfactual Explanations for machine learning models somewhere in the wild?
They are often discussed in research papers, but I have yet to see them being used somewhere in an actual process or product.
March 12, 2025 at 1:59 PM
Trying Claude Code for some tasks. Paradoxically, it's most expensive when it doesn't work: it fails, then tries again a couple of times, burning through tokens.
So sometimes it's 20 cents for saving you 20 minutes of work.
Other times it's $1 for wasting 10 minutes.
March 12, 2025 at 9:06 AM
Only waiting for the print proof, but if it looks good, I'll publish the third edition of Interpretable Machine Learning next week.
As always, it was more work than anticipated—especially moving the entire book project from Bookdown to Quarto, which took a bit of effort.
March 7, 2025 at 10:26 AM
Can an office game outperform machine learning?
My most recent post on Mindful Modeler dives into the wisdom of the crowds and prediction markets.
Read the full story here:
Can an office game outperform machine learning?
Wisdom of the crowds, prediction markets, and more fun in the work place.
buff.ly
February 25, 2025 at 2:42 PM
A year ago, I took a risk & spent quite some time on an ML competition. It paid off—I won 4th place overall & 1st in explainability!
Here's a summary of the journey, challenges, & key insights from my winning solution (water supply forecasting)
February 4, 2025 at 1:18 PM
OpenAI right now
January 29, 2025 at 9:51 AM
The original SHAP paper has been cited over 30k times.
The paper showed that attribution methods like LIME and LRP compute Shapley values (with some adaptations).
It also introduced estimation methods for Shapley values, such as KernelSHAP, which is deprecated today.
January 22, 2025 at 8:15 AM
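For context, what KernelSHAP estimates are exact Shapley values, which you can compute by brute force for tiny models. A hypothetical sketch (my own toy setup; exponential in the number of features, so only feasible for a handful of them):

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley values by enumerating all coalitions.
    'Missing' features are filled in from the background data,
    which is what KernelSHAP approximates by sampling coalitions."""
    d = len(x)

    def value(S):
        # Expected model output with the features in S fixed to x.
        Z = background.copy()
        if S:
            Z[:, list(S)] = x[list(S)]
        return f(Z).mean()

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy model with an interaction between its two features.
f = lambda Z: Z[:, 0] * Z[:, 1]
phi = shapley_values(f, np.array([2.0, 3.0]), np.zeros((1, 2)))
# Efficiency: phi sums to f(x) minus the mean background prediction (6.0 here).
```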
To this day, the Interpretable Machine Learning book is still my most impactful project. But as time went on, I dreaded working on it. Fortunately, I found the motivation again and I'm working on the 3rd edition. 😁
Read more here:
Why I almost stopped working on Interpretable Machine Learning
7 years ago I started writing the book Interpretable Machine Learning.
buff.ly
January 21, 2025 at 2:38 PM
How I sometimes feel working on "traditional" machine learning topics instead of generative AI stuff 😂
January 14, 2025 at 7:41 AM
It's quite ironic how the people who build the best prediction models are such bad predictors themselves.
They throw all their knowledge about how to make good predictions overboard and just make claims like "AI will replace radiologists in a few years" or confident predictions of when to expect AGI.
December 17, 2024 at 7:55 AM
The problem with all these AI demos (especially for image and video generation):
They are the most impressive, cherry-picked examples. That includes cherry-picking prompts and themes that produced better results.
But as a user, you want good results for every prompt/theme relevant to your use case
December 17, 2024 at 7:50 AM
Looking for a Christmas gift for a stubborn Bayesian or an over-hyped AI enthusiast?
Modeling Mindsets is a short read to broaden your perspective on data modeling.
christophmolnar.com/books/modeli...
*Hat not included.
December 13, 2024 at 9:42 AM
My personal rules for AI-assisted writing:
• Use AI only for small and specific stuff, like grammar fixes or making suggestions for factual corrections.
• Never let an LLM change voice and tone.
• I review any changes made by AI.
December 13, 2024 at 9:21 AM
What a sad timeline, where vaccines — one of medicine's clearest wins with all upside and minimal downside — have become targets.
Can't we have like an anti-knee arthroscopy movement or whatever instead?
December 13, 2024 at 9:02 AM
Citing a non-deterministic, "hallucinating", and non-reproducible LLM output is wild.
While the norms and best practices are still evolving, citing them seems like the wrong approach.
(even wilder when some people add "ChatGPT" as their co-authors)
One of the weirder scholarly practices regarding generative AI that seems to have been normalized is citing chatbots.
I say normalized because many univs & scholarly associations recommend it as an element of proper scholarship.
But it doesn't make sense when you consider what a citation means. 1/
December 4, 2024 at 7:04 AM
What are Shapley interactions and why should you care about them?
This is a guest post by Julia, Max, Fabian and Hubert on my newsletter Mindful Modeler.
I also learned a lot from this post and definitely recommend checking out the shapiq package.
mindfulmodeler.substack.com/p/what-are-s...
What Are Shapley Interactions, and Why Should You Care?
A guest post by Julia, Max, Fabian and Hubert.
mindfulmodeler.substack.com
December 3, 2024 at 3:45 PM
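For readers who haven't met the concept: a short sketch of the math, following the standard Grabisch–Roubens definition that (to my understanding) packages like shapiq build on. For a value function $v$ over $n$ features $N$, the pairwise Shapley interaction index for features $i, j$ is:

```latex
\Phi_{ij} = \sum_{S \subseteq N \setminus \{i,j\}}
  \frac{|S|!\,(n-|S|-2)!}{(n-1)!}\,\Delta_{ij} v(S),
\qquad
\Delta_{ij} v(S) = v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S)
```

The discrete derivative $\Delta_{ij} v(S)$ is zero for every coalition exactly when $i$ and $j$ contribute purely additively, so $\Phi_{ij}$ measures how much the two features act together rather than alone.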
Is anyone aware of a completely AI-generated book that people actually read?
Excluding books that are dedicated "AI experiments" and where the book is more about the experiment.
Also excluding AI-assisted books where generative AI played a minor role
November 29, 2024 at 10:33 AM
The unofficial GIF-based pandas library documentation.
pandas.DataFrame.rolling
Alt: a panda bear is rolling down in the grass. It's a sideways roll, holding some type of object. I give 10/10.
media.tenor.com
November 27, 2024 at 2:38 PM
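And for anyone who wants the non-GIF semantics, a minimal sketch on toy data:

```python
import pandas as pd

# A 3-step rolling mean averages each value with the two before it.
df = pd.DataFrame({"bamboo": [1, 2, 3, 4, 5]})
df["rolling_mean"] = df["bamboo"].rolling(window=3).mean()
# The first two entries are NaN because the window isn't full yet.
```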
Without non-linear activation functions, neural networks would be linear models, no matter how many layers are stacked.
November 27, 2024 at 2:21 PM
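A quick numpy sketch of why (random toy weights): two stacked linear layers collapse algebraically into a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def two_layer(x):
    # Two stacked linear layers, no activation in between.
    return W2 @ (W1 @ x + b1) + b2

# The composition is itself one linear layer with these weights:
W, b = W2 @ W1, W2 @ b1 + b2

x = rng.normal(size=3)
assert np.allclose(two_layer(x), W @ x + b)
```

Insert a non-linearity (ReLU, tanh, ...) between the layers and this collapse is no longer possible; that's where the extra expressive power comes from.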
Got myself a Samsung Galaxy S9 tablet for note-taking, and I love it.
(and yes, I'm reading my own book here as a reference for another project, feeling like an imposter because I don't have everything memorized 😂)
November 27, 2024 at 9:23 AM