Martin Mundt
@martinmundt.bsky.social
Professor of Lifelong Machine Learning @ Uni Bremen | OWL-ML Lab: https://owl-ml.com | Board @ ContinualAI | QueerInAI | CoLLAs 2026 Program Chair | He/him 🏳️‍🌈🇪🇺
Ever wondered how LLMs learn over long time horizons & how hate speech checks deal with time?

Take a look at our new work "Chronoberg": an open-source dataset spanning 250 years of books, with an analysis of shifts in meaning & continual learning of LLMs:

arxiv.org/pdf/2509.22360

huggingface.co/datasets/spa...
October 6, 2025 at 9:42 AM
🎉"Aligning generalization between humans and machines" (w/ 25 incredible authors) is out now in #Nature Machine Intelligence: www.nature.com/articles/s42...

In short, we identified interdisciplinary commonalities & differences with respect to notions of, methods for, & evaluation of generalization
September 16, 2025 at 7:15 AM
Feeling really inspired after @collasconf.bsky.social: wonderful keynotes, exceptionally talented early-career spotlights & interactions that mattered!
And of course our own oral presentation as well :)

Very excited to be the next program chair & to ensure the success of CoLLAs in Romania next year!
August 19, 2025 at 7:04 AM
Just arrived in Philadelphia for @collasconf.bsky.social

Very excited about a fantastic program!

I’ll be around all week, so feel free to drop me a message or catch me directly at the debate on Monday morning, after our oral presentation on Tuesday, or in any of the poster sessions
August 11, 2025 at 12:54 AM
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...

Overwhelmingly happy to be part of RAI & to continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the Excellence Strategy!
May 23, 2025 at 11:57 AM
You may know batch-norm, but did you know it is a lightweight catalyst for open world learning?

In "BOWL: A Deceptively Simple Open World Learner" we leverage BN stats to enable OoD detection & rapid active + continual learning!

arxiv.org/abs/2402.04814

🚀 now accepted @collasconf.bsky.social
May 14, 2025 at 12:03 PM
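To make the BN-statistics idea from the BOWL post above more concrete: batch-norm layers already track running means & variances of features, and activations of unfamiliar inputs tend to drift from those accumulated statistics. Below is a minimal, generic sketch of using that drift as an OoD score; it is not BOWL's exact criterion (see the arXiv link above), and the function name is hypothetical.

```python
import torch
import torch.nn as nn

def bn_deviation_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Heuristic OoD score: how far a batch's activations deviate from the
    running statistics stored in the model's BatchNorm layers.
    Higher score = more out-of-distribution (a sketch, not BOWL's rule)."""
    scores, hooks = [], []

    def make_hook(bn: nn.BatchNorm2d):
        def hook(module, inputs, output):
            feats = inputs[0]                       # (N, C, H, W)
            mean = feats.mean(dim=(0, 2, 3))        # per-channel batch mean
            std = feats.var(dim=(0, 2, 3)).sqrt()   # per-channel batch std
            run_std = bn.running_var.sqrt() + 1e-5
            d_mean = (mean - bn.running_mean).abs() / run_std
            d_std = (std - run_std).abs() / run_std
            scores.append((d_mean + d_std).mean())
        return hook

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))

    model.eval()
    with torch.no_grad():
        model(x)

    for h in hooks:
        h.remove()
    return torch.stack(scores).mean()
```

Calling this on in-distribution vs. unfamiliar batches should, on average, give lower vs. higher scores; thresholding such a score is one cheap way to flag candidate open-world samples without any extra trained components.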
Horizontal, vertical, hybrid data partitioning: heterogeneity is tough to handle in federated learning!

🔥In "Scaling Probabilistic Circuits via Data Partitioning" - accepted at #UAI25 - we unify the different settings through aggregation of learned client distributions: arxiv.org/abs/2503.08141
May 8, 2025 at 7:19 AM
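For intuition on what "aggregation of learned client distributions" can look like in the horizontally partitioned case, here is a generic sketch: each client fits a density model on its local data, and the server combines them into a size-weighted mixture. This is an illustration only; the paper builds on probabilistic circuits and also covers vertical & hybrid partitioning, so the Gaussian mixtures and all names below are assumptions for the sketch.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

def fit_client_model(local_data: np.ndarray) -> GaussianMixture:
    """Each client learns a density estimate of its own data only."""
    return GaussianMixture(n_components=3, random_state=0).fit(local_data)

def server_log_density(x: np.ndarray, client_models, client_sizes) -> np.ndarray:
    """Log-density of x under the size-weighted mixture of client models."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    per_client = np.stack([m.score_samples(x) for m in client_models])  # (K, N)
    return logsumexp(per_client + np.log(w)[:, None], axis=0)           # (N,)

# Hypothetical usage with synthetic client data:
rng = np.random.default_rng(0)
clients = [rng.normal(loc=c, size=(200, 2)) for c in (0.0, 2.0, -1.5)]
models = [fit_client_model(d) for d in clients]
log_p = server_log_density(rng.normal(size=(5, 2)), models,
                           [len(d) for d in clients])
```

The mixture weights here simply reflect how much data each client holds; the point of the sketch is only that local models can be learned independently and then aggregated into a single global distribution at the server.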
🔥Our work "Where is the Truth? The Risk of Getting Confounded in a Continual World" was accepted as a spotlight poster at ICML!
arxiv.org/abs/2402.06434

-> we introduce continual confounding + the ConCon dataset, where confounders over time render continual knowledge accumulation insufficient ⬇️
May 2, 2025 at 9:48 AM
If you are attending #AAAI25, make sure to take part in our 3rd Continual Causality Bridge tomorrow, Feb 25th.

I can’t travel myself due to family medical reasons 😢, but we have an exciting program with amazing speakers on #continuallearning & #causality:

www.continualcausality.org/program/
February 24, 2025 at 1:54 PM
Why has continual ML not had its breakthrough yet?

In our new collaborative paper w/ many amazing authors, we argue that "Continual Learning Should Move Beyond Incremental Classification"!

We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges

arxiv.org/abs/2502.11927
February 18, 2025 at 1:33 PM
At each of the 5 life-cycle stages, we use various recent examples to make these parallels apparent.

In turn, they allow us to distill a first set of technical considerations & recommendations to initiate the next wave of research & combat some of the worst pitfalls. 3/4
February 6, 2025 at 2:38 PM
We first re-conceptualize the metaphor in terms of sourcing ingredients (data), conceiving recipes (instructions), the baking process (training), and tasting & selling the cake (evaluation & distribution). We then highlight how statistical ML assumptions underpin each of these. 2/4
February 6, 2025 at 2:38 PM
🍰Super proud of our newest work:
"The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and its Implications for Participation" - arxiv.org/abs/2502.03038

🍰We extend Yann LeCun’s AI cake analogy to relate socio-technical outcomes to the AI life-cycle!

More details below ⬇️ 1/4
February 6, 2025 at 2:38 PM
Is generalisation a process, an operation, or a product? 🤨

Read about the different ways generalisation is defined, parallels between humans & machines, methods & evaluation in our new paper: arxiv.org/abs/2411.15626

Co-authored with many smart minds as a product of Dagstuhl 🙏🎉
November 27, 2024 at 1:30 PM
I’m hiring 2 fully-funded PhD students to join our new Lifelong ML lab at the University of Bremen, where our mission is to make ML/AI more sustainable, adaptive, robust & inclusive.

Application deadline Dec. 10: www.uni-bremen.de/en/universit...

Please share or reach out!

#MachineLearning #AI #PhD
November 19, 2024 at 1:42 PM