Kasper Green Larsen
@kasperglarsen.bsky.social
Professor and Head of Algorithms, Data Structures and Foundations of Machine Learning at Computer Science, Aarhus University
Almost tight generalisation bounds for large margin voting classifiers and an optimal Majority-of-3-AdaBoosts weak-to-strong learner.

Accepted at COLT'25 🥳

arXiv: arxiv.org/pdf/2502.16462
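For intuition, here is a minimal sketch of the Majority-of-3-AdaBoosts construction as I would summarise it: run AdaBoost on three independent subsamples of the training set and output the point-wise majority vote. The subsample fraction, number of boosting rounds, and base learner below are illustrative placeholders, not the parameters analysed in the paper.

# Sketch: majority vote of three AdaBoost runs, each trained on an
# independent subsample of the training data (labels assumed in {-1, +1}).
# All hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def majority_of_3_adaboosts(X, y, subsample_frac=0.5, n_rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(3):
        idx = rng.choice(n, size=int(subsample_frac * n), replace=False)
        models.append(AdaBoostClassifier(n_estimators=n_rounds).fit(X[idx], y[idx]))

    def predict(X_test):
        votes = np.stack([m.predict(X_test) for m in models])  # shape (3, m)
        return np.sign(votes.sum(axis=0))  # majority of three +/-1 votes

    return predict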
May 5, 2025 at 5:46 PM
An almost tight understanding of AdaBoost's generalisation, a proof that Majority-of-3-AdaBoosts is an optimal weak-to-strong learner in expectation, and better margin-generalisation bounds for voting classifiers.

New preprint. And as mentioned yesterday, Mikael is on the job market 😉
February 25, 2025 at 6:27 AM
And Mikael presenting his second student paper at ALT’25 💪
February 24, 2025 at 2:12 PM
Arthur about to present our paper on sample compression schemes at ALT’25. He is also on the job market and is just as amazing!
February 24, 2025 at 1:54 PM
Proud advisor 🥹 Mikael presenting his single-authored paper at ALT’25. He is an amazing student and is on the postdoc job market (hint, hint 😉)
February 24, 2025 at 8:24 AM
Finally, completely tight margin-based generalisation bounds for halfspaces. A very cool proof combining random discretisations with Rademacher complexity in a highly non-trivial way.

New arXiv preprint: arxiv.org/abs/2502.13692
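For context, the classical margin bound for halfspaces obtained via Rademacher complexity (the kind of bound this preprint tightens) has, up to constants and as I recall it, the following form: with probability at least 1 − δ over n i.i.d. samples with ‖x‖ ≤ R, simultaneously for all unit-norm w and a fixed margin γ > 0,

\Pr\bigl[\, y\,\langle w, x\rangle \le 0 \,\bigr]
\;\le\;
\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\bigl[\, y_i\,\langle w, x_i\rangle \le \gamma \,\bigr]
\;+\; O\!\left(\frac{R}{\gamma\sqrt{n}}\right)
\;+\; O\!\left(\sqrt{\frac{\ln(1/\delta)}{n}}\right).

The new bound in the preprint is sharper than this; see the arXiv link above for the exact statement.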
February 20, 2025 at 9:42 AM
Using multi-color discrepancy to prove lower bounds for fair allocations of indivisible items among agents.

New arXiv preprint: arxiv.org/abs/2502.10516
February 18, 2025 at 6:49 AM
How many queries does it take to estimate the volume of the union of a set of geometric objects? We now have a tight lower bound, as well as improved bounds for the special case of Klee's measure problem.
Accepted at SoCG'25 (Symposium on Computational Geometry).

arxiv.org/abs/2410.00996
February 7, 2025 at 6:51 AM
Significantly improved sample complexity of replicable boosting via a majority of smooth-boosters.

New arXiv preprint: arxiv.org/abs/2501.18388
January 31, 2025 at 6:29 AM
A neat framework for analyzing learning algorithms based on sub-sampling the training data, with applications in boosting!

arxiv.org/abs/2402.02976

Accepted at ALT'25 🥳
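To make the sub-sampling pattern concrete, here is a toy wrapper of the kind of algorithm such an analysis applies to: a learner that only ever sees a uniformly random subsample of the training set. This is purely an illustration of the setup, with a hypothetical helper name, and not the framework from the paper.

import numpy as np

def train_on_subsample(base_fit, X, y, k, seed=0):
    # Fit base_fit (any function (X, y) -> model) on a uniform random
    # subsample of k training points. Illustrative only.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)
    return base_fit(X[idx], y[idx])

# Example: a boosting run that only sees half of the training data, e.g.
# train_on_subsample(lambda A, b: AdaBoostClassifier().fit(A, b), X, y, k=len(X) // 2)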
December 19, 2024 at 6:40 AM
In case you missed my awesome postdoc Arthur da Cunha's Oral presentation of our "Optimal Parallelization of Boosting" at #NeurIPS2024, I recorded a (slightly extended) version here.

youtu.be/BGZJMwhQc4U
December 13, 2024 at 5:53 PM