#machinelearning #deeplearning #probability #statistics #optimization #sampling
Joint work with Yunrui Guan and @shiqianma.bsky.social
We also introduce multiplier bootstrap bounds for obtaining finite-sample valid, data-driven confidence intervals.
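For intuition, here is a minimal sketch of a generic Gaussian-multiplier bootstrap confidence interval for a sample mean. This is a toy illustration of the multiplier bootstrap idea, not the estimators or bounds from the paper; the function name and the choice of target quantity are my own.

```python
import numpy as np

def multiplier_bootstrap_ci(x, alpha=0.05, n_boot=2000, seed=None):
    """Toy multiplier-bootstrap CI for the mean of a 1-D sample.

    Each bootstrap draw reweights the centered observations with
    i.i.d. N(0, 1) multipliers instead of resampling the data.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    mean = x.mean()
    centered = x - mean
    # Gaussian multipliers: one per observation and bootstrap replicate.
    w = rng.standard_normal((n_boot, n))
    # Bootstrap analogue of sqrt(n) * (mean* - mean).
    boot_stats = (w * centered).sum(axis=1) / np.sqrt(n)
    # Data-driven critical value from the bootstrap distribution.
    q = np.quantile(np.abs(boot_stats), 1 - alpha)
    half_width = q / np.sqrt(n)
    return mean - half_width, mean + half_width

# Example: nominal 95% CI for the mean of a skewed sample.
x = np.random.default_rng(0).exponential(size=500)
print(multiplier_bootstrap_ci(x, seed=1))
```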
Using the Malliavin-Stein method, we establish Gaussian approximation bounds for these estimators.
Bottom line: time to compare SGD-trained NNs with RFs, not kernel methods!
However, under the MSP, greedy RFs are provably better than SGD-trained 2-NNs!
arxiv.org/abs/2411.04394
We show that if the true regression function satisfies the MSP, greedy training works well with 𝑂(log 𝑑) samples.
Otherwise, it struggles.
This settles the question of learnability for greedy recursive partitioning algorithms like CART.
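To see the intuition behind the MSP condition, here is a small illustrative sketch (my own toy setup, not an experiment from the paper): a greedy CART regressor easily picks up a staircase-type target, where every relevant coordinate carries some marginal signal, but struggles on a pure parity, where no single split is informative on its own.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 50
X = rng.choice([-1.0, 1.0], size=(n, d))  # Rademacher features

# Staircase-type target: each higher-order term shares coordinates with a
# lower-order one, so every coordinate has a detectable marginal effect.
y_stair = X[:, 0] + X[:, 0] * X[:, 1] + X[:, 0] * X[:, 1] * X[:, 2]

# Pure parity: no staircase structure, each coordinate alone carries zero
# marginal signal, so greedy split selection is blind at the root.
y_parity = X[:, 0] * X[:, 1] * X[:, 2]

for name, y in [("staircase", y_stair), ("parity", y_parity)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(Xtr, ytr)
    print(name, "test R^2:", round(tree.score(Xte, yte), 3))
```

One would expect the greedy tree to recover the staircase target almost perfectly (splitting on x0, then x1, then x2) while scoring near zero on the parity, which is the qualitative gap the MSP condition formalizes.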
But how do neural nets compare with random forest (RF) trained using greedy algorithms like CART?
🙋