Ben Grimmer
@profgrimmer.bsky.social
Assistant Professor @JohnsHopkinsAMS, working in mathematical optimization.
Mostly here to share pretty maths/3D prints, sometimes sharing my research
Enjoyed being part of the Brin Mathematical Research Center's summer school on Scientific Machine Learning last week. Many very good talks and always nice to visit UMD!
August 14, 2025 at 8:29 PM
You'll have to read the paper if you want the maths defining these extremal smoothings for any sublinear function and convex cone. I now have a whole family of optimal smoothing Russian nesting dolls living in my office.

Enjoy: arxiv.org/abs/2508.06681
August 12, 2025 at 2:40 PM
If instead you wanted the optimal outer smoothings (i.e., sets containing K), there is a similar spectrum of optimal smoothings: everything between the minimal and maximal sets shown below.
August 12, 2025 at 2:40 PM
If we restrict to inner smoothings (i.e., subsets of K), it turns out there are infinitely many sets attaining the optimal level of smoothness. Our theory identifies a minimal and a maximal such smoothing, shown below (the nesting dolls from before).
August 12, 2025 at 2:40 PM
To do something more nontrivial, consider the exponential cone K = {(x,y,z) | z >= y exp(x/y)}, which is foundational to geometric programming. The question: what is the smoothest set differing from this cone by at most distance one anywhere?
My 3D print of this cone is below :)
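If you want to poke at this cone numerically, here is a minimal membership check (my own sketch; the helper in_exp_cone and the tolerance handling are not from the post or paper):

```python
import math

def in_exp_cone(x, y, z, tol=1e-12):
    """Check membership in (the closure of) K = {(x, y, z) : y > 0, y*exp(x/y) <= z}.

    Hypothetical helper for illustration; the closure also contains the
    boundary piece {(x, 0, z) : x <= 0, z >= 0}.
    """
    if y > tol:
        return y * math.exp(x / y) <= z + tol
    if abs(y) <= tol:
        return x <= tol and z >= -tol
    return False

# A point on the boundary surface z = y*exp(x/y):
print(in_exp_cone(1.0, 2.0, 2.0 * math.exp(0.5)))  # True
```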
August 12, 2025 at 2:40 PM
For example, you could invent many smoothings of the two-norm (five given below). In this case, the Moreau envelope gives the optimal outer smoothing. If you wanted the best smoothing of the second-order cone (the epigraph of the two-norm), a different smoothing is optimal.
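For concreteness, the Moreau envelope of the two-norm is a standard computation (not specific to the paper): with parameter mu > 0 it is the Huber function,

```latex
\operatorname{env}_{\mu}\|\cdot\|_2(x)
  \;=\; \min_{y}\ \Bigl\{\|y\|_2 + \tfrac{1}{2\mu}\|x - y\|_2^2\Bigr\}
  \;=\;
  \begin{cases}
    \tfrac{1}{2\mu}\|x\|_2^2, & \|x\|_2 \le \mu,\\[4pt]
    \|x\|_2 - \tfrac{\mu}{2}, & \|x\|_2 > \mu,
  \end{cases}
```

whose gradient is 1/mu-Lipschitz while staying within mu/2 of the two-norm everywhere.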
August 12, 2025 at 2:40 PM
📢 Excited to share a new paper with PhD student Thabo Samakhoana. Nonsmooth optimization often uses smoothings: nearby smooth functions or sets, often chosen in an ad hoc fashion.

We do away with the ad hoc choices, characterizing optimal smoothings for convex cones and sublinear functions.
August 12, 2025 at 2:40 PM
In honor of the fun I've had playing with this puzzle and property, a homemade, ocean-themed, ceramic p=4/3 norm ball. Enjoy!
August 5, 2025 at 2:37 PM
As a cruel mathematician, I leave the task of verifying that the p=4/3 norm ball, rotated appropriately, fully and perfectly plugs a hole (equivalently, has a perfect circle as a shadow) as an exercise to the reader.

The dual of this wonderful property is that the 4-norm hides a circle :)
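One way to check the dual fact numerically (a sketch of mine, not from the thread): whenever x + y + z = 0 we have x^4 + y^4 + z^4 = (x^2 + y^2 + z^2)^2 / 2, so the slice of the 4-norm unit sphere by that plane is a circle of radius 2^{1/4}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points in the plane x + y + z = 0, scaled onto the 4-norm unit sphere.
u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
v = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
angles = rng.uniform(0, 2 * np.pi, size=1000)
pts = np.outer(np.cos(angles), u) + np.outer(np.sin(angles), v)
pts /= np.linalg.norm(pts, ord=4, axis=1, keepdims=True)

# If the slice is a circle, every point has the same two-norm (namely 2**0.25).
radii = np.linalg.norm(pts, axis=1)
print(radii.min(), radii.max())  # both ~ 1.1892 = 2**0.25
```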
August 5, 2025 at 2:37 PM
For good measure, one extra round of this physical verification process: adding a purple ball fully blocks our view of the green ball, entirely plugging the hole and saving our lives yet again.
August 5, 2025 at 2:37 PM
Don't believe me? We can put another green p=4/3-norm ball in the glass. Looking from above, you cannot see any of the blue ball past the green one. It entirely plugs the hole!
August 5, 2025 at 2:37 PM
To demonstrate this, suppose my glass is the hole in our boat; we can plug it entirely by placing a blue 4/3-norm ball in the glass.
August 5, 2025 at 2:37 PM
The solution to cork the hole is surprisingly, radically simple:
Just put the p=4/3 norm ball in the hole, appropriately rotated so the direction (1,1,1)/sqrt{3} is sent to (0,0,1).
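If you want to reproduce the rotation yourself, here is a minimal sketch using Rodrigues' rotation formula (the helper rotation_to_z is mine, not from the thread; it works whenever the source direction is not antipodal to (0,0,1)):

```python
import numpy as np

def rotation_to_z(a):
    """Hypothetical helper: rotation matrix sending unit vector a to e3 = (0, 0, 1),
    built via Rodrigues' rotation formula."""
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)
    e3 = np.array([0.0, 0.0, 1.0])
    v = np.cross(a, e3)          # rotation axis (unnormalized)
    c = a @ e3                   # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

R = rotation_to_z(np.array([1.0, 1.0, 1.0]) / np.sqrt(3))
print(R @ (np.array([1.0, 1.0, 1.0]) / np.sqrt(3)))  # ~ [0, 0, 1]
```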
August 5, 2025 at 2:37 PM
Yesterday I posted a maths puzzle that AIs all failed at (thanks for running the premium versions @xy-han.bsky.social and Ernest Ryu). The puzzle just needs elementary reasoning about p-norm balls (third row on my shelf below).

This thread gives the puzzle, solution, and a 3D printed demo :)
August 5, 2025 at 2:37 PM
My PhD students are awesome. They gave my fiancée (now wife) and me this gorgeous cherry blossom card for our wedding and upcoming honeymoon in Japan <3
March 15, 2025 at 12:54 PM
From the top row to the bottom, Figure 0 above has Schatten p-norms, vector p-norms, function p-norms, CVaR norms, their duals, OWL norms, and their duals.

Figure 1 on the other side of my office has induced p->q matrix norm balls. p goes 1 to inf left to right. q goes 1 to inf bottom to top.
March 12, 2025 at 7:52 PM
As an early wedding present (happening this Saturday!), my dad made me a custom shelf to hold my collection of unit norm balls!

Rockafellar+Wets's thick textbook is included for reference.
March 12, 2025 at 7:52 PM
This is all modeled with ideas of Hölder smoothness and uniform convexity.
Much to our surprise, we give "simple" optimal rates that look just like classic accelerated smooth (strongly) convex rates by *very* carefully aggregating all the heterogeneous structures!

Link: arxiv.org/abs/2503.07566
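For readers unfamiliar with the terminology, the standard definitions read roughly as follows (my paraphrase; the paper's exact conventions may differ): f is (L, nu)-Hölder smooth and (mu, q)-uniformly convex if for all x, y

```latex
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|^{\nu}, \qquad \nu \in (0,1],
\qquad\text{and}\qquad
f(y) \ge f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{\mu}{q}\,\|y - x\|^{q}, \qquad q \ge 2,
```

with nu = 1 and q = 2 recovering the classic smooth, strongly convex setting.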
March 11, 2025 at 2:00 PM
New (first) paper with my student Aaron Zoll :)
We consider first-order methods for a ridiculously general model: minimizing a convex composition of functions g_j(x) that vary heterogeneously in whether they are smooth, nonsmooth, convex, strongly convex or anything in between.
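One standard way to write such a composite model (my reading of the post, not necessarily the paper's exact formulation):

```latex
\min_{x}\; h\bigl(g_1(x), \ldots, g_m(x)\bigr),
```

where h is convex and nondecreasing in each argument, so the composition stays convex whenever every g_j is convex.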
March 11, 2025 at 2:00 PM
PhD students set up arts and crafts to make Valentine's mailboxes and collect cards. They (slide) rule :)
February 15, 2025 at 9:01 PM
Newest office addition might be the biggest computer in my department! (Assuming compute is measured by length)
January 29, 2025 at 12:17 PM
Continuing to use January's freedom, some exposition on OWL norms:
Their unit balls are all Catalan solids (every face is the same). So the dual balls are all Archimedean solids (every corner is the same).
www.ams.jhu.edu/~grimmer/OWL...
Files to make your own: www.printables.com/model/113805...
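For anyone meeting them for the first time, an OWL (ordered weighted L1) norm is easy to evaluate; a minimal sketch (the helper owl_norm is mine, not from the linked exposition):

```python
import numpy as np

def owl_norm(x, w):
    """Hypothetical helper: ordered weighted L1 (OWL) norm, sum_i w_i * |x|_[i],
    where |x|_[i] is the i-th largest entry of |x|.
    Assumes the weights w are nonincreasing and nonnegative."""
    x_sorted = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]
    return float(x_sorted @ np.asarray(w, dtype=float))

# Equal weights recover the L1 norm; a single leading weight recovers the max norm.
print(owl_norm([3.0, -1.0, 2.0], [1.0, 1.0, 1.0]))  # 6.0
print(owl_norm([3.0, -1.0, 2.0], [1.0, 0.0, 0.0]))  # 3.0
```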
January 8, 2025 at 1:57 PM
The way computers store real numbers (floating point) is exactly the same as how our grandparents did math mechanically: slide rules and log scales.

I'm mass producing binary slide rules to give students on day one of my "Intro to Computational Math" this Spring :)
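A tiny illustration of the analogy (my example, not part of the post): a float is stored as a mantissa times a power of two, so the exponent plays the role of the coarse log scale and multiplication becomes addition of exponents.

```python
import math

# frexp splits a float into mantissa * 2**exponent -- a coarse base-2 "log scale"
# plus a fine correction, roughly the two scales you read off a slide rule.
for value in (0.15625, 3.0, 1e6):
    mantissa, exponent = math.frexp(value)
    print(f"{value} = {mantissa} * 2**{exponent}")

# Multiplying numbers adds their exponents (slides the log scales past each other).
m1, e1 = math.frexp(3.0)
m2, e2 = math.frexp(5.0)
print(math.ldexp(m1 * m2, e1 + e2))  # 15.0
```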
December 29, 2024 at 9:04 PM
SPGM (and a variant using only a limited memory of old gradients) is surprisingly cheap to implement, requiring only the solution of a low-dimensional convex problem to plan each dynamically minimax-optimal step.

Numerically, it keeps up with BFGS(!) while sporting stronger theoretical guarantees.
December 10, 2024 at 3:02 PM
Our "Subgame Perfect Gradient Method" attains not only the best worst-case over all smooth convex problems but, at every iteration, the best worst-case over all smooth convex problems agreeing with the first-order info seen so far.

For x^2, it solves exactly in two steps.
December 10, 2024 at 3:02 PM