Keenan Crane
@keenancrane.bsky.social
Digital Geometer, Associate Professor of Computer Science & Robotics at Carnegie Mellon University. There are four lights.
https://www.cs.cmu.edu/~kmcrane/
Godspeed Michael.
November 5, 2025 at 3:11 AM
Hit me back in about 400k years*.

(*Approximate age of man-made fire.)
October 28, 2025 at 9:10 PM
…point of a flow: it’s obtained by minimizing the (square of the) “enclosed volume”, plus a regularity term that prevents self-intersections.

So, when gradient flows are concatenated, the eversion follows a “U” in the energy landscape rather than a “∩”.
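(Schematically, and using a weight λ and a generic regularizer R as my own shorthand, the surface in question minimizes something like

E(Σ) = Vol(Σ)² + λ·R(Σ),

where Vol(Σ) is the enclosed volume and R(Σ) is a self-avoidance term, e.g. a tangent-point-type energy.)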
October 22, 2025 at 5:26 AM
I didn’t look much into the history of midsurfaces for this eversion, but am curious to know what has been said by Gardner and others. This one is different in spirit from midsurfaces used for sphere eversion (like Kusner’s halfway surface) in the sense that it’s a stable rather than unstable…
October 22, 2025 at 5:23 AM
Very happy to see that NVIDIA is still making demos. 🟩👁️
October 22, 2025 at 5:06 AM
Out of curiosity, did you consider (or try) GLB/glTF? (Also supported by the Finder viewer.)
October 21, 2025 at 3:15 PM
Holy crap. What? Why? Who did that…? Amazing.
October 11, 2025 at 5:28 PM
Tangent-point energy works for (2).

To incorporate (1) I might (strongly) penalize the distance from each data point p to the *closest* point on the curve. This encourages at least one point of the curve to pass through each data point, without pulling on the whole curve.
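(A minimal numpy sketch of the kind of data term I have in mind, assuming the curve is discretized as a polyline of sample points; the function name and the weight w are placeholders for illustration, not any particular library's API.)

```python
import numpy as np

def data_attachment_penalty(curve_pts, data_pts, w=1e3):
    """Strongly penalize the squared distance from each data point
    to the *closest* point on the discretized curve.

    curve_pts: (N, d) array of curve sample points
    data_pts:  (M, d) array of data points
    w:         large weight, so the term acts as a soft constraint
    """
    # pairwise distances between data points and curve samples
    diff = data_pts[:, None, :] - curve_pts[None, :, :]   # (M, N, d)
    dist = np.linalg.norm(diff, axis=-1)                   # (M, N)
    # only the closest curve sample "feels" each data point,
    # so the term doesn't pull on the whole curve
    return w * np.sum(dist.min(axis=1) ** 2)
```

Adding this to the tangent-point energy and minimizing the sum would then encourage the curve to pass (nearly) through each data point while staying self-avoiding.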
September 24, 2025 at 12:33 AM
Thanks for the thought-provoking example. 😊
September 19, 2025 at 1:29 PM
Reminds me of the Kahneman and Tversky experiments (“Steve is more likely to be a librarian than a farmer.”) If LLMs are trained on human-generated text, it doesn’t seem reasonable to expect them to be smarter than the average text-generating human. (Though they sometimes are anyway.)
September 19, 2025 at 1:28 PM
On the other hand, I was too dumb to recognize the subtlety on first glance. So maybe the model is “just as bad as a human?”
September 19, 2025 at 1:27 PM
So, in the absence of any priors or additional information, 1/3 is a reasonable-ish approximation. But I agree it would be far better if the model simply said “that’s hard to answer because there are many ambiguous factors” (as I have).
September 19, 2025 at 1:26 PM
This one’s not so clear cut: “baby” is an ambiguous age range, and a baby can be a twin or triplet, born in any order. Even a newborn could have younger step siblings in rare cases.

We’re also presuming it’s a human baby, whereas other species have different life spans.
September 19, 2025 at 1:26 PM
Not seeing it. What’s wrong with this answer? (There are six possible permutations, but the other two siblings are interchangeable…)
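(Spelling out the count, under the usual assumption that all birth orders are equally likely: each of the baby’s three possible positions in the birth order accounts for 2 of the 3! = 6 permutations, so each position has probability 2/6 = 1/3.)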
September 17, 2025 at 9:47 PM
I adapted Unicodeit! (See the acknowledgment section on GitHub; I also meant to mention that in the footer.)

I had been using your website for years, but wanted something more integrated.

Thank you for contributing to open source. 😁
September 11, 2025 at 1:57 PM
(More seriously: if the geometry of the apples was well-captured by the artist, and the color is unique to that geometry, I would be willing to bet the answer is “yes.”)
September 6, 2025 at 11:09 PM
If it began life as a drawing, is that question even well-posed?
September 6, 2025 at 11:04 PM
Oh, you wrote a book on this stuff. I guess I didn't need to be quite so didactic in my response! ;-)
September 6, 2025 at 9:51 PM
(But I take your point: it's hard to get all these different nuances across precisely in diagrams. That's why we also have mathematical notation to go along with the diagrams! :-) )
September 6, 2025 at 9:50 PM
Well, f maps *any* point of the data space to the latent space, and g maps *any* point of the latent space to the data space. I.e.,

f : ℝⁿ → ℝᵏ,
g : ℝᵏ → ℝⁿ.

The point x is just one example. So it might in fact be misleading to imply that f gets applied only to x, or that g ends only at x̂.
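(A tiny sketch of what I mean, with arbitrary placeholder maps standing in for f and g; nothing here is meant to be a particular architecture.)

```python
import numpy as np

n, k = 8, 2                    # data dimension, latent dimension
A = np.random.randn(k, n)      # placeholder parameters for f
B = np.random.randn(n, k)      # placeholder parameters for g

def f(x):                      # encoder f : R^n -> R^k, defined for *any* x
    return A @ x

def g(z):                      # decoder g : R^k -> R^n, defined for *any* z
    return B @ z

x = np.random.randn(n)         # one example point in the data space
x_hat = g(f(x))                # the round trip x -> z -> x_hat
```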
September 6, 2025 at 9:49 PM
P.S. I should also mention that these diagrams were significantly improved via feedback from many folks from here and elsewhere.

Hopefully they account for some of the gripes—if not, I'm ready for the next batch! 😉

bsky.app/profile/keen...
I can't* fathom why the top picture, and not the bottom picture, is the standard diagram for an autoencoder.

The whole idea of an autoencoder is that you complete a round trip and seek cycle consistency—why lay out the network linearly?
September 6, 2025 at 9:20 PM