Keenan Crane
@keenancrane.bsky.social
Digital Geometer, Associate Professor of Computer Science & Robotics at Carnegie Mellon University. There are four lights.
https://www.cs.cmu.edu/~kmcrane/
I didn’t look much into the history of midsurfaces for this eversion, but am curious to know what has been said by Gardner and others. This one is different in spirit from midsurfaces used for sphere eversion (like Kusner’s halfway surface) in the sense that it’s a stable rather than unstable…
October 22, 2025 at 5:23 AM
“Fair dice” might make you think of perfect cubes with equal frequencies (say, 1/6 on all sides) 🎲

But “fair” really just means you get the frequencies you expect (say, 1/4, 1/4 & 1/2)

We can now design fair dice with any frequencies—and any shape! 🐉

hbaktash.github.io/projects/put...
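(To make “fair” concrete, here's a quick sanity-check sketch, separate from the paper: physically rolling the designed die is stood in for by sampling from the target distribution, and the face names and frequencies below are made up for illustration.)

```python
# Illustrative only: "fair" means the empirical face frequencies match the
# frequencies you designed for, whatever those happen to be.
import random
from collections import Counter

target = {"A": 0.25, "B": 0.25, "C": 0.5}   # desired frequencies, e.g. 1/4, 1/4, 1/2
n_rolls = 100_000

# Stand-in for rolling the physical die: sample faces with the target weights.
faces, weights = zip(*target.items())
rolls = random.choices(faces, weights=weights, k=n_rolls)

counts = Counter(rolls)
for face in faces:
    print(f"{face}: target {target[face]:.2f}, observed {counts[face] / n_rolls:.3f}")
```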
September 25, 2025 at 1:39 PM
I got tired of mashing together tools to write long threads with 𝐫𝐢𝐜𝐡 𝑓𝑜𝑟𝑚𝑎𝑡𝑡𝑖𝑛𝑔 and ℳα†ℏ—so I wrote La𝑇𝑤𝑒𝑒𝑡!

It converts Markdown and LaTeX to Unicode that can be used in “tweets”, and automatically splits long threads. Try it out!

keenancrane.github.io/LaTweet/
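(A standalone sketch of the core trick, not LaTweet's actual source: ASCII letters and digits sit at a fixed offset from the Unicode “Mathematical Bold” alphabet, so bold text that survives in plain-text posts is just a character-by-character remapping. This covers bold only; LaTweet also handles italics, Markdown, LaTeX, and thread splitting.)

```python
# Map ASCII letters/digits onto the Unicode "Mathematical Bold" alphabet.
BOLD_UPPER = 0x1D400  # MATHEMATICAL BOLD CAPITAL A
BOLD_LOWER = 0x1D41A  # MATHEMATICAL BOLD SMALL A
BOLD_DIGIT = 0x1D7CE  # MATHEMATICAL BOLD DIGIT ZERO

def bold(text: str) -> str:
    out = []
    for c in text:
        if "A" <= c <= "Z":
            out.append(chr(BOLD_UPPER + ord(c) - ord("A")))
        elif "a" <= c <= "z":
            out.append(chr(BOLD_LOWER + ord(c) - ord("a")))
        elif "0" <= c <= "9":
            out.append(chr(BOLD_DIGIT + ord(c) - ord("0")))
        else:
            out.append(c)  # leave spaces, punctuation, etc. untouched
    return "".join(out)

print(bold("rich formatting 123"))  # same text, rendered in Mathematical Bold
```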
September 11, 2025 at 1:28 PM
Likewise, here's a simpler “implementation” diagram that still retains the most important idea of an *auto*-encoder, namely, that you're comparing the output against *itself*.
September 6, 2025 at 9:20 PM
Personally, I find both of these diagrams a little bit crowded—here's a simpler “representation” diagram, with fewer annotations (that might anyway be better explained in accompanying text).
September 6, 2025 at 9:20 PM
Here's a way of visualizing the maps *defined by* an autoencoder.

The encoder f maps high-dimensional data x to low-dimensional latents z. The decoder tries to map z back to x. We *always* learn a k-dimensional submanifold M, which is reliable only where we have many samples z.
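(To make the two maps concrete, a minimal PyTorch sketch with made-up dimensions; the layer sizes are arbitrary and not tied to any particular dataset or to the diagram above.)

```python
# Read the autoencoder as two maps:  f : R^n -> R^k  and  g : R^k -> R^n,
# where the image of the decoder g is the learned k-dimensional submanifold M.
import torch
import torch.nn as nn

n, k = 784, 2  # data dimension and latent dimension (illustrative choices)

f = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, k))  # encoder
g = nn.Sequential(nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, n))  # decoder

x = torch.randn(5, n)   # a few data points in R^n
z = f(x)                # their latent codes in R^k
x_hat = g(z)            # points on the submanifold M = g(R^k)

print(z.shape, x_hat.shape)  # torch.Size([5, 2]) torch.Size([5, 784])
```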
September 6, 2025 at 9:20 PM
With autoencoders, the first (and last) picture we see often looks like this one: a network architecture diagram, where inputs get “compressed”, then decoded.

If we're lucky, someone bothers to draw arrows that illustrate the main point: we want the output to look like the input!
September 6, 2025 at 9:20 PM
A similar thing happens when (many) people learn linear algebra:

They confuse the representation (matrices) with the objects represented by those matrices (linear maps… or is it a quadratic form?)
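(A tiny numerical aside, not part of the thread's figures: the same linear map is represented by different matrices in different bases, so a matrix alone doesn't pin down the object it represents. The specific numbers below are arbitrary.)

```python
# One linear map, two matrix representations related by a change of basis.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # matrix of the map in the standard basis
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # change of basis (columns = new basis vectors)

B = np.linalg.inv(P) @ A @ P    # matrix of the *same* map in the new basis

v = np.array([1.0, 2.0])        # a vector, in standard-basis coordinates
w = np.linalg.inv(P) @ v        # the same vector, in new-basis coordinates

print(A @ v)                    # apply the map via one representation
print(P @ (B @ w))              # same result via the other: [4. 6.] both times
```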
September 6, 2025 at 9:20 PM
“Everyone knows” what an autoencoder is… but there's an important complementary picture missing from most introductory material.

In short: we emphasize how autoencoders are implemented—but not always what they represent (and some of the implications of that representation).🧵
September 6, 2025 at 9:20 PM
(And perhaps its description is more in line with its own abilities—rather than our poor human inability?)
September 6, 2025 at 9:13 PM
Ok, but also this is quite remarkable.

Like any savant, you have to roll with the quirks. 😉
September 6, 2025 at 9:12 PM
It’s quite common in many areas of mathematics to draw a 2D surface in R^3 as a proxy for a k-manifold in R^n, for arbitrary k and n.

Likewise, the height of the rectangles in the top diagram doesn’t literally correspond to the lengths of the data and latent vectors (the ratio is often more extreme).
August 31, 2025 at 2:57 PM
Funny. I just… responded to a message eerily similar to this on another social media network! 😜
August 30, 2025 at 4:48 AM
Here I try to convey key ideas, like:

- The encoder tries to compress the data into a lower-dimensional space (left to right)
- The decoder attempts to invert the encoder (right to left)
- There's inevitable error in the reconstruction from a latent code (dashed line between x and x̂; see the short sketch below)
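(The short sketch mentioned above, with arbitrary dimensions and random stand-in data, just to show the round trip x → z → x̂ and the reconstruction error being minimized.)

```python
# Compress (left to right), attempt to invert (right to left), and measure
# the reconstruction error ||x - x_hat||^2 -- the dashed line in the diagram.
import torch
import torch.nn as nn

encoder = nn.Linear(100, 10)   # R^100 -> R^10
decoder = nn.Linear(10, 100)   # R^10  -> R^100

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(64, 100)                 # a batch of stand-in data
for step in range(200):
    x_hat = decoder(encoder(x))          # round trip x -> z -> x_hat
    loss = ((x - x_hat) ** 2).mean()     # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # nonzero: information is lost through the 10-dimensional bottleneck
```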
August 29, 2025 at 10:46 PM
*Of course, in reality I do know why people use this diagram: it fits into a common visual language used for neural networks.

But it misses some critical features (like cycle consistency). And often adds other nutty stuff—like drawing functions as complete bipartite graphs!
August 29, 2025 at 10:46 PM
I can't* fathom why the top picture, and not the bottom picture, is the standard diagram for an autoencoder.

The whole idea of an autoencoder is that you complete a round trip and seek cycle consistency—why lay out the network linearly?
August 29, 2025 at 10:46 PM
I don't have a strong opinion about whether video models “understand the world.”

But I do think the first bar should be checking whether you can recover consistent geometry from video—not whether it makes accurate predictions of physics.
August 12, 2025 at 6:56 PM
Working on a Walk on Spheres tutorial for #SIGGRAPH2025, and love the ads I'm getting served. 😂

Stay tuned for more…👣
June 29, 2025 at 9:52 AM
Quick “teaser” for a fun #SIGGRAPH2025 project, led by Hossein Baktash, on optimizing a shape to have the desired rolling statistics.

Basically we can turn arbitrary objects into fair dice, or make dice which capture the statistics of other objects—like several coin flips.
June 28, 2025 at 3:05 AM
Making good on this promise—in the fastest turnaround time ever—my collaborator Etienne Corman has just posted MATLAB code for #RectangularSurfaceParameterization here:

github.com/etcorman/Rec...

(C++ version is still in the works…)
June 26, 2025 at 3:34 PM
Code and other information coming soon; for now you can read the paper here:

www.cs.cmu.edu/~kmcrane/Pro...

And find some supplemental information—including pseudocode—here:

www.cs.cmu.edu/~kmcrane/Pro...
June 26, 2025 at 3:03 PM
Meshes with 90° angles are super useful, providing asymptotically faster convergence for finite element simulation, and optimal shape approximation (when aligned with curvature).

Amazingly, no past quad meshing method could guarantee 90° angles under refinement—until now. #RSP
June 26, 2025 at 3:03 PM
Very happy that Jiří Minarčík will join our research group at @cmu.edu, the Geometry Collective, as a Fulbright visiting scholar! 🥳

Jiří is a world expert in space curves, and one of the core contributors to Penrose (penrose.cs.cmu.edu). Check out his beautiful work here: minarcik.com
June 23, 2025 at 11:14 AM
Folks in the #SIGGRAPH community:

You may or may not be aware of the controversy around the next #SIGGRAPHAsia location, summarized here www.cs.toronto.edu/~jacobson/we...

If you're concerned, consider signing this letter docs.google.com/document/d/1...
via this form
docs.google.com/forms/d/e/1F...
June 20, 2025 at 4:13 PM
Seems the paper link may be causing trouble for some.

Here’s an alternate link: cs.cmu.edu/~kmcrane/Pro...
June 20, 2025 at 12:39 PM