Rana Hanocka
@ranahanocka.bsky.social
Assistant Professor @uchicago @uchicagocs. PhD from @TelAvivUni. Interested in computer graphics, machine learning, & computer vision 🤖
Reposted by Rana Hanocka
Excited to share our #ICCV2025 work "Reusing Computation in Text-to-Image Diffusion for Efficient Generation of Image Sets"!

Our method generates large sets of images using significantly less compute than standard diffusion.

📎 https://ddecatur.github.io/hierarchical-diffusion/

1/
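For intuition, here is a minimal sketch of the compute-reuse idea, assuming a generic denoiser; `denoise_step` is a hypothetical stand-in, and the paper's actual hierarchical method may organize the sharing differently:

```python
import torch

# Hypothetical sketch only: share the early denoising steps across the
# whole image set, then branch per prompt for the remaining steps.
def generate_image_set(prompts, denoise_step, total_steps=50, shared_steps=30):
    latent = torch.randn(1, 4, 64, 64)  # one common starting latent

    # Shared phase: computed once for the entire set.
    for t in range(total_steps, total_steps - shared_steps, -1):
        latent = denoise_step(latent, t, prompt=None)

    # Branching phase: each prompt refines its own copy of the latent,
    # so only the tail of the trajectory is paid per image.
    images = []
    for prompt in prompts:
        branch = latent.clone()
        for t in range(total_steps - shared_steps, 0, -1):
            branch = denoise_step(branch, t, prompt=prompt)
        images.append(branch)
    return images
```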
October 22, 2025 at 8:23 PM
This work was led by Sining Lu and Guan Chen, in collaboration with Nam Anh Dinh, Itai Lang, Ari Holtzman, and me.

Check out our paper: arxiv.org/abs/2508.08228
We're still actively developing LL3M, and we'd love to hear your thoughts! 7/
August 15, 2025 at 4:16 AM
Another cool thing about LL3M: the Blender code it writes is actually readable. Clear structure, detailed comments, intuitive variable names. Easy to tweak a single parameter (e.g. key width) or even change the algorithmic logic (e.g. the keyboard pattern). 6/
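A hypothetical snippet in that spirit (illustrative, not actual LL3M output): changing `key_width` or `row_stagger` below is exactly the kind of one-line edit described.

```python
import bpy

# Named parameters up top make single-value tweaks trivial.
key_width = 0.9      # edit this one value to widen every key
key_depth = 0.9
key_height = 0.3
key_gap = 0.1
rows, cols = 4, 10
row_stagger = 0.25   # horizontal offset per row -- the "keyboard pattern"

for row in range(rows):
    for col in range(cols):
        x = col * (key_width + key_gap) + row * row_stagger
        y = row * (key_depth + key_gap)
        bpy.ops.mesh.primitive_cube_add(size=1.0, location=(x, y, key_height / 2))
        key = bpy.context.object
        key.scale = (key_width, key_depth, key_height)
        key.name = f"key_r{row}_c{col}"
```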
August 15, 2025 at 4:16 AM
LL3M generates 3D assets in 3 phases with specialized agents:
1๏ธโƒฃ Initial Creation โ†’ break prompt into subtasks, retrieve relevant code snippets (BlenderRAG)
2๏ธโƒฃ Auto-refine โ†’ critic spots issues, verification checks fixes
3๏ธโƒฃ User-guided โ†’ iterative edits via user feedback
5/
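A rough sketch of that loop (every name here -- plan, retrieve_snippets, critic, and so on -- is a hypothetical placeholder, not LL3M's real interface):

```python
def create_asset(prompt, llm, blender, max_refinements=3):
    # Phase 1: initial creation -- decompose the prompt into subtasks and
    # ground each one in retrieved Blender snippets (BlenderRAG).
    subtasks = llm.plan(prompt)
    snippets = [llm.retrieve_snippets(task) for task in subtasks]
    code = llm.write_code(prompt, subtasks, snippets)

    # Phase 2: auto-refine -- a critic flags issues in the rendered result;
    # a verification step checks each fix before accepting it.
    for _ in range(max_refinements):
        issues = llm.critic(prompt, blender.render(code))
        if not issues:
            break
        fixed = llm.fix(code, issues)
        if llm.verify(blender.render(fixed), issues):
            code = fixed

    # Phase 3: user-guided -- apply follow-up feedback as targeted edits.
    while feedback := input("Edit request (blank to finish): "):
        code = llm.edit(code, feedback)
    return code
```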
August 15, 2025 at 4:16 AM
LL3M can create a wide range of shapes without requiring specialized 3D datasets or fine-tuning. Every asset created is represented under the hood as editable Blender code. 4/
August 15, 2025 at 4:16 AM
Even non-experts can jump right in and easily edit 3D shapes. Blender code created by LL3M generates a node graph that is packed with tunable parameters, enabling users to tweak colors, textures, patterns, lengths, heights, and more. 3/
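For example, a generated script might set up a node tree along these lines (illustrative bpy code, not actual LL3M output); each named value at the top is a knob a user can turn:

```python
import bpy

base_color = (0.8, 0.2, 0.2, 1.0)   # RGBA -- edit to recolor
roughness = 0.4                      # 0 = glossy, 1 = matte
checker_scale = 5.0                  # pattern density

mat = bpy.data.materials.new(name="ll3m_material")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# The default node tree already contains a Principled BSDF.
bsdf = nodes["Principled BSDF"]
bsdf.inputs["Roughness"].default_value = roughness

# Feed a tunable checker pattern into the base color.
checker = nodes.new("ShaderNodeTexChecker")
checker.inputs["Color1"].default_value = base_color
checker.inputs["Scale"].default_value = checker_scale
links.new(checker.outputs["Color"], bsdf.inputs["Base Color"])
```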
August 15, 2025 at 4:16 AM
What we ❤️ about LL3M: You're in the loop! If you want to make a tweak, LL3M can be your collaborative 3D design partner. And there's no need to regenerate the entire model each time: target a specific part, provide follow-up prompts, and the rest stays intact. 2/
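One plausible way to picture this, assuming the generated script keeps each part in its own function (hypothetical example, not LL3M's actual output):

```python
import bpy

def build_mug_body(radius=1.0, height=1.2):
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=height)
    return bpy.context.object

def build_mug_handle(thickness=0.1, loop_radius=0.45):
    # A follow-up prompt like "make the handle chunkier" would regenerate
    # only this function (e.g. bumping `thickness`), leaving the body code
    # untouched.
    bpy.ops.mesh.primitive_torus_add(major_radius=loop_radius,
                                     minor_radius=thickness,
                                     location=(1.0, 0.0, 0.6))
    return bpy.context.object

body = build_mug_body()
handle = build_mug_handle()
```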
August 15, 2025 at 4:16 AM
We've been building something we're 𝑟𝑒𝑎𝑙𝑙𝑦 excited about – LL3M: LLM-powered agents that turn text into editable 3D assets. LL3M models shapes as interpretable Blender code, making geometry, appearance, and style easy to modify. 🔗 threedle.github.io/ll3m 1/
August 15, 2025 at 4:16 AM
Our work “Geometry in Style” will be presented at #CVPR2025 on Sunday at 4pm in ExHall D, poster 219. Drop by and say hi!

Our technique is capable of performing expressive text-driven deformations that preserve the input shape identity.
1/
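The generic recipe behind such methods, heavily simplified and not the paper's actual algorithm: optimize a deformation against a text-image similarity score while a regularizer preserves the input's identity. `render_fn` and `clip_score` are hypothetical stand-ins for a differentiable renderer and a text-image similarity model.

```python
import torch

def deform(vertices, render_fn, clip_score, text, steps=200, lam=0.1):
    offsets = torch.zeros_like(vertices, requires_grad=True)
    optimizer = torch.optim.Adam([offsets], lr=1e-3)
    for _ in range(steps):
        image = render_fn(vertices + offsets)   # differentiable render
        style_loss = -clip_score(image, text)   # pull toward the prompt
        identity_loss = offsets.pow(2).mean()   # stay close to the input
        loss = style_loss + lam * identity_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (vertices + offsets).detach()
```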
June 15, 2025 at 6:33 PM
Reposted by Rana Hanocka
Big update from The Workshop on Computer Vision For Mixed Reality @ CVPR 2025:

๐Ÿ“ Papers and schedule are now up!

Our Speakers:
Richard Newcombe (Meta)
Anjul Patney (NVIDIA)
Rana Hanocka (UChicago)
Laura Leal-Taixé (NVIDIA)
Margarita Grinvald (Meta)

📅 June 11, 8 AM 📍 Room 109.

cv4mr.github.io
June 3, 2025 at 3:50 PM
Congrats to Brian and team on the Best Paper Honorable Mention at #3DV2025 🥳🎉
Brian (hywkim-brian.github.io/site/) is starting his PhD next year at Columbia with @silviasellan.bsky.social. Keep an eye out for their awesome work 📈📈📈
The Best Paper Honorable Mention Award goes to MeshUp!

#3DV2025
March 26, 2025 at 3:53 AM
Reposted by Rana Hanocka
My colleague Rana Hanocka does exciting work on the frontier for 3D graphics and AI/ML, and she's recruiting PhD students this cycle!
December 11, 2024 at 6:18 AM