Rana Hanocka
@ranahanocka.bsky.social
Assistant Professor @uchicago @uchicagocs. PhD from @TelAvivUni. Interested in computer graphics, machine learning, & computer vision 🤖
This work was led by Sining Lu and Guan Chen, in collaboration with Nam Anh Dinh, Itai Lang, Ari Holtzman, and me.

Check out our paper: arxiv.org/abs/2508.08228
We’re still actively developing LL3M, and we’d love to hear your thoughts! 7/
August 15, 2025 at 4:16 AM
Another cool thing about LL3M: the Blender code it writes is actually readable. Clear structure, detailed comments, intuitive variable names. Easy to tweak a single parameter (e.g. key width) or even change the algorithmic logic (e.g. the keyboard pattern). 6/
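For a flavor of that style, here is a hypothetical snippet (illustrative, not actual LL3M output): key width is a single named constant, and the grid loop is the "pattern" logic you could rewrite.

import bpy

# --- Tunable parameters (names are illustrative) ---
KEY_WIDTH = 0.9    # width of each keycap; the single knob mentioned above
KEY_DEPTH = 0.9    # front-to-back size of each keycap
KEY_HEIGHT = 0.25  # how far keys rise above the base
KEY_SPACING = 1.0  # center-to-center distance between keys
NUM_ROWS, NUM_COLS = 4, 10

# The keyboard "pattern": a plain grid. Swapping this loop for, say,
# a staggered layout changes the algorithmic logic without touching
# the parameters above.
for row in range(NUM_ROWS):
    for col in range(NUM_COLS):
        bpy.ops.mesh.primitive_cube_add(
            size=1.0,
            location=(col * KEY_SPACING, row * KEY_SPACING, KEY_HEIGHT / 2),
        )
        key = bpy.context.active_object
        key.scale = (KEY_WIDTH, KEY_DEPTH, KEY_HEIGHT)
        key.name = f"Key_r{row}_c{col}"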
LL3M generates 3D assets in 3 phases with specialized agents:
1️⃣ Initial creation → break the prompt into subtasks, retrieve relevant code snippets (BlenderRAG)
2️⃣ Auto-refine → a critic spots issues, a verifier checks the fixes
3️⃣ User-guided → iterative edits via user feedback
5/
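In Python, the control flow looks roughly like this. Every function here is a hypothetical stub standing in for an agent; the paper's actual interfaces may differ. This sketch only shows the three-phase loop.

def decompose(prompt):
    # Planner: break the prompt into subtasks.
    return [prompt]

def retrieve_snippets(subtasks):
    # BlenderRAG: fetch relevant Blender API snippets per subtask.
    return []

def write_code(prompt, subtasks, snippets):
    # Coder: draft Blender Python for the asset.
    return f"# bpy code for: {prompt}"

def critique(code):
    # Critic: inspect the result and list issues (empty = done).
    return []

def fix_code(code, issues):
    # Coder: revise the code to address the issues.
    return code

def verify(code, issues):
    # Verifier: confirm the fix actually resolved the issues.
    return True

def generate_asset(prompt, user_feedback=(), max_refinements=3):
    # Phase 1: initial creation
    subtasks = decompose(prompt)
    code = write_code(prompt, subtasks, retrieve_snippets(subtasks))
    # Phase 2: automatic refinement
    for _ in range(max_refinements):
        issues = critique(code)
        if not issues:
            break
        candidate = fix_code(code, issues)
        if verify(candidate, issues):
            code = candidate
    # Phase 3: user-guided edits via follow-up prompts
    for feedback in user_feedback:
        code = fix_code(code, [feedback])
    return code

print(generate_asset("a computer keyboard"))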
LL3M can create a wide range of shapes without requiring specialized 3D datasets or fine-tuning. Every asset is represented under the hood as editable Blender code. 4/
Even non-experts can jump right in and easily edit 3D shapes. The Blender code LL3M writes builds a node graph packed with tunable parameters, letting users tweak colors, textures, patterns, lengths, heights, and more. 3/
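As a flavor of what such tunable parameters look like in code, here is a minimal bpy sketch (hypothetical, not LL3M output). It assumes a mesh object is active; the socket names are standard Principled BSDF inputs.

import bpy

# Minimal sketch: expose color and roughness as two obvious knobs.
obj = bpy.context.active_object  # assumes a mesh object is selected
mat = bpy.data.materials.new(name="KeycapMaterial")
mat.use_nodes = True  # builds a node graph with a Principled BSDF

bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.10, 0.40, 0.80, 1.0)  # RGBA
bsdf.inputs["Roughness"].default_value = 0.35  # lower = glossier

obj.data.materials.append(mat)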
What we ❤️ about LL3M: You're in the loop! If you want to make a tweak, LL3M can be your collaborative 3D design partner. And there's no need to regenerate the entire model each time: target a specific part, provide follow-up prompts, and the rest stays intact. 2/