**ReSpace: Text-Driven 3D Scene Synthesis and Editing with Preference Alignment**

Add, remove, and swap objects simply via natural language, e.g., "add tufted dark gray sofa". (1/8)

Lightweight, interpretable, and editable. We then formulate scene synthesis and editing as next-token prediction. (2/8)
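
For intuition, a minimal sketch of what "scene editing as next-token prediction" could look like: the scene is serialized as structured text, the instruction is appended, and the model continues with tokens describing the new object. The schema, field names, and prompt format below are assumptions for illustration, not the paper's exact representation.

```python
# Sketch: scene editing as next-token prediction over a structured text
# serialization. Schema and prompt format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class SceneObject:
    category: str          # semantic class, e.g. "sofa"
    description: str       # free-form attributes, e.g. "tufted dark gray"
    position: tuple        # (x, y, z) in room coordinates, meters
    size: tuple            # (width, depth, height), meters
    rotation_deg: float    # yaw around the vertical axis

def serialize_scene(room_type: str, objects: list[SceneObject]) -> str:
    """Flatten the scene into a JSON string the LLM can read and extend."""
    return json.dumps(
        {"room_type": room_type, "objects": [asdict(o) for o in objects]},
        indent=2,
    )

def build_prompt(scene_text: str, instruction: str) -> str:
    """Prompt = current scene + natural-language edit; the model then decodes
    the tokens of the object to add (or the edit to apply)."""
    return f"SCENE:\n{scene_text}\nINSTRUCTION: {instruction}\nNEXT_OBJECT:"

scene = [
    SceneObject("bed", "queen-size wooden frame", (2.1, 0.0, 1.5), (1.6, 2.0, 0.5), 0.0),
]
prompt = build_prompt(serialize_scene("bedroom", scene), "add tufted dark gray sofa")
print(prompt)
# An SFT-trained model would continue token by token, e.g.
# {"category": "sofa", "description": "tufted dark gray", ...}
```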

We train SG-LLM via SFT+GRPO, the first to apply preference alignment with verifiable rewards for 3D scene synthesis. (3/8)
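
To make "verifiable rewards" concrete, here is a hedged sketch: programmatic geometric checks (objects in bounds, no heavy overlaps) score each sampled scene, and GRPO turns a group of such scores into group-relative advantages. The specific checks and thresholds are illustrative assumptions, not the paper's exact reward design.

```python
# Sketch of a verifiable reward for sampled scenes plus GRPO's
# group-relative advantage. Checks and thresholds are assumptions.

def iou_2d(a, b):
    """Footprint overlap of two axis-aligned boxes (x_min, z_min, x_max, z_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iz = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iz
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def verifiable_reward(objects, room):
    """Programmatic checks a sampled scene either passes or fails:
    every footprint inside the room, no pair heavily overlapping."""
    in_bounds = all(
        o[0] >= room[0] and o[1] >= room[1] and o[2] <= room[2] and o[3] <= room[3]
        for o in objects
    )
    no_collision = all(
        iou_2d(objects[i], objects[j]) < 0.05
        for i in range(len(objects)) for j in range(i + 1, len(objects))
    )
    return float(in_bounds) + float(no_collision)   # reward in {0, 1, 2}

def grpo_advantages(group_rewards):
    """GRPO normalizes each sample against its own group:
    advantage_i = (r_i - mean(group)) / std(group)."""
    mean = sum(group_rewards) / len(group_rewards)
    var = sum((r - mean) ** 2 for r in group_rewards) / len(group_rewards)
    std = var ** 0.5 or 1.0
    return [(r - mean) / std for r in group_rewards]

# One prompt, a group of sampled scenes, their verifiable rewards:
room = (0.0, 0.0, 4.0, 3.0)
samples = [
    [(0.2, 0.2, 1.8, 1.2), (2.0, 0.5, 3.5, 2.5)],   # valid layout
    [(0.2, 0.2, 1.8, 1.2), (1.0, 0.5, 3.0, 2.5)],   # overlapping objects
    [(3.5, 0.2, 5.0, 1.2)],                          # out of bounds
]
rewards = [verifiable_reward(s, room) for s in samples]
print(rewards, grpo_advantages(rewards))
```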

Test-time scaling with Best-of-N shows further potential to improve preference alignment on this task! (7/8)
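
A small sketch of Best-of-N at inference time: draw N candidate scenes from the model, score each with a checker, keep the highest-scoring one. The toy sampler and scorer below are placeholders standing in for the trained SG-LLM and the actual reward, not the paper's implementation.

```python
# Sketch of Best-of-N test-time scaling. `sample_scene` is a placeholder
# for one stochastic decode from the scene LLM; the scorer is illustrative.
import random

def sample_scene(prompt: str, seed: int) -> list[tuple]:
    """Placeholder: one random candidate scene as 2D footprints."""
    rng = random.Random(seed)
    boxes = []
    for _ in range(rng.randint(1, 4)):
        x, z = rng.uniform(0, 3), rng.uniform(0, 2)
        boxes.append((x, z, x + rng.uniform(0.5, 1.5), z + rng.uniform(0.5, 1.5)))
    return boxes

def score(scene: list[tuple], room=(0.0, 0.0, 4.0, 3.0)) -> float:
    """Fraction of objects whose footprint stays inside the room."""
    ok = sum(
        1 for o in scene
        if o[0] >= room[0] and o[1] >= room[1] and o[2] <= room[2] and o[3] <= room[3]
    )
    return ok / len(scene)

def best_of_n(prompt: str, n: int = 8) -> list[tuple]:
    """Sample n candidates and return the one the scorer likes best."""
    candidates = [sample_scene(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

best = best_of_n("bedroom with a tufted dark gray sofa", n=8)
print(len(best), "objects, score", score(best))
```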