Xiangru (Edward) Jian
edwardjian.bsky.social
CS PhD student at University of Waterloo. Visiting Researcher at ServiceNow Research. Working on AI and DB.
Interacting with desktop GUIs remains a challenge.
🖱️ Models struggle with click-and-drag actions due to poor grounding and limited motion understanding.
🏆 UI-TARS leads across models!
🧠 Closed models (GPT-4o, Claude, Gemini) excel at planning but fail to localize.
March 24, 2025 at 5:08 PM
Detecting functional UI regions is tough!
🤖 Even top GUI agents miss functional regions.
🏆 Closed-source VLMs shine with stronger visual understanding.
📉 Cluttered UIs bring down IoU.
🚀 We’re the first to propose this task.
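The IoU (Intersection over Union) score mentioned above measures how well a predicted region overlaps a ground-truth region. A minimal sketch of the standard computation (this is the generic metric, not the benchmark's exact evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

In a cluttered UI, a prediction that lands on a neighboring element still yields near-zero IoU, which is why dense layouts drag the metric down.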
Grounding UI elements is challenging!
🤖 Even top VLMs struggle with fine-grained GUI grounding.
📊 GUI agents like UI-TARS (25.5%) & UGround (23.2%) do better but still fall short.
⚠️ Small elements, dense UIs, and limited domain/spatial understanding are major hurdles.
We propose three key benchmark tasks to evaluate GUI Agents
🔹 Element Grounding – Locate a UI element from a textual description
🔹 Layout Grounding – Understand UI layout structure & group elements
🔹 Action Prediction – Predict the next action given a goal, past actions & screen state
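To make the three tasks concrete, here is a sketch of what one instance of each might look like as data. All field names and values here are illustrative assumptions, not the benchmark's actual schema:

```python
# Hypothetical task instances; field names are illustrative only.

# Element grounding: given a description, output the element's bounding box.
element_grounding = {
    "instruction": "Click the 'Export as PDF' button",
    "screenshot": "frame_001.png",
    "target_bbox": (412, 88, 503, 112),  # expected answer
}

# Layout grounding: identify a functional region of the screen.
layout_grounding = {
    "screenshot": "frame_001.png",
    "region_name": "toolbar",
    "target_bbox": (0, 24, 1280, 72),  # expected answer
}

# Action prediction: given goal, history, and screen, output the next action.
action_prediction = {
    "goal": "Export the document as a PDF",
    "history": [("click", (120, 40)), ("click", (412, 100))],
    "screenshot": "frame_002.png",
    # expected answer: next action, e.g. ("click", (x, y)) or ("drag", ...)
}
```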
UI-Vision consists of
✅ 83 open-source desktop apps across 6 domains
✅ 450 human demonstrations of computer-use workflows
✅ Dense human-annotated bounding boxes for UI elements and rich action trajectories
Most GUI benchmarks focus on web or mobile.
🖥️ But what about desktop software, where most real work happens?
UI-Vision fills this gap by providing a large-scale benchmark with diverse and dense annotations to systematically evaluate GUI agents.
🚀 Super excited to announce UI-Vision: the largest and most diverse desktop GUI benchmark for evaluating agents in real-world desktop GUIs in offline settings.

📄 Paper: arxiv.org/abs/2503.15661
🌐 Website: uivision.github.io

🧵 Key takeaways 👇