Martin Ziqiao Ma
@marstin.bsky.social
https://mars-tin.github.io

phd<<<1,1>>>(UMich);
ex<<<3,1>>>({MIT_IBM_Watson, Adobe, Amazon});

Make the community better @ACLMentorship @GrowAI

Herbarium Lover, Fortune Teller, Pokémon Trainer, Szechuan Cuisine Chef.
Congratulations!!
July 22, 2025 at 5:35 AM
with @fredashi.bsky.social / Jiayuan Mao / @djiafei.bsky.social / @manlingli.bsky.social / David Hsu / Parisa Kordjamshidi
July 14, 2025 at 8:16 PM
Reposted by Martin Ziqiao Ma
& @tianminshu.bsky.social (+ @marstin.bsky.social, @zhitinghu.bsky.social, ‪@lianhui.bsky.social & more) will present “SimWorld: A World Simulator for Scaling Photorealistic Multi-Agent Interactions,” an @unrealengine.bsky.social-based sim that generates unlimited/diverse urban environments: (13/14)
SimWorld: A World Simulator for Scaling Photorealistic Multi-Agent Interactions
simworld-cvpr2025.maitrix.org
June 10, 2025 at 7:45 PM
We introduce RefOI, a new dataset of 1.5k objects, each with 3 written and 2 spoken human-produced referring expressions. We also release RefOI-TLHF, a large dataset of token-level human feedback for 10.6k referring expressions.

👀https://vlm-reg.github.io/
📄https://arxiv.org/abs/2504.16060
VLMs Are Not Pragmatically Competent in Referring Expression Generation
VLMs fail to refer like humans. Our study reveals widespread pragmatic issues in GPT-4o, LLaVA, and others, showing how their expressions often violate Gricean maxims.
vlm-reg.github.io
April 23, 2025 at 5:55 PM
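For readers curious about the data layout described above, here is a minimal Python sketch of how one RefOI object and one RefOI-TLHF feedback entry might be represented. The field names, label values, and contents are illustrative assumptions, not the released schema; see vlm-reg.github.io for the actual format.

# Hypothetical sketch only: field names, label values, and example contents are
# assumptions, not the official RefOI / RefOI-TLHF schema (see vlm-reg.github.io).
from dataclasses import dataclass, field
from typing import List

@dataclass
class RefOIObject:
    """One annotated object: 3 written + 2 spoken human referring expressions."""
    image_id: str
    object_id: str
    written_expressions: List[str] = field(default_factory=list)  # 3 per object
    spoken_expressions: List[str] = field(default_factory=list)   # 2 per object (transcribed)

@dataclass
class TokenFeedback:
    """Token-level human feedback for one referring expression (RefOI-TLHF)."""
    expression: str        # referring expression being judged
    tokens: List[str]      # tokenized expression
    token_labels: List[str]  # per-token judgment, e.g. "necessary" / "redundant"

# Example usage with made-up content:
obj = RefOIObject(
    image_id="img_000001",
    object_id="obj_01",
    written_expressions=["the red mug on the left", "leftmost mug", "red cup near the laptop"],
    spoken_expressions=["the red mug", "that mug on the left"],
)
fb = TokenFeedback(
    expression="the small red ceramic mug on the left side of the table",
    tokens="the small red ceramic mug on the left side of the table".split(),
    token_labels=["necessary"] * 5 + ["redundant"] * 7,
)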
🔹 Workshop Paper at World Models:
Do Vision-Language Models Have Internal World Models?
🗓 Apr 27, 9 p.m. (Peridot 201 & 206)

Paper: openreview.net/forum?id=tpP...

Excited for this collaboration with MaitrixOrg, details coming soon :)
Do Vision-Language Models Have Internal World Models? Towards an...
Internal world models (WMs) enable agents to understand the world's state and predict transitions, serving as the basis for advanced deliberative reasoning. Recent large Vision-Language Models...
openreview.net
April 19, 2025 at 1:53 AM
🔹 ICLR BiAlign Workshop:
We’re hosting the Bidirectional Human-AI Alignment Workshop (BiAlign).
🗓 Apr 28 (Garnet 216–214)

Website: bialign-workshop.github.io

I’ll join remotely — huge thanks to @huashen.bsky.social for leading this!
April 19, 2025 at 1:53 AM
🔹 ICLR Oral Paper:
Do Vision-Language Models Represent Space and How?

🗓 Oral: Apr 25, 3:42–3:54 a.m. (Session 4C)
🗓 Poster: Thu, Apr 24, 10 p.m.–12:30 a.m. (Hall 3 + 2B, #212)

Website: spatial-comfort.github.io

Big thanks to @fredashi.bsky.social for presenting on site!
April 19, 2025 at 1:53 AM
📄 View the full list of accepted papers: bialign-workshop.github.io#/papers

We look forward to seeing you there!
BiAlign: ICLR'25 Workshop on Bidirectional Human-AI Alignment
The official website for the ICLR BiAlign: Workshop on Bidirectional Human-AI Alignment
bialign-workshop.github.io
April 15, 2025 at 8:55 PM
🎉 Out of these, 72 papers were accepted, including 5 tiny papers. 10 papers were selected for oral presentations: 2 at CHI and 8 at ICLR. Award winners will be announced during the workshop!
April 15, 2025 at 8:55 PM
📬 We received over 100 submissions, each reviewed by 2–4 expert reviewers, with ethical assessments included when appropriate. Our program committee features leading researchers in NLP, RL, HCI, ML, and AI/ML Ethics, carefully selected based on scholarly merit and expertise.
April 15, 2025 at 8:55 PM
🙏 Special thanks to Tammy Masterson, Technical Partnerships Lead at the AI Security Institute, who will be joining us as a panelist.
April 15, 2025 at 8:55 PM
🙏 We are grateful to our gold sponsors, Prolific and Layer 6 AI of TD Bank Group, for their generous support in funding paper awards and travel grants.
April 15, 2025 at 8:55 PM