ex({MIT_IBM_Watson, Adobe, Amazon});
Make the community better @ACLMentorship @GrowAILikeChild
Herborium Lover, Fortune Teller, Pokémon Trainer, Szechuan Cuisine Chef.
https://mars-tin.github.io
An Open-Notebook Exploration of Emergent Grounding in LMs mars-tin.github.io/blogs/posts/...
Jane and Joyce will be presenting our work. :)
Jane is an exceptional undergraduate researcher and a great collaborator! Go meet her at COLM if you’re curious about her work on mechanistic interpretability, multimodality, & pragmatics!
We identify 3 key failures of pragmatic competence in referring expression generation with VLMs: (1) cannot uniquely refer to the referent, (2) include excessive or irrelevant information, and (3) misalign with human pragmatic preferences.
RTs & recommendations appreciated.
🛠️ Preferred:
• 5+ years in NLP research
• Git, CLI tools, Python, and basic HTML
• 2-year role, overlapping with current Co-CTO
Interested? DM @fredashi.bsky.social or email fhs@uwaterloo.ca
#ARR #ACL #NLProc
Join us in San Diego to push the frontiers of spatial understanding and reasoning across CV, NLP, and robotics!
👉 space-in-vision-language-embodied-ai.github.io
A thread (1/n) - #ICML2025 ✅
However, humans, from an extremely early age 🧒, are extremely sensitive to other people's gaze 🙄 👀
No mentors, no labs, only pre-doc students, 111 VLMs, and we did it 😎
Mark your calendar: May 3rd, 14:00-17:30, Ballroom A.
Another exciting collaboration with @marstin.bsky.social @kordjamshidi.bsky.social, Jiayuan, and Joyce!
We identify 3 key failures of pragmatic competence in referring expression generation with VLMs: (1) cannot uniquely refer to the referent, (2) include excessive or irrelevant information, and (3) misalign with human pragmatic preferences.
@naaclmeeting.bsky.social #NAACL2025
Mentors:
• @amuuueller.bsky.social
• @fredashi.bsky.social
• Jiayuan Mao
• @marstin.bsky.social
• Oana Ignat
• Weijia Shi
• @zhijingjin.bsky.social
VEGGIE is an instructional video generative model trained solely with diffusion loss, designed for both video concept grounding and instruction-based editing. It effectively handles diverse video editing tasks by pixel-level grounded training in a multi-task learning setup. ⬇️
VEGGIE supports 8 skills, from object addition/removal/changing and stylization to concept grounding/reasoning. It exceeds SoTA and shows zero-shot multimodal instructional & in-context video editing.
yoavartzi.com/pubs
Plz RT 🙏
I'm excited to join #ACL @aclmentorship.bsky.social as a mentor! Your ideas can help make mentorship more impactful. Let’s plan together!🚀
💡https://forms.gle/dURA4QUANH3pBxBG8
Also, consider joining the program committee and help shape the future of alignment research. Your reviews might just steer the field! forms.gle/MapXyCZhbcFr...
Consider submitting your work: full papers, extended abstracts, or cross-submissions!
✨ Direct paper submission deadline: Jan 30, 2025
✨ ARR commitment deadline: Feb 20, 2025
More details on our website: sites.google.com/view/repl4nl...
We call for non-archival papers: 2-page (tiny), 4-page (short), or 9-page (long).
👉 openreview.net/group?id=ICLR.…
See you in Singapore :)
j-min.io
I work on ✨Multimodal AI✨, advancing reasoning in understanding & generation by:
1⃣ Making it scalable
2⃣ Making it faithful
3⃣ Evaluating + refining it
Completing my PhD at UNC (w/ @mohitbansal.bsky.social).
Happy to connect (will be at #NeurIPS2024)!
👇🧵
Find me at
- Dec 10: Future NLP Workshop with UBC NLP
- Dec 12: Vector Luncheon
- Dec 14: Pluralistic Alignment Workshop, presenting the VLM frame of reference analysis work with the amazing @marstin.bsky.social
1. Presenting perspective-taking in machine theory of mind at UMich tomorrow.
2. Visiting UW on Dec 5 to talk about language grounding, let’s connect if you’re in Seattle.
3. Attending #NeurIPS2024 in Vancouver with a conference and workshop paper, DMs open!