PhD at UNC
https://j-min.io
#multimodal #nlp
- I've completed my PhD at @unccs.bsky.social!
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor!
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out!
On my website (j-min.io):
- Academic job market (written in Dec 2024)
- PhD fellowship (written in Apr 2023)
- PhD admission (written in Dec 2019)
We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private information.
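To make the evaluation concrete, here is a minimal sketch of the kind of leakage check such a benchmark enables. The model wrapper and dataset fields are hypothetical stand-ins, not UnLOK-VQA's actual interface:

```python
# Hypothetical sketch of a leakage check after unlearning; the model wrapper
# and dataset fields are illustrative, not UnLOK-VQA's actual API.

def leakage_rate(unlearned_model, forget_set):
    """Fraction of 'forgotten' facts the model still reveals when probed."""
    leaked = 0
    for example in forget_set:
        # Sensitive information may be probed through the image, the text, or both.
        pred = unlearned_model.answer(image=example["image"],
                                      question=example["question"])
        if pred.strip().lower() == example["sensitive_answer"].strip().lower():
            leaked += 1
    return leaked / len(forget_set)
```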
He is joining @utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor!
Make sure to apply for your PhD with him -- he is an amazing advisor and person!
We can often reason from a problem to a solution, and also in reverse, to enhance our overall reasoning. RevThink shows that LLMs can also benefit from reverse thinking: 13.53% gains + sample efficiency + strong generalization (on 4 OOD datasets)!
In this work, we show:
- Improvements across 12 datasets
- Outperforms SFT trained on 10x more data
- Strong generalization to OOD datasets
4/30, 2:00-3:30, Hall 3
Let's chat about LLM reasoning and its future directions!
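To make the reverse-thinking recipe concrete, here is a rough sketch of how forward/backward training pairs could be constructed, as I read the idea. `teacher.generate` is a hypothetical LLM call and the prompts are placeholders, not RevThink's actual pipeline:

```python
# A minimal sketch of reverse-thinking-style data augmentation (a paraphrase of
# the idea, not RevThink's released code). `teacher` is a hypothetical LLM wrapper.

def build_revthink_examples(teacher, question):
    # 1) Forward reasoning: solve the original question step by step.
    forward_cot = teacher.generate(f"Solve step by step: {question}")
    # 2) Backward question: invert the problem (e.g., swap the unknown and a given).
    backward_q = teacher.generate(
        f"Write the reverse question of: {question}, "
        f"whose answer would be one of the givens.")
    # 3) Backward reasoning: solve the reversed question; consistency between the
    #    two directions can be used to filter noisy training pairs.
    backward_cot = teacher.generate(f"Solve step by step: {backward_q}")
    return [
        {"input": question, "target": forward_cot},     # learn forward reasoning
        {"input": question, "target": backward_q},      # learn to pose the reverse
        {"input": backward_q, "target": backward_cot},  # learn backward reasoning
    ]
```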
Reach out if you want to chat!
SOTA VLMs (GPT-4o, Qwen2-VL, Intern-VL2) have high error rates on CAPTURe (but humans have low error), and models struggle to reason about occluded objects.
arxiv.org/abs/2504.15485
🧵👇
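For reference, counting tasks like this are typically scored with a simple relative-error metric; the sketch below is illustrative, not CAPTURe's released evaluation code:

```python
# Illustrative scoring for an occluded-counting task (not CAPTURe's actual code).
# Ground-truth totals include objects hidden behind the occluder.

def mean_relative_count_error(predictions, ground_truths):
    """Average |predicted - true| / true over examples; lower is better."""
    errors = [abs(p - g) / g for p, g in zip(predictions, ground_truths)]
    return sum(errors) / len(errors)

# A model that never accounts for hidden objects systematically undercounts:
print(mean_relative_count_error([7, 9], [10, 12]))  # 0.275
```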
Also meet our awesome students/postdocs/collaborators presenting their work.
Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
🧵👇
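To illustrate what an EFA is, here is a hand-written toy example of the kind of parameterized program EFAGen aims to produce automatically. The problem is deliberately simple (EFAGen targets advanced math), and the class layout is my own, not EFAGen's output format:

```python
# A toy executable functional abstraction (EFA): the static problem becomes a
# program that can sample new instances, render them as text, and solve them.
import random

class SumOfConsecutiveEFA:
    """Static problem: 'The sum of 3 consecutive integers is 48. What is the
    smallest?' -- abstracted so the count and the total become parameters."""

    def sample(self, rng):
        k = rng.randint(2, 6)                  # how many consecutive integers
        start = rng.randint(-20, 20)           # the (hidden) smallest integer
        total = k * start + k * (k - 1) // 2   # sum of k consecutive ints from start
        return {"k": k, "total": total}

    def render(self, params):
        return (f"The sum of {params['k']} consecutive integers is "
                f"{params['total']}. What is the smallest of them?")

    def solve(self, params):
        k, total = params["k"], params["total"]
        return (total - k * (k - 1) // 2) // k  # invert total = k*start + k(k-1)/2

efa = SumOfConsecutiveEFA()
params = efa.sample(random.Random(0))
print(efa.render(params), "->", efa.solve(params))
```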
arxiv.org/abs/2504.07389
Huge shoutout to my advisor @mohitbansal.bsky.social, and many thanks to my lab mates @unccs.bsky.social, past collaborators, and internship advisors for their support ☺️
machinelearning.apple.com/updates/appl...
VEGGIE supports 8 skills, ranging from object addition/removal/changing and stylization to concept grounding/reasoning. It exceeds SoTA and shows zero-shot multimodal instructional and in-context video editing.
UPCORE selects a coreset of the forget data, leading to a better trade-off between forgetting and utility preservation across 2 datasets and 3 unlearning methods.
🧵👇
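As I read the approach, the coreset is chosen by pruning outliers from the forget set in the model's representation space. A minimal sketch, assuming precomputed per-example embeddings and using an off-the-shelf outlier detector as an illustrative stand-in:

```python
# Sketch of forget-set coreset selection by outlier pruning; the detector choice
# and precomputed embeddings are assumptions, not UPCORE's exact procedure.
import numpy as np
from sklearn.ensemble import IsolationForest

def select_forget_coreset(embeddings: np.ndarray, keep_fraction: float = 0.9):
    """Return indices of forget-set examples kept after outlier pruning."""
    detector = IsolationForest(contamination=1.0 - keep_fraction, random_state=0)
    labels = detector.fit_predict(embeddings)  # +1 = inlier, -1 = outlier
    return np.where(labels == 1)[0]

emb = np.random.default_rng(0).normal(size=(200, 64))  # stand-in embeddings
keep = select_forget_coreset(emb)
print(f"kept {len(keep)}/200 forget examples")
```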
Ever wonder whether verbalized CoTs correspond to the internal reasoning process of the model?
We propose a novel parametric faithfulness approach, which erases information contained in CoT steps from the model parameters to assess CoT faithfulness.
arxiv.org/abs/2502.14829
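Schematically, the test could look like the following, where `unlearn` and `answer_prob` are hypothetical stand-ins for the paper's parameter-erasure and scoring procedures:

```python
# Conceptual sketch of a parametric faithfulness test (a paraphrase of the idea,
# not the paper's implementation): erase one CoT step from the weights, then see
# how much the model's confidence in its original answer drops.
import copy

def step_faithfulness(model, question, cot_steps, answer, unlearn, answer_prob):
    """Score each CoT step by how much erasing it from the parameters
    lowers the probability of the original answer."""
    base = answer_prob(model, question, answer)
    scores = []
    for step in cot_steps:
        edited = unlearn(copy.deepcopy(model), step)  # erase this step's content
        scores.append(base - answer_prob(edited, question, answer))
    return scores  # a large drop suggests the verbalized step was load-bearing
```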