Jaemin Cho
@jmincho.bsky.social
Incoming assistant professor at JHU CS & Young Investigator at AI2
PhD at UNC
https://j-min.io
#multimodal #nlp
Pinned
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
Reposted by Jaemin Cho
Join us in advancing data science and AI research! The Johns Hopkins Data Science and AI Institute Postdoctoral Fellowship Program is now accepting applications for the 2026–2027 academic year. Deadline: Jan 23, 2026. Details and application: apply.interfolio.com/179059
December 19, 2025 at 1:29 PM
I know these are far from perfect, but I hope they offer some help as yet another reference as you navigate your own applications. Good luck, everyone!
September 23, 2025 at 4:57 PM
It's application season, and I'm sharing some of my past application materials:
- Academic job market (written in Dec 2024)
- PhD fellowship (written in Apr 2023)
- PhD admission (written in Dec 2019)
on my website (j-min.io)
September 23, 2025 at 4:57 PM
Thanks! Super excited to collaborate with all the amazing folks at JHU CS 😊
July 3, 2025 at 12:55 PM
Reposted by Jaemin Cho
Welcome to JHU! 💙
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
July 2, 2025 at 1:21 PM
Cool work! Any chance that the name comes from this K-pop group - en.wikipedia.org/wiki/Ive_(gr...? 😆
May 21, 2025 at 5:45 PM
Thanks Ana!
May 21, 2025 at 2:18 PM
Thanks Benno!
May 20, 2025 at 9:09 PM
Thanks Mohit for all the support and guidance! It has been a great pleasure to have you as my advisor and to be part of the amazing group for the last 5 years. I have learned so much from you 🙏
May 20, 2025 at 6:23 PM
Also, a heartfelt shoutout to all the collaborators I've worked with over the years -- your ideas, encouragement, and hustle have meant the world. Excited for what's ahead. Let's keep building together! ❤️
May 20, 2025 at 6:00 PM
Endless thanks to my amazing advisor @mohitbansal.bsky.social, the UNC NLP group, my partner @heesoojang.bsky.social, and my family. I couldn't have done this without your constant support 🙏
May 20, 2025 at 5:59 PM
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
May 20, 2025 at 5:58 PM
Reposted by Jaemin Cho
🚨 Introducing our @tmlrorg.bsky.social paper "Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation"
We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private information.
May 7, 2025 at 6:55 PM
Reposted by Jaemin Cho
🔥 BIG CONGRATS to Elias (and UT Austin)! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes 🤗

Make sure to apply for your PhD with him -- he is an amazing advisor and person! 💙
Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉
May 5, 2025 at 10:00 PM
Reposted by Jaemin Cho
Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉
May 5, 2025 at 8:28 PM
Reposted by Jaemin Cho
I will be presenting ✨Reverse Thinking Makes LLMs Stronger Reasoners✨ at #NAACL2025!

In this work, we show
- Improvements across 12 datasets
- Outperforms SFT with 10x more data
- Strong generalization to OOD datasets

📅 4/30 2:00-3:30 Hall 3

Let's chat about LLM reasoning and its future directions!
🚨 Reverse Thinking Makes LLMs Stronger Reasoners

We can often reason from a problem to a solution and also in reverse to enhance our overall reasoning. RevThink shows that LLMs can also benefit from reverse thinking 👉 13.53% gains + sample efficiency + strong generalization (on 4 OOD datasets)!
April 29, 2025 at 11:21 PM
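The forward-plus-backward training recipe behind RevThink can be pictured with a toy data-construction sketch. Everything below is a hypothetical illustration under my own assumptions (function name, fields, and the worked example are mine, not the paper's pipeline):

```python
# Hypothetical sketch of "reverse thinking" style training data:
# augment each (question, forward reasoning, answer) example with a
# backward question that reasons from the answer back toward the
# original quantities, so the model practices both directions.
# Illustrative only -- not the RevThink paper's actual code.

def make_revthink_examples(question, forward_reasoning, answer,
                           backward_question, backward_reasoning):
    """Return a forward and a backward training instance for one problem."""
    forward = {
        "input": question,
        "target": f"{forward_reasoning} So the answer is {answer}.",
    }
    backward = {
        # e.g., given the final answer, recover one of the unknowns
        "input": backward_question,
        "target": backward_reasoning,
    }
    return [forward, backward]

examples = make_revthink_examples(
    question="Tom has 3 boxes with 4 apples each. How many apples?",
    forward_reasoning="3 boxes times 4 apples is 12.",
    answer="12",
    backward_question=("Tom has 3 boxes with the same number of apples, "
                       "12 apples in total. How many apples per box?"),
    backward_reasoning="12 apples split over 3 boxes is 4 per box. So the answer is 4.",
)
```

In this reading, the backward instance acts as a consistency check on the forward chain, which would explain the sample-efficiency gains the thread reports.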
Reposted by Jaemin Cho
โœˆ๏ธ Heading to #NAACL2025 to present 3 main conf. papers, covering training LLMs to balance accepting and rejecting persuasion, multi-agent refinement for more faithful generation, and adaptively addressing varying knowledge conflict.

Reach out if you want to chat!
April 29, 2025 at 5:52 PM
Reposted by Jaemin Cho
Check out 🚨CAPTURe🚨 -- a new benchmark testing spatial reasoning by making VLMs count objects under occlusion.

SOTA VLMs (GPT-4o, Qwen2-VL, Intern-VL2) have high error rates on CAPTURe (but humans have low error ✅) and models struggle to reason about occluded objects.

arxiv.org/abs/2504.15485

🧵👇
April 24, 2025 at 3:14 PM
Reposted by Jaemin Cho
In Singapore for #ICLR2025 this week to present papers + keynotes 👇, and looking forward to seeing everyone -- happy to chat about research, or faculty+postdoc+phd positions, or simply hanging out (feel free to ping)! 🙂

Also meet our awesome students/postdocs/collaborators presenting their work.
April 21, 2025 at 4:50 PM
Reposted by Jaemin Cho
What if we could transform advanced math problems into abstract programs that can generate endless, verifiable problem variants?

Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
🧵👇
April 15, 2025 at 7:37 PM
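For readers wondering what an "executable functional abstraction" of a math problem might look like concretely, here is a minimal hypothetical sketch. The class name, structure, and example problem are my assumptions for illustration, not EFAGen's actual code:

```python
import random

# A static problem ("a right triangle has legs 3 and 4; find the
# hypotenuse") rewritten as a program that samples parameters and emits
# endless, automatically verifiable variants.

class RightTriangleEFA:
    """Executable functional abstraction of a hypotenuse problem."""

    def sample(self, rng):
        # Scale a 3-4-5 Pythagorean triple so the answer stays an integer.
        k = rng.randint(1, 20)
        a, b = 3 * k, 4 * k
        problem = (f"A right triangle has legs of length {a} and {b}. "
                   f"What is the length of the hypotenuse?")
        return problem, 5 * k  # (problem text, verifiable gold answer)

    @staticmethod
    def verify(gold, candidate):
        # Programmatic check -- every generated variant is auto-gradable.
        return candidate == gold

efa = RightTriangleEFA()
problem, answer = efa.sample(random.Random(0))
```

Because the abstraction carries its own verifier, sampled variants can serve directly as training or evaluation data without human grading.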
Reposted by Jaemin Cho
🚨 Announcing TaCQ 🚨 a new mixed-precision quantization method that identifies critical weights to preserve. We integrate key ideas from circuit discovery, model editing, and input attribution to improve low-bit quant., w/ 96% 16-bit acc. at 3.1 avg bits (~6x compression)

📃 arxiv.org/abs/2504.07389
April 12, 2025 at 2:19 PM
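The general idea of mixed-precision quantization with preserved critical weights can be sketched in a few lines. This is a toy illustration of the concept only (the saliency ranking, grid, and parameters are my assumptions, NOT TaCQ's actual algorithm):

```python
import numpy as np

# Keep a small fraction of "critical" weights at full precision and
# round everything else onto a coarse low-bit uniform grid.

def mixed_precision_quantize(w, saliency, keep_frac=0.02, bits=3):
    """Quantize `w` to `bits` bits, except the top `keep_frac` fraction of
    weights ranked by `saliency`, which stay at full precision."""
    w = np.asarray(w, dtype=np.float32)
    n_keep = max(1, int(keep_frac * w.size))
    # Indices of the most critical weights (largest saliency magnitude).
    critical = np.argsort(np.abs(saliency).ravel())[-n_keep:]

    # Uniform symmetric quantization grid for the remaining weights.
    levels = 2 ** bits - 1
    max_abs = float(np.abs(w).max())
    scale = max_abs / (levels / 2) if max_abs > 0 else 1.0
    quantized = np.round(w / scale) * scale

    out = quantized.ravel()
    out[critical] = w.ravel()[critical]  # restore critical weights exactly
    return out.reshape(w.shape)
```

In TaCQ itself the critical set is chosen via circuit-discovery/attribution signals rather than a simple magnitude rank; the point of the sketch is only the mixed-precision mechanism.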
Huge congrats Archiki! 🎉 Very well-deserved 💪
🥳🥳 Honored and grateful to be awarded the 2025 Apple Scholars in AI/ML PhD Fellowship! ✨

Huge shoutout to my advisor @mohitbansal.bsky.social, & many thanks to my lab mates @unccs.bsky.social, past collaborators + internship advisors for their support ☺️🙏

machinelearning.apple.com/updates/appl...
March 27, 2025 at 7:57 PM
Reposted by Jaemin Cho
Introducing VEGGIE 🥦 -- a unified, end-to-end, and versatile instructional video generative model.

VEGGIE supports 8 skills, from object addition/removal/changing and stylization to concept grounding/reasoning. It exceeds SoTA and shows zero-shot multimodal instructional and in-context video editing.
March 19, 2025 at 6:56 PM
Reposted by Jaemin Cho
🚨 Introducing UPCORE, to balance deleting info from LLMs with keeping their other capabilities intact.

UPCORE selects a coreset of forget data, leading to a better trade-off across 2 datasets and 3 unlearning methods.

🧵👇
February 25, 2025 at 2:23 AM
Reposted by Jaemin Cho
SO excited to see this one released! Several works, including our TMLR'24 paper, have questioned measuring faithfulness purely behaviorally. @mtutek.bsky.social has formulated how to measure faithfulness by directly connecting verbalized CoT reasoning to model weights. See more insights in his thread 👇🏻
🚨🚨 New preprint 🚨🚨

Ever wonder whether verbalized CoTs correspond to the internal reasoning process of the model?

We propose a novel parametric faithfulness approach, which erases information contained in CoT steps from the model parameters to assess CoT faithfulness.

arxiv.org/abs/2502.14829
Measuring Faithfulness of Chains of Thought by Unlearning Reasoning Steps
February 21, 2025 at 6:48 PM