Jaemin Cho
@jmincho.bsky.social
Incoming assistant professor at JHU CS & Young Investigator at AI2
PhD at UNC
https://j-min.io
#multimodal #nlp
Pinned
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
It's application season, and I'm sharing some of my past application materials:
- Academic job market (written in Dec 2024)
- PhD fellowship (written in Apr 2023)
- PhD admission (written in Dec 2019)
on my website (j-min.io)
September 23, 2025 at 4:57 PM
Reposted by Jaemin Cho
Welcome to JHU! 💙
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
July 2, 2025 at 1:21 PM
Some personal updates:
- I've completed my PhD at @unccs.bsky.social! 🎓
- Starting Fall 2026, I'll be joining the CS dept. at Johns Hopkins University @jhucompsci.bsky.social as an Assistant Professor 💙
- Currently exploring options for my gap year (Aug 2025 - Jul 2026), so feel free to reach out! 🔎
May 20, 2025 at 5:58 PM
Reposted by Jaemin Cho
🚨 Introducing our @tmlrorg.bsky.social paper “Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation”
We present UnLOK-VQA, a benchmark to evaluate unlearning in vision-and-language models, where both images and text may encode sensitive or private information.
May 7, 2025 at 6:55 PM
Reposted by Jaemin Cho
🔥 BIG CONGRATS to Elias (and UT Austin)! Really proud of you -- it has been a complete pleasure to work with Elias and see him grow into a strong PI on *all* axes 🤗

Make sure to apply for your PhD with him -- he is an amazing advisor and person! 💙
Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉
May 5, 2025 at 10:00 PM
Reposted by Jaemin Cho
Extremely excited to announce that I will be joining
@utaustin.bsky.social Computer Science in August 2025 as an Assistant Professor! 🎉
May 5, 2025 at 8:28 PM
Reposted by Jaemin Cho
I will be presenting ✨Reverse Thinking Makes LLMs Stronger Reasoners✨at #NAACL2025!

In this work, we show
- Improvements across 12 datasets
- Outperforms SFT with 10x more data
- Strong generalization to OOD datasets

📅4/30 2:00-3:30 Hall 3

Let's chat about LLM reasoning and its future directions!
🚨 Reverse Thinking Makes LLMs Stronger Reasoners

We can often reason from a problem to a solution and also in reverse to enhance our overall reasoning. RevThink shows that LLMs can also benefit from reverse thinking 👉 13.53% gains + sample efficiency + strong generalization (on 4 OOD datasets)!
April 29, 2025 at 11:21 PM
Reposted by Jaemin Cho
✈️ Heading to #NAACL2025 to present 3 main conf. papers, covering training LLMs to balance accepting and rejecting persuasion, multi-agent refinement for more faithful generation, and adaptively addressing varying knowledge conflict.

Reach out if you want to chat!
April 29, 2025 at 5:52 PM
Reposted by Jaemin Cho
Check out 🚨CAPTURe🚨 -- a new benchmark testing spatial reasoning by making VLMs count objects under occlusion.

SOTA VLMs (GPT-4o, Qwen2-VL, Intern-VL2) have high error rates on CAPTURe (but humans have low error ✅) and models struggle to reason about occluded objects.

arxiv.org/abs/2504.15485

🧵👇
April 24, 2025 at 3:14 PM
Reposted by Jaemin Cho
In Singapore for #ICLR2025 this week to present papers + keynotes 👇, and looking forward to seeing everyone -- happy to chat about research, or faculty+postdoc+phd positions, or simply hanging out (feel free to ping)! 🙂

Also meet our awesome students/postdocs/collaborators presenting their work.
April 21, 2025 at 4:50 PM
Reposted by Jaemin Cho
What if we could transform advanced math problems into abstract programs that can generate endless, verifiable problem variants?

Presenting EFAGen, which automatically transforms static advanced math problems into their corresponding executable functional abstractions (EFAs).
🧵👇
April 15, 2025 at 7:37 PM
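To make the EFA idea above concrete, here is a minimal, hypothetical Python sketch (not taken from the EFAGen paper): a static problem is rewritten as a parameterized function that samples fresh variants together with their verifiable answers.

import random

def efa_sum_of_integers(rng: random.Random) -> dict:
    # Hypothetical EFA-style abstraction of a static problem such as
    # "What is the sum of the integers from 1 to 100?"
    # Sampling the parameter n yields endless verifiable variants.
    n = rng.randint(10, 10_000)                      # problem parameter
    question = f"What is the sum of the integers from 1 to {n}?"
    answer = n * (n + 1) // 2                        # closed-form ground truth
    return {"question": question, "answer": answer}

rng = random.Random(0)
for variant in (efa_sum_of_integers(rng) for _ in range(3)):
    print(variant["question"], "->", variant["answer"])

Because every sampled variant carries its own ground-truth answer, the abstraction can also serve as a verifier for model outputs.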
Reposted by Jaemin Cho
🚨Announcing TaCQ 🚨 a new mixed-precision quantization method that identifies critical weights to preserve. We integrate key ideas from circuit discovery, model editing, and input attribution to improve low-bit quant., w/ 96% 16-bit acc. at 3.1 avg bits (~6x compression)

📃 arxiv.org/abs/2504.07389
April 12, 2025 at 2:19 PM
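As a rough illustration of the mixed-precision idea in the TaCQ post above (a toy sketch, not TaCQ's actual saliency criterion or quantizer): keep a small fraction of "critical" weights at full precision and round everything else onto a low-bit uniform grid.

import numpy as np

def mixed_precision_quantize(w: np.ndarray, keep_frac: float = 0.01, bits: int = 3) -> np.ndarray:
    # Toy sketch: preserve the top `keep_frac` weights by magnitude
    # (a stand-in for a real saliency score) at full precision and
    # uniformly quantize the remaining weights to `bits` bits.
    flat = w.ravel().copy()
    k = max(1, int(keep_frac * flat.size))
    critical = np.argsort(np.abs(flat))[-k:]         # indices kept at high precision

    levels = 2 ** bits - 1
    lo, hi = float(flat.min()), float(flat.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    quantized = np.round((flat - lo) / scale) * scale + lo   # low-bit uniform grid
    quantized[critical] = flat[critical]             # restore critical weights

    return quantized.reshape(w.shape)

w = np.random.randn(256, 256).astype(np.float32)
w_q = mixed_precision_quantize(w)
print("mean abs quantization error:", float(np.abs(w - w_q).mean()))

In the paper, the saliency signal comes from circuit-discovery, model-editing, and attribution ideas rather than plain weight magnitude, which is exactly the part this sketch simplifies away.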
Huge congrats Archiki! 🎉 Very well-deserved 💪
🥳🥳 Honored and grateful to be awarded the 2025 Apple Scholars in AI/ML PhD Fellowship! ✨

Huge shoutout to my advisor @mohitbansal.bsky.social, & many thanks to my lab mates @unccs.bsky.social , past collaborators + internship advisors for their support ☺️🙏

machinelearning.apple.com/updates/appl...
March 27, 2025 at 7:57 PM
Reposted by Jaemin Cho
Introducing VEGGIE 🥦—a unified, end-to-end, and versatile instructional video generative model.

VEGGIE supports 8 skills, ranging from object addition/removal/changing and stylization to concept grounding/reasoning. It exceeds SoTA and shows zero-shot multimodal instructional & in-context video editing.
March 19, 2025 at 6:56 PM
Reposted by Jaemin Cho
🚨 Introducing UPCORE, to balance deleting info from LLMs with keeping their other capabilities intact.

UPCORE selects a coreset of forget data, leading to a better trade-off across 2 datasets and 3 unlearning methods.

🧵👇
February 25, 2025 at 2:23 AM
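A minimal sketch of the coreset intuition behind the UPCORE post above (an illustrative heuristic, not necessarily the paper's actual selection rule): keep the forget examples that sit close together in embedding space and drop outliers, whose removal is most likely to spill over onto knowledge we want to retain.

import numpy as np

def select_forget_coreset(embeddings: np.ndarray, keep_frac: float = 0.8) -> np.ndarray:
    # Toy heuristic: keep the `keep_frac` forget examples closest to the
    # centroid of the forget set, treating far-away points as outliers.
    centroid = embeddings.mean(axis=0)
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    k = max(1, int(keep_frac * len(embeddings)))
    return np.argsort(distances)[:k]                 # indices of the coreset

forget_embeddings = np.random.randn(500, 768)        # placeholder example embeddings
coreset_idx = select_forget_coreset(forget_embeddings)
print(f"selected {len(coreset_idx)} of {len(forget_embeddings)} forget examples")

Unlearning is then run only on the selected coreset, with the aim of a better deletion-vs-utility trade-off.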
Reposted by Jaemin Cho
SO excited to see this one released! Several works, including our TMLR’24 paper, have cast doubt on measuring faithfulness purely behaviorally. @mtutek.bsky.social has formulated how to measure faithfulness by directly connecting verbalized CoT reasoning to model weights. See more insights in his thread 👇🏻
🚨🚨 New preprint 🚨🚨

Ever wonder whether verbalized CoTs correspond to the internal reasoning process of the model?

We propose a novel parametric faithfulness approach, which erases information contained in CoT steps from the model parameters to assess CoT faithfulness.

arxiv.org/abs/2502.14829
Measuring Faithfulness of Chains of Thought by Unlearning Reasoning Steps
February 21, 2025 at 6:48 PM
We release code for the M3DocRAG experiments and M3DocVQA dataset creation!
Code 👉 github.com/bloomberg/m3...
Thread explaining M3DocRAG/M3DocVQA 👉 x.com/jmin__cho/st...
GitHub - bloomberg/m3docrag
February 5, 2025 at 3:14 PM
Reposted by Jaemin Cho
🚨 Excited to announce UTGen and UTDebug, where we first learn to generate unit tests and then apply them to debugging LLM-generated code, with strong gains (+12% pass@1) across multiple models/datasets via inference-time scaling and cross-validation + backtracking!

🧵👇
🚨 Excited to share: "Learning to Generate Unit Tests for Automated Debugging" 🚨
which introduces ✨UTGen and UTDebug✨ for teaching LLMs to generate unit tests (UTs) and debugging code from generated tests.

UTGen+UTDebug yields large gains in debugging (+12% pass@1) & addresses 3 key questions:

🧵👇
February 4, 2025 at 7:13 PM
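To make the test-then-debug loop from the UTGen/UTDebug posts above concrete, here is a minimal, hypothetical sketch (the `propose_fix` callback stands in for an LLM debugger; none of this is the paper's implementation): run the candidate program against generated unit tests and accept an edit only if it passes strictly more tests, backtracking otherwise.

def count_passing(code: str, tests: list) -> int:
    # Count how many (args, expected_output) unit tests the code passes.
    # The code is expected to define a function named `solution`.
    passed = 0
    for args, expected in tests:
        env = {}
        try:
            exec(code, env)
            if env["solution"](*args) == expected:
                passed += 1
        except Exception:
            pass
    return passed

def debug_loop(code: str, tests: list, propose_fix, max_rounds: int = 5) -> str:
    # `propose_fix(code, tests)` is a placeholder for an LLM call that
    # rewrites the code given failing tests.
    best = count_passing(code, tests)
    for _ in range(max_rounds):
        if best == len(tests):
            break
        candidate = propose_fix(code, tests)
        score = count_passing(candidate, tests)
        if score > best:                 # accept only strict improvements
            code, best = candidate, score
        # otherwise: backtrack, i.e. keep the previous program
    return code

buggy = "def solution(x): return x + 2"               # should double, not add 2
tests = [((1,), 2), ((3,), 6)]
fixed = debug_loop(buggy, tests, lambda c, t: "def solution(x): return x * 2")
print(fixed)

One natural reading of the "cross-validation + backtracking" mentioned in the announcement is to validate candidate edits against held-out generated tests rather than only the tests used to propose the fix; this sketch omits that split for brevity.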
Reposted by Jaemin Cho
🚨 Excited to share: "Learning to Generate Unit Tests for Automated Debugging" 🚨
which introduces ✨UTGen and UTDebug✨ for teaching LLMs to generate unit tests (UTs) and debugging code from generated tests.

UTGen+UTDebug yields large gains in debugging (+12% pass@1) & addresses 3 key questions:

🧵👇
February 4, 2025 at 7:10 PM
Reposted by Jaemin Cho
🎉Very excited that our work on Persuasion-Balanced Training has been accepted to #NAACL2025! We introduce a multi-agent tree-based method for teaching models to balance:

1️⃣ Accepting persuasion when it helps
2️⃣ Resisting persuasion when it hurts (e.g. misinformation)

arxiv.org/abs/2410.14596
🧵 1/4
January 23, 2025 at 4:51 PM
Big congratulations to my advisor Mohit! 🎉 Glad to see his significant+sustained contributions recognized with an #AAAI Fellowship (following the prestigious #PECASE award). Truly well-deserved; congrats! 🙂
Thanks @AAAI for selecting me as a #AAAI Fellow! Very humbled+excited to be a part of the respected cohort of this+past years' fellows (& congrats everyone)! 🙏

100% credit goes to my amazing past/current students+postdocs+collab for their work (& thanks to mentors+family)!💙
aaai.org/about-aaai/a...
🎉Congratulations to Prof. @mohitbansal.bsky.social on being named a 2025 @RealAAAI Fellow for "significant contributions to multimodal AI foundations & faithful language generation and summarization." 👏

16 Fellows chosen worldwide by cmte. of 9 past fellows & ex-president: aaai.org/about-aaai/a...
January 21, 2025 at 7:29 PM
Check out our postdoc openings at UNC-CH!

In addition to the valuable research opportunities, please also note that Chapel Hill and the Research Triangle are great places to live, with nice weather, food, and people! I have truly enjoyed living here 😊
🚨 We have postdoc openings at UNC 🙂

Exciting+diverse NLP/CV/ML topics**, freedom to create research agenda, competitive funding, very strong students, mentorship for grant writing, collabs w/ many faculty+universities+companies, superb quality of life/weather.

Please apply + help spread the word 🙏
December 26, 2024 at 3:55 PM
Reposted by Jaemin Cho
🚨 We have postdoc openings at UNC 🙂

Exciting+diverse NLP/CV/ML topics**, freedom to create research agenda, competitive funding, very strong students, mentorship for grant writing, collabs w/ many faculty+universities+companies, superb quality of life/weather.

Please apply + help spread the word 🙏
December 23, 2024 at 7:32 PM
Reposted by Jaemin Cho
Excited to attend #NeurIPS and give a talk on video-language models for complex video understanding at the First Workshop on Video-Language Models on Saturday at 10:10am PST. Stop by + DM/email if you want to chat about anything related to video-language modeling.
December 12, 2024 at 5:26 PM