Jaedong Hwang
@jaedonghwang.bsky.social
PhD Student @MITEECS
https://jd730.github.io/
We have one poster in this afternoon's session at #ICML2025 (West Exhibition Hall B2-B3, W-414).
Unfortunately, none of the authors could attend the conference, but feel free to contact me if you have any questions!
icml.cc/virtual/2025...
July 16, 2025 at 1:17 PM
8/10
📊 On MGSM, BRIDGE improves both math and language accuracy in medium- and low-resource languages.
Even better:
• It maintains performance in English
• It succeeds where naive post-training with SFT or GRPO alone fails (especially in math).
July 15, 2025 at 3:44 PM
7/10
We also propose BRIDGE, a method that balances:
• Supervised fine-tuning for task-solving
• GRPO with a language-consistency reward on the reasoning trace (sketch below).
This decouples multilingual ability from reasoning ability.
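A minimal sketch of what the GRPO reward side might look like, assuming an off-the-shelf language-ID check. The `langid` package, the 0/1 reward shaping, and the mixing weight `lam` are our illustrative assumptions, not the paper's exact recipe:

```python
import langid  # pip install langid; any language-ID model would work here

def language_consistency_reward(reasoning: str, target_lang: str) -> float:
    """1.0 if the chain-of-thought is detected as the target language, else 0.0."""
    detected, _score = langid.classify(reasoning)
    return 1.0 if detected == target_lang.lower() else 0.0

def bridge_style_reward(answer_correct: bool, reasoning: str,
                        target_lang: str, lam: float = 0.5) -> float:
    """Task-correctness reward plus a weighted language-consistency term:
    the kind of scalar a GRPO trainer would optimize."""
    task = 1.0 if answer_correct else 0.0
    return task + lam * language_consistency_reward(reasoning, target_lang)
```

In BRIDGE this GRPO signal is balanced against a standard SFT loss on task demonstrations; how the two are weighted is the paper's contribution and not shown here.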
July 15, 2025 at 3:43 PM
6/10
GeoFact-X lets us evaluate not just what models predict, but how they think.
We measure:
• Answer correctness
• Reasoning quality
• Language consistency
Models do better on region-language-aligned pairs than on mismatched ones.
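To make the aligned-vs-mismatched comparison concrete, here is a sketch assuming per-item results stored as dicts. The field names and the `region_to_lang` mapping are hypothetical, not the released evaluation code:

```python
from collections import defaultdict

def aligned_vs_mismatched(results, region_to_lang):
    """results: dicts with 'region', 'language', and boolean 'correct';
    region_to_lang: maps each region to its native language code."""
    buckets = defaultdict(list)
    for r in results:
        aligned = region_to_lang[r["region"]] == r["language"]
        buckets["aligned" if aligned else "mismatched"].append(
            1.0 if r["correct"] else 0.0
        )
    # mean accuracy per bucket
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```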
July 15, 2025 at 3:41 PM
5/10
We introduce GeoFact-X, the first benchmark to evaluate language-consistent reasoning.
🌍 It includes multilingual CoT QA across 5 regions × 5 languages (EN, JA, SW, HI, TH) = 25 region-language pairs.
Questions are grounded in regional facts, each with step-by-step reasoning.
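A hypothetical item and the 5 × 5 grid, just to make the structure concrete. Only the language codes come from the post; the region placeholders and field names are our assumed schema, not the released format:

```python
from itertools import product

LANGS = ["EN", "JA", "SW", "HI", "TH"]  # the 5 languages named above
REGIONS = [f"region_{i}" for i in range(1, 6)]  # placeholder region names

pairs = list(product(REGIONS, LANGS))
assert len(pairs) == 25  # 5 regions x 5 languages = 25 region-language pairs

# Hypothetical item schema: a region-grounded question with CoT in one language.
item = {
    "region": REGIONS[0],
    "language": "JA",
    "question": "...",                  # grounded in a regional fact
    "reasoning_steps": ["...", "..."],  # step-by-step reasoning in `language`
    "answer": "...",
}
```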
July 15, 2025 at 3:40 PM
4/10
We evaluate leading LLMs (e.g., Qwen2.5, LLaMA-3, Gemma-3, DeepSeek-R1) on MGSM with native-language CoT.
🔍 Result:
Many models get the correct answer but default to English for reasoning, even when prompted otherwise.
That’s a serious misalignment.
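A minimal sketch of the probe behind this finding: prompt for native-language CoT, then check which language the model actually reasoned in. The `generate` callable and the `langid` check are our assumptions, not the paper's harness:

```python
import langid  # pip install langid

def cot_language_matches(prompt: str, target_lang: str, generate) -> tuple[str, bool]:
    """Run the model, detect the language of its reasoning, and flag mismatches.
    `generate` is any prompt -> text callable (API client, local model, ...)."""
    output = generate(prompt)  # should contain the chain-of-thought
    detected, _score = langid.classify(output)
    return detected, detected == target_lang.lower()
```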
July 15, 2025 at 3:40 PM
🧵1/10
LLMs can answer in many languages.
But do they think in them?
Even when prompted in Swahili or Thai, models often switch to English for reasoning.
This breaks interpretability and trust.
So we ask: Can LLMs reason in the input language?
July 15, 2025 at 3:39 PM
If I remember correctly, that was also the first CV conference with over 1000 papers, and people already felt overwhelmed. Now, CVPR 2025 has 2800+ papers, and #NeurIPS2024 had 4497. It’s becoming nearly impossible to discover hidden gems while wandering poster sessions. 2/2
June 12, 2025 at 12:26 AM
#CVPR2025 Six years have passed since the 'Computer Vision After 5 Years' workshop at CVPR 2019. In it, Bill Freeman predicted that vision-science-inspired algorithms would lead the way. Instead, the field is now dominated by generative AI and foundation models. 1/2
June 12, 2025 at 12:26 AM
We learned the bitter lesson that a poster should be checked before the poster session #ICLR2025.
Thank you all for coming, and we are delighted that you enjoyed our mistakes.
We also greatly appreciate the authors of MMSearch for allowing us to use their panel.
April 26, 2025 at 10:32 AM
📢 Excited to share that I will be presenting our paper on Neuro-Inspired SLAM at #ICLR2025 TOMORROW!
🗓 Saturday, April 26th, 10:00 AM - 12:30 PM
📍 Hall 3 (Poster #55)
jd730.github.io/projects/FAR...
April 25, 2025 at 1:31 PM