Jirui Qi
@jiruiqi.bsky.social
Ph.D. Candidate @GroNLP, University of Groningen #NLProc
https://betswish.github.io
Reposted by Jirui Qi
InCLow topics #EMNLP2025:

- MT error prediction techniques & their reception by professional translators (@gsarti.com)
- thinking language in Large Reasoning Models (@jiruiqi.bsky.social)
- effect of stereotypes on LLM’s implicit personalization (@veraneplenbroek.bsky.social)

....
October 31, 2025 at 10:50 PM
[11/] Moreover, increasing the number of training instances doesn't reliably mitigate the issue. When scaling from 100 to 250 instances, the post-trained LRMs suffer a drop in matching rate, while accuracy recovers only marginally, remaining far below that of the original LRM.
May 30, 2025 at 1:09 PM
[10/] The results show that post-training on merely 100 instances sharply increases the matching rate, to nearly 100% for TH and TE and to 80% for JA, but decreases accuracy: post-training is effective at improving language matching, yet the trade-off persists.
May 30, 2025 at 1:09 PM
[9/] To see whether further training can help, we post-train Distilled-R1-7B using mini training sets of 100 or 250 instances per poor-matching language (Japanese, Thai, Telugu), resulting in six post-trained LRMs. The training data are filtered and translated from LIMO.
May 30, 2025 at 1:09 PM
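For concreteness, a minimal sketch of this post-training step, assuming a standard Hugging Face SFT setup; the model ID points at the public Distilled-R1-7B checkpoint, while the data rendering and hyperparameters are illustrative guesses, not the paper's recipe.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # public Distilled-R1-7B
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# Hypothetical mini set: question + target-language trace + answer, rendered
# as plain text; the real data are ~100/250 filtered-and-translated LIMO items.
examples = [
    {"text": "質問: 12×7は？\n<think>まず、12×7を計算すると…</think>\n答え: 84"},
]

def tokenize(ex):
    return tok(ex["text"], truncation=True, max_length=4096)

ds = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="r1-7b-ja-matched",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=1e-5,
        bf16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM loss
)
trainer.train()
```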
[8/] Complementing the heatmaps, we further analyze the actual thinking languages of the LRM and observe a clear mismatch. Notably, all mismatches (i.e., red marks) fall back to English or Chinese, suggesting the impact of the thinking data used in training on the model's reasoning behavior.
May 30, 2025 at 1:09 PM
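A minimal sketch of how the actual thinking language of an R1-style output can be identified; it assumes the trace is wrapped in <think>...</think> tags and uses langdetect as a stand-in for whatever language-ID tool the paper used.

```python
import re
from langdetect import detect  # stand-in language identifier (pip install langdetect)

def thinking_language(output: str) -> str:
    """Return the ISO 639-1 code of the <think>...</think> trace."""
    m = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    trace = m.group(1) if m else output
    return detect(trace)

outputs = [
    "<think>Let me work through this step by step...</think>The answer is 84.",
    "<think>まず、12×7を計算します…</think>答えは84です。",
]
prompted = ["fr", "ja"]  # languages the model was asked to think in
detected = [thinking_language(o) for o in outputs]
rate = sum(p == d for p, d in zip(prompted, detected)) / len(prompted)
print(detected, f"matching rate: {rate:.0%}")  # first trace falls back to EN
```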
[7/] Interestingly, reasoning in English consistently yields higher accuracy, especially after prompt hacking. This aligns with concurrent work on improving answer accuracy via cross-lingual reasoning, supporting the reliability of our experiments and of the XReasoning benchmark.
May 30, 2025 at 1:09 PM
[6/] Heatmaps by query/thinking language show the 32B LRM fails to generate traces in the prompted language: asked to think in FR, for example, it defaults to EN. Motivating the LRM to reason in the prompted language via prompt hacking increases matching from 46% to 98%, but introduces a noticeable accuracy drop.
May 30, 2025 at 1:09 PM
[5/] Overall, LRMs struggle to follow instructions to think in user-specified languages under standard prompts. Motivating LRMs to generate traces in the user's query language via prompt hacking boosts language matching but decreases accuracy, a drop that shrinks as model size increases.
May 30, 2025 at 1:09 PM
[4/] Besides standard prompting, where the thinking language is explicitly specified in the instruction, we introduce and leverage a prompt hacking technique to induce the LRM to generate its thinking traces in the user-expected language.
May 30, 2025 at 1:09 PM
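The thread doesn't spell out the hack itself, so the sketch below shows one plausible variant: prefilling the assistant turn with a <think> opener already written in the target language, so that decoding continues in that language. The model ID is the public Distilled-R1-7B checkpoint; the openers are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# Illustrative target-language openers (one per language to force).
OPENERS = {
    "fr": "D'accord, réfléchissons étape par étape en français.",
    "ja": "では、日本語で段階的に考えてみましょう。",
}

def hacked_prompt(question: str, lang: str) -> str:
    chat = tok.apply_chat_template(
        [{"role": "user", "content": question}],
        tokenize=False, add_generation_prompt=True,
    )
    # Prefill the opening of the trace; note that some R1 chat templates
    # already emit "<think>" themselves, so check the rendered prompt first.
    return chat + "<think>\n" + OPENERS[lang]

inputs = tok(hacked_prompt("Combien font 12 x 7 ?", "fr"), return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```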
[3/] We comprehensively evaluate six SOTA LRMs from two families: Distilled-R1 and Skywork-OR1. Given the lack of multilingual reasoning datasets, we introduce a novel benchmark named XReasoning, covering the easy MGSM as well as translated versions of the challenging AIME2024, AIME2025, and GPQA_Diamond.
May 30, 2025 at 1:09 PM
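A sketch of the per-language evaluation loop over one subset. Since the thread gives no loader for XReasoning itself, the public MGSM dataset (its easy subset) stands in; generate_answer and extract_number are assumed helpers, and thinking_language is the trace language ID sketched earlier in this feed.

```python
from datasets import load_dataset

def evaluate(lang, generate_answer, extract_number, thinking_language):
    """Accuracy and language-matching rate on one language split.
    The three callables are assumed helpers: the model call (optionally
    with prompt hacking), a numeric-answer parser, and the trace
    language identifier sketched above."""
    ds = load_dataset("juletxara/mgsm", lang, split="test")  # public MGSM mirror
    correct = matched = 0
    for ex in ds:
        out = generate_answer(ex["question"], lang)
        correct += extract_number(out) == ex["answer_number"]
        matched += thinking_language(out) == lang
    return correct / len(ds), matched / len(ds)

# usage: acc, match = evaluate("th", generate_answer, extract_number, thinking_language)
```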
[2/] Matching the thinking language is as important as accuracy, because it makes the traces more readable and easier for users to verify. Even correct answers can feel untrustworthy if users can't follow how the model arrived at them, especially as task complexity increases.
May 30, 2025 at 1:09 PM
[8/] Taken together, our findings reveal LLMs' capability to consistently utilize multilingual contexts, alongside a barrier to decoding answers in the user's language. These results deepen our understanding of how LLMs behave in mRAG systems and point to directions for future improvements.
April 11, 2025 at 4:04 PM
[7/] When distractors are included, our analysis with both accuracy and feature attribution techniques further shows that distracting passages negatively impact answer quality regardless of their language. However, distractors in the query language exert a slightly stronger influence.
April 11, 2025 at 4:04 PM
[6/] This finding suggests that generating in the target language is the major bottleneck, which could dominate, if not hide, the effect of similarity with the passage language.
April 11, 2025 at 4:04 PM
[5/] Detailed heatmaps further show that answer accuracy is relatively consistent within each row, more so than within each column. In other words, the query language is much more predictive of accuracy than the passage language.
April 11, 2025 at 4:04 PM
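To make "more predictive" concrete, a small sketch comparing accuracy spread when the query language is fixed (rows) versus when the passage language is fixed (columns); the matrix values are random placeholders, not the paper's numbers.

```python
import numpy as np

langs = ["en", "fr", "ja", "th"]
rng = np.random.default_rng(0)
acc = rng.uniform(0.3, 0.9, size=(len(langs), len(langs)))  # acc[query, passage]

row_spread = acc.std(axis=1).mean()  # query language fixed, passage varies
col_spread = acc.std(axis=0).mean()  # passage language fixed, query varies
print(f"spread with query fixed:   {row_spread:.3f}")
print(f"spread with passage fixed: {col_spread:.3f}")
# In the paper's heatmaps the first number is the smaller one, i.e. the
# query language carries most of the signal.
```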
[4/] Our experiments with 4 LLMs across 3 QA datasets, covering 48 languages, reveal a surprising ability of LLMs to extract relevant information from passages in languages different from the query, but a weaker ability to formulate the answer in the correct language (shaded bars).
April 11, 2025 at 4:04 PM
[3/] Through accuracy and feature attribution analysis, we assess LLMs' ability to make consistent use of a relevant passage regardless of its language, to respond in the expected language, and to focus on relevant passages even when distractors in different languages are provided.
April 11, 2025 at 4:04 PM
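The thread doesn't name the attribution toolkit; Inseq (https://github.com/inseq-team/inseq) is one library that supports exactly this workflow, so a minimal sketch with a small model for illustration:

```python
import inseq

# Any HF causal LM works; gpt2 keeps the sketch light.
model = inseq.load_model("gpt2", "integrated_gradients")
prompt = (
    "Frage: Wer hat die Relativitätstheorie entwickelt?\n"             # DE query
    "Passage: Albert Einstein developed the theory of relativity.\n"   # EN passage
    "Antwort:"
)
out = model.attribute(prompt)  # attribute generated tokens back to the prompt
out.show()                     # inspect whether the EN passage drives the answer
```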
[2/] Multilingual RAG (mRAG) has been shown to be beneficial, particularly for low-resource languages. However, the extent to which LLMs can leverage multilingual contexts to generate accurate answers, independently from retrieval quality, remains understudied.
April 11, 2025 at 4:04 PM
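The controlled setup implied here bypasses retrieval and builds the prompt directly from a gold passage plus optional distractors; a minimal sketch, with all strings illustrative:

```python
def build_mrag_prompt(query: str, gold: str, distractors: list[str],
                      answer_lang: str) -> str:
    """Assemble the context directly, bypassing retrieval."""
    passages = [gold, *distractors]
    ctx = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (f"Answer in {answer_lang} using the passages below.\n"
            f"{ctx}\nQuestion: {query}\nAnswer:")

print(build_mrag_prompt(
    query="Qui a développé la théorie de la relativité ?",         # FR query
    gold="Albert Einstein developed the theory of relativity.",    # EN gold passage
    distractors=["光合成は植物が光からエネルギーを得る過程である。"],   # JA distractor
    answer_lang="French",
))
```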
Many thanks to all collaborators for their contributions!
Tianyu Liu, Paul He, Arianna Bisazza, @mrinmaya.bsky.social, Ryan Cotterell.
January 24, 2025 at 9:56 AM