Everything is in the title.
The paper is available on arXiv:
arxiv.org/pdf/2408.00397
The code and outputs are available on GitHub:
github.com/ArmelRandy/I...
Thanks to my co-authors @bensagot.bsky.social and @rachelbawden.bsky.social, and to @inriaparisnlp.bsky.social.
10/10
9/10
8/10
7/10
6/10
5/10
• Outputs may be in the wrong language (e.g., repeating the prompt).
• They may be empty or contain meaningless repetitions.
Current neural metrics are not robust to these issues (a simple heuristic screen is sketched below).
4/10
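To make these failure modes concrete, here is a minimal heuristic screen of the kind one might run on candidate translations before computing neural metrics. It is only a sketch: the function name, the thresholds, and the use of the langid package for language identification are illustrative assumptions, not the paper's evaluation pipeline.

```python
# Minimal sketch: flag degenerate LLM translation outputs before scoring them.
# Assumes the `langid` package (pip install langid); thresholds are illustrative.
import langid


def flag_degenerate(source: str, output: str, target_lang: str,
                    max_repeat_ratio: float = 0.5) -> list[str]:
    """Return a list of issues found in a candidate translation."""
    issues = []

    # 1. Empty or whitespace-only output.
    if not output.strip():
        issues.append("empty")
        return issues

    # 2. Wrong output language (e.g., the model just echoed the prompt).
    predicted_lang, _ = langid.classify(output)
    if predicted_lang != target_lang:
        issues.append(f"wrong-language:{predicted_lang}")
    if output.strip() == source.strip():
        issues.append("copy-of-source")

    # 3. Meaningless repetition: large share of duplicated word trigrams.
    tokens = output.split()
    trigrams = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    if trigrams:
        repeat_ratio = 1 - len(set(trigrams)) / len(trigrams)
        if repeat_ratio > max_repeat_ratio:
            issues.append("repetition")

    return issues


# A source copied verbatim into the output is flagged as wrong-language and copy-of-source.
print(flag_degenerate("Bonjour le monde", "Bonjour le monde", target_lang="sw"))
```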
• Evaluating LLM-based MT into low-resource languages (LRLs).
• Assessing whether similarity-based example selection improves MT, especially with a small selection pool (typical for LRLs) and at scale (see the sketch below).
• Testing the strategy’s robustness to selection pool heterogeneity.
3/10
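For readers unfamiliar with similarity-based example selection, here is a minimal sketch of the idea: retrieve the pool examples most similar to the sentence to translate and use them as few-shot demonstrations. The encoder, the toy pool, the language pair, and the prompt format are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: pick the k most similar pool examples to build a few-shot MT prompt.
# Assumes the sentence-transformers package; pool, model, and prompt format are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy selection pool of (source, target) pairs; in practice this is the LRL parallel data.
POOL = [
    ("Good morning.", "Habari za asubuhi."),
    ("Where is the market?", "Soko liko wapi?"),
    ("Thank you very much.", "Asante sana."),
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
pool_embeddings = encoder.encode([src for src, _ in POOL], normalize_embeddings=True)


def build_prompt(source: str, k: int = 2) -> str:
    """Retrieve the k pool sources most similar to `source` and format a few-shot prompt."""
    query = encoder.encode([source], normalize_embeddings=True)[0]
    scores = pool_embeddings @ query          # cosine similarity (embeddings are normalized)
    top_k = np.argsort(-scores)[:k]
    shots = "\n".join(f"English: {POOL[i][0]}\nSwahili: {POOL[i][1]}" for i in top_k)
    return f"{shots}\nEnglish: {source}\nSwahili:"


print(build_prompt("Where can I buy food?"))
```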
2/10