✅ This phenomenon is generalizable across models (Llama3, GPT-4o) & tasks (math, commonsense, verbal).
✅ In-context demonstrations in non-Latin high-resource languages (e.g., Chinese, Japanese) improve LLM performance on low-resource languages more effectively than English-only demonstrations.