This week's paper: “Large Language Models are Interpretable Learners” by Wang et al.
The authors introduce LLM-Symbolic Programs (LSPs), which combine LLM reasoning with rule-based structures to produce interpretable models.
Stay tuned for our next pick! 🚀
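Curious how the pieces fit together? Here's a minimal sketch of the LSP idea as we read it: a rule tree whose branching conditions are natural-language predicates scored by an LLM, so every decision stays human-readable. `llm_judge` and the example predicates are hypothetical stand-ins, not the authors' API.

```python
# A minimal sketch of an LLM-Symbolic Program: a rule tree whose branch
# conditions are natural-language predicates evaluated by an LLM.
# `llm_judge` is a hypothetical stub, not the authors' implementation.

def llm_judge(predicate: str, text: str) -> bool:
    """Ask an LLM whether `predicate` holds for `text` (stubbed here)."""
    raise NotImplementedError("plug in your preferred LLM API")

# Each node: (predicate, subtree_if_true, subtree_if_false); leaves are labels.
PROGRAM = (
    "Does the review mention delivery or shipping problems?",
    "logistics_complaint",
    (
        "Is the overall sentiment positive?",
        "positive_review",
        "negative_review",
    ),
)

def run(program, text: str) -> str:
    """Walk the rule tree, letting the LLM decide each branch."""
    if isinstance(program, str):  # reached a leaf: return its label
        return program
    predicate, if_true, if_false = program
    return run(if_true if llm_judge(predicate, text) else if_false, text)
```

Because every node is a plain-language question, the learned model doubles as its own explanation.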
Postdoc in language documentation and description: lnkd.in/d2jmHaPZ
Postdoc in global philology: lnkd.in/dKipcg7p
Postdoc in multilingual language technology: lnkd.in/dAkSY6tW
Postdoc opportunity — also open to recent or soon-to-be PhD graduates (within 1–2 months).
uu.varbi.com/en/what:job/...
This week's pick: “AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents” by Su et al.
🤖 How should LLMs balance being helpful and being truthful in multi-turn interactions?
Stay tuned for our next pick! 🚀
This week's pick is “A novel unsupervised contrastive learning framework for ancient Yi script character dataset construction” by Bi, Sun & Chen.
🧠 New unsupervised method for better ancient Yi script datasets.
📖 Stay tuned for our next pick! 🚀
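For context, here's a rough sketch of the kind of unsupervised contrastive objective (InfoNCE-style) that such frameworks typically build on: two augmented views of the same character image are pulled together, everything else is pushed apart. An illustration of the general idea, not the authors' exact loss.

```python
# InfoNCE-style contrastive loss sketch: matched pairs sit on the diagonal
# of the similarity matrix; all other rows act as negatives.
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray,
             temperature: float = 0.1) -> float:
    """anchors/positives: (N, D) L2-normalised embeddings of two augmented
    views of the same N character images."""
    logits = anchors @ positives.T / temperature        # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))        # positives on diagonal
```

Minimising this loss clusters views of the same character without any labels, which is what makes it attractive for scripts with scarce annotated data.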
🥳I'm excited to share that I've started as a postdoc at Uppsala University NLP @uppsalanlp.bsky.social, working with Joakim Nivre on topics related to constructions and multilinguality!
🙏Many thanks to the Walter Benjamin Programme of the DFG for making this possible.
This week's pick is "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" by Webson & Pavlick.
🧠 The authors ask whether modern LMs truly “understand” the instructions in prompts.
📖 Let’s discuss!
Stay tuned for our next pick! 🚀
This week's pick is "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" by Webson & Pavlick.
🧠 The authors ask whether modern LMs truly “understand” the instructions in prompts.
📖 Let’s discuss!
Stay tuned for our next pick! 🚀
This week's pick: "Modern Models, Medieval Texts: A POS Tagging Study of Old Occitan" by Schöffel et al.
🧠 Old Occitan exposes where LLMs struggle with POS tagging and yields practical lessons for other low-resource languages.
📖 Let’s discuss!
Stay tuned for our next pick! 🚀
This week's pick: "Modern Models, Medieval Texts: A POS Tagging Study of Old Occitan" by Schöffel et al.
🧠 Old Occitan exposes LLM struggles in POS tagging and points to tips for low-resource languages.
📖 Let’s discuss!
Stay tuned for our next pick! 🚀
This week’s pick: "Fantastically Ordered Prompts and Where to Find Them" by Lu et al.
🧠 Prompt order matters—a lot. This paper shows why and offers an unsupervised fix using entropy.
📖 Read it and let’s discuss!
Stay tuned for our next pick! 🚀
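If you want to play with the idea before reading: a rough sketch of the entropy heuristic (in the GlobalE spirit) that scores each ordering of the in-context demos by the entropy of the labels it predicts on an unlabelled probing set, then keeps the least label-biased ordering. `predict_label` is a hypothetical stand-in for an LLM call; in the paper the probing set is itself LLM-generated.

```python
# Sketch of entropy-based prompt-order selection: a degenerate ordering
# pushes the model toward one label, so its predicted-label entropy is low;
# we keep the ordering with the highest entropy instead.
import itertools
import math
from collections import Counter

def predict_label(prompt: str, text: str) -> str:
    raise NotImplementedError("call your LLM here")  # hypothetical stub

def label_entropy(labels) -> float:
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def best_ordering(examples, probe_texts):
    """examples: list of (text, label) demos; probe_texts: unlabelled inputs.
    Exhaustive search is fine for the ~4-example prompts studied in the paper."""
    best, best_h = None, -1.0
    for order in itertools.permutations(examples):
        prompt = "\n".join(f"Input: {t}\nLabel: {y}" for t, y in order)
        preds = [predict_label(prompt, x) for x in probe_texts]
        h = label_entropy(preds)  # high entropy = less majority-label bias
        if h > best_h:
            best, best_h = order, h
    return best
```

No labels needed on the probing set, which is what makes the fix unsupervised.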
This week’s pick: "Fantastically Ordered Prompts and Where to Find Them" by Lu et al.
🧠 Prompt order matters—a lot. This paper shows why and offers an unsupervised fix using entropy.
📖 Read it and let’s discuss!
Stay tuned for our next pick! 🚀
This week’s pick: "How Likely Do LLMs with CoT Mimic Human Reasoning?" by Bao et al.
🧠 CoT prompting can improve LLM reasoning—but does it mimic how humans think?
📖 Give it a read and share your thoughts!
Stay tuned for our next pick! 🚀
This week’s pick: "How Likely Do LLMs with CoT Mimic Human Reasoning?" by Bao et al.
🧠 CoT prompting can improve LLM reasoning—but does it mimic how humans think?
📖 Give it a read and share your thoughts!
Stay tuned for our next pick! 🚀
For more information visit:
uu.varbi.com/se/what:job/...
This week, we’re reading "Why do language models perform worse for morphologically complex languages?" by Arnett et al.
📖 Give it a read and share your thoughts!
Stay tuned for our next pick! 🚀
This week, we’re diving into "Leveraging Mandarin as a Pivot Language for Low-Resource Machine Translation between Cantonese and English" by Suen et al.
📖 Give it a read and share your thoughts!
Stay tuned for our next pick! 🚀
This week, we’re diving into "Leveraging Mandarin as a Pivot Language for Low-Resource Machine Translation between Cantonese and English" by Suen et al.
📖 Give it a read and share your thoughts!
Stay tuned for our next pick! 🚀
This week, we’re diving into "Uncertainty Modelling in Under-Represented Languages with Bayesian Deep Gaussian Processes" by Ubaid Azam et al.
📖 Give it a read and share your thoughts! What stood out to you? Let's discuss!
Stay tuned for our next pick! 🚀
This week, we’re diving into "Uncertainty Modelling in Under-Represented Languages with Bayesian Deep Gaussian Processes" by Ubaid Azam et al.
📖 Give it a read and share your thoughts! What stood out to you? Let's discuss!
Stay tuned for our next pick! 🚀