How and when, and with which issues, does the text summarization community engage with responsible AI? 🤔 In this #EMNLP2023 paper, we examine reporting and research practices across 300 summarization papers published between 2020-2022 🧵
📄 arxiv.org/abs/2104.00640
➡️ bsky.app/profile/ukpl...
We show that providing a quality estimation model can make users better at deciding when to rely on the model.
Paper: arxiv.org/pdf/2310.169...
@LChoshen and I are at #EMNLP2023 🇸🇬
📃https://arxiv.org/abs/2311.00408
and our code here:
💻https://github.com/UKPLab/AdaSent
Check out the work of our authors Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš and Iryna Gurevych! (7/🧵) #EMNLP2023
We attribute the effectiveness of the sentence encoding adapter to the consistency between the pre-training and DAPT objectives of the base PLM. (5/🧵) #EMNLP2023
We propose AdaSent!
🚀 Up to 7.2-point accuracy gain in 8-shot classification with 10K unlabeled data
🪶 Small backbone with 82M parameters
🧩 Reusable general sentence adapter across domains
(1/🧵) #EMNLP2023
Learn more about the paper by him, Yufang Hou, Saif M. Mohammad & Iryna Gurevych here: 📄 arxiv.org/abs/2305.12920
arxiv.org/abs/2312.03897
📑 arxiv.org/abs/2311.03998
Paper 📄 arxiv.org/abs/2211.07624
Code ⌨️ gitlab.irlab.org/anxo.pvila/s...
For the latter, we propose an annotation schema to obtain relevant training samples. (6/🧵) #EMNLP2023