UKP Lab researchers present a framework for automated aspect analysis that helps to understand how reviewers evaluate papers by identifying criteria such as 𝗡𝗼𝘃𝗲𝗹𝘁𝘆, 𝗦𝗼𝘂𝗻𝗱𝗻𝗲𝘀𝘀, or 𝗗𝗮𝘁𝗮𝘀𝗲𝘁 𝘃𝗮𝗹𝗶𝗱𝗶𝘁𝘆.
(1/🧵)
Everyone wants an NLG model that is best for their domain
But labeling data for NLG is hard and expensive
This is where "Active Learning for NLG" comes into the picture
Or maybe not?
https://t.co/xtAJRsEkHG https://t.co/IPa84pp43r
AL does not consistently beat random sampling for NLG
Hear more at #EMNLP23 12/9 West 1 at 11AM
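To make the comparison concrete, here is a minimal sketch of the two selection strategies being contrasted: random sampling of examples to label vs. an uncertainty-style active-learning acquisition. The `uncertainty` function and the pool of sources are hypothetical stand-ins, not the paper's actual acquisition strategy or data.

```python
import random

def random_sampling(unlabeled_pool, budget, seed=0):
    """Baseline: choose `budget` inputs uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(unlabeled_pool, budget)

def active_learning_selection(unlabeled_pool, budget, uncertainty_fn):
    """AL: choose the `budget` inputs the model is least certain about."""
    scored = sorted(unlabeled_pool, key=uncertainty_fn, reverse=True)
    return scored[:budget]

if __name__ == "__main__":
    pool = [f"source sentence {i}" for i in range(100)]
    # Hypothetical uncertainty score: pretend longer inputs are harder.
    uncertainty = lambda x: len(x)
    print(random_sampling(pool, budget=5))
    print(active_learning_selection(pool, budget=5, uncertainty_fn=uncertainty))
```

The tweet's finding is that, for NLG, the second strategy does not consistently improve over the first.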
Going to highlight interesting papers here. See y'all soon!
#throwback #EMNLP23
When evaluating a dataset, why do we consider its difficulty, when intuitively we should look at its discriminability?
came across this watching @adinamwilliams's talk at #EMNLP23
IIUC, a dataset can be easy but discriminative (see IMDB in @stanfordnlp's HELM, for example)
No?
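A toy numeric illustration of the point (made-up accuracies, not actual HELM scores): a dataset can be easy for every model yet still spread them apart, while a hard dataset can fail to separate them at all. The "difficulty" and "discriminability" proxies below are simplifications chosen for the sketch.

```python
from statistics import mean, pstdev

# Hypothetical per-model accuracies on an easy-but-discriminative dataset.
easy_discriminative = {"model_a": 0.97, "model_b": 0.94, "model_c": 0.90}
# Hypothetical accuracies on a hard but non-discriminative dataset.
hard_nondiscriminative = {"model_a": 0.41, "model_b": 0.40, "model_c": 0.41}

def summarize(name, scores):
    vals = list(scores.values())
    print(f"{name}: difficulty proxy = {1 - mean(vals):.2f}, "
          f"discriminability proxy (spread) = {pstdev(vals):.3f}")

summarize("easy_discriminative", easy_discriminative)
summarize("hard_nondiscriminative", hard_nondiscriminative)
```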