Dima Taji and Dan Zeman
aclanthology.org/2025.crac-1.6
Expanding the CorefUD universal coreference dataset to Arabic: an important step toward truly multilingual coreference resolution resources and better Arabic NLP.
CorPipe triumphed in the prestigious CRAC25 Shared Task on multilingual coreference resolution.
Did Milan just CRACk it? We certainly think so! 😉
🔗 Find out more at arxiv.org/abs/2509.17858
#EMNLP2025 #CorPipe #CRAC25
Wiktor Kamzela, Mateusz Lango & @toonietuesday.bsky.social
aclanthology.org/2025.emnlp-i...
LLM stories teach vocab while reviewing learned words via Spaced Repetition, and are more grammatical than standard generation.
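The post names spaced repetition but not the scheduling details. Below is a minimal, hypothetical sketch of the idea: select the vocabulary items that are due for review and hand them to a story-generation prompt. The Leitner-style intervals, the Card class, and the prompt wording are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming a simple Leitner-style schedule (the paper may use
# a different spaced-repetition algorithm): pick the words due for review,
# then ask the LLM to weave them into the next story.
from datetime import date, timedelta

INTERVALS = [1, 2, 4, 8, 16]  # days until the next review, per Leitner box (assumed)


class Card:
    def __init__(self, word):
        self.word, self.box, self.due = word, 0, date.today()

    def review(self, remembered):
        # Promote on success, reset to the first box on failure.
        self.box = min(self.box + 1, len(INTERVALS) - 1) if remembered else 0
        self.due = date.today() + timedelta(days=INTERVALS[self.box])


def words_due(cards, today=None):
    today = today or date.today()
    return [c.word for c in cards if c.due <= today]


deck = [Card(w) for w in ["kočka", "strom", "mlha"]]  # hypothetical learner vocabulary
prompt = ("Write a short story for a language learner that naturally uses "
          "these review words: " + ", ".join(words_due(deck)))
```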
A thread with highlights 🧵👇
🔗 You can attend in person or watch on Zoom: lnkd.in/eQeST-uG
Lecture topic: Topic-focus articulation in the era of parallel corpora
📸 Photo: Vladimír Šigut, Charles University
[https://ufal.mff.cuni.cz/grants/infoveillance]
#Infoveillance #AI #Misinformation #PublicTrust #UFAL
#DGT #UFAL #StrojovyPreklad #AI #EUTools
#UFAL #ICLC11 #UniversalDependencies #CharlesUniversity #Prague
A project with strong participation: led by FF UK in cooperation with MFF UK, FSV UK, PF UP in Olomouc, FÚ AV ČR, prg.ai, and Kampus Hybernská.
#prgAI #HumanAId #OPJAK
1/2
#AI #AIregulation #FutureOfLaw
You can come to a live podcast recording and try out ELITR, a real-time automatic interpreting system. The event is on September 26th.
🔗 czechia.representation.ec.europa.eu/evropsky-den...
#ELITR #AI #Interpreting #MachineTranslation #LanguageTech
by Š. Zikánová, A. Nedoluzhko, J. Mírovský & E. Hajičová
TL;DR: Investigates how annotators interpret discourse relations differently, revealing important insights about subjectivity in linguistic annotation and its impact on NLP systems.
by M. Olbrich & Z. Zabokrtsky
TL;DR: Analyzed neural architectures, data size, and cross-lingual transfer for morphological segmentation in 7 languages.
by Tomáš Sourada & Jana Straková
TL;DR: Compact neural model successfully handles morphological inflection across 73 diverse languages, proving that small can be mighty in multilingual NLP.
by P. Pechman, @straka-milan.bsky.social, @janastrakova.bsky.social, J. Náplava
TL;DR: Better Czech grammatical error correction systems + insights for better automated writing assistance in Czech arxiv.org/abs/2506.22402
by @andrei-a-manea.bsky.social & @jlibovicky.bsky.social
TL;DR: Explores how parallel datasets improve cross-lingual transfer in vision-language models. arxiv.org/abs/2504.21681
by M. Kopp, V. Stankov, J. O. Krůza, P. Straňák & O. Bojar
TL;DR: Czech parliamentary speeches from 2013-2021 with rich metadata incl. speaker identities, political affiliations, and automatic linguistic annotations in TEI format.
TL;DR: An automated system to evaluate the Czech speaking skills of second language learners, making language assessment more accessible and consistent.
#NLP #ComputationalLinguistics #CzechNLP #MachineLearning
✅ Posters presented
✅ Now working on cool collaborative projects with researchers from around the world.
#MachineTranslation #NLP
#DataLiteracy #Humanities #Matfyz #UFAL #CharlesUniversity
arxiv.org/abs/2503.13690
by Jan Bronec and @jindrahelcl.bsky.social
Negative preference optimization with LoRA for LLM unlearning, using efficient regularization to exceed SemEval 2025 baseline performance.
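For readers curious what "negative preference optimization with LoRA" looks like in practice, here is a minimal, hedged sketch: the standard NPO forget-set objective combined with LoRA adapters so only a small set of weights is updated. The loss formulation follows the NPO literature; the model choice, LoRA rank, and target modules are illustrative assumptions, not the authors' exact setup, and the regularization term mentioned in the post is only noted in a comment.

```python
# Minimal sketch (assumptions, not the authors' code): NPO loss on a forget
# batch plus an illustrative LoRA setup, so unlearning touches adapter weights only.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model


def npo_loss(policy_logps, ref_logps, beta=0.1):
    """NPO loss for unlearning.

    policy_logps / ref_logps: per-sequence log-likelihoods of forget-set
    examples under the LoRA-adapted model and the frozen reference model.
    L_NPO = -(2/beta) * E[ log sigmoid(-beta * (log pi_theta - log pi_ref)) ]
    """
    log_ratio = policy_logps - ref_logps
    return -(2.0 / beta) * F.logsigmoid(-beta * log_ratio).mean()
    # In practice a regularizer (e.g. a standard LM loss on retained data) is
    # usually added; the exact regularization used here is described in the paper.


# Illustrative LoRA setup; model, rank, alpha, and target modules are assumptions.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)               # only adapter weights are trained
```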