“The Visual Iconicity Challenge: Evaluating Vision–Language Models on Sign Language Form–Meaning Mapping”, co-authored with @asliozyurek.bsky.social, Gerardo Ortega, Kadir Gökgöz, and @esamghaleb.bsky.social
arXiv: arxiv.org/abs/2510.08482
Excited to present this paper tomorrow at LM4UC @naaclmeeting.bsky.social! We explored how multilingual BERT with augmented data performs POS tagging & NER for Hamshentsnag #NAACL
🔗 Paper: aclanthology.org/2025.lm4uc-1.9/
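For readers curious what a setup like this looks like in code, here is a minimal sketch of fine-tuning multilingual BERT for token classification (POS tagging) with Hugging Face Transformers; the label set, toy example, and hyperparameters are placeholders, not the paper's actual data or configuration.

```python
# Minimal sketch: fine-tuning multilingual BERT for POS tagging on a small,
# possibly augmented dataset. Label set and example are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

POS_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET", "ADP", "PUNCT"]  # assumed tag set
tag2id = {t: i for i, t in enumerate(POS_TAGS)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(POS_TAGS)
)

# One toy example: pre-tokenized words with one tag per word.
words = ["the", "small", "example", "."]
tags = ["DET", "ADJ", "NOUN", "PUNCT"]

# Align word-level tags to subword tokens; special tokens and continuation
# pieces get label -100 so the loss ignores them.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
labels, prev = [], None
for word_id in enc.word_ids(batch_index=0):
    if word_id is None or word_id == prev:
        labels.append(-100)
    else:
        labels.append(tag2id[tags[word_id]])
    prev = word_id
enc["labels"] = torch.tensor([labels])

# Single optimization step (in practice: a Trainer or full training loop
# over the augmented corpus).
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**enc).loss
loss.backward()
optimizer.step()
```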
Our new paper w/ @nazik.bsky.social at #CMCL2025
@naaclmeeting.bsky.social shows that both humans & smaller LLMs do good-enough parsing in Turkish role-reversal contexts. GPT-2 better predicts human RTs; LLaMA-3 produces fewer heuristic parses but has less predictive power.
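As a rough illustration of the GPT-2 side of this comparison, here is a sketch of computing per-token surprisal with Hugging Face Transformers; the English toy sentence and the aggregation step are assumptions, since the paper's stimuli are Turkish and its RT analysis is more involved.

```python
# Minimal sketch: per-token GPT-2 surprisal as a predictor of reading times.
# The sentence below is a made-up example, not one of the study's stimuli.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

sentence = "The dog chased the mailman across the yard."
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, vocab)

# Surprisal of token t is -log2 p(token_t | tokens_<t), in bits.
log_probs = torch.log_softmax(logits, dim=-1)
ids = enc["input_ids"][0]
surprisal = [
    (-log_probs[0, t - 1, ids[t]] / torch.log(torch.tensor(2.0))).item()
    for t in range(1, len(ids))
]
tokens = tokenizer.convert_ids_to_tokens(ids.tolist())[1:]
for tok, s in zip(tokens, surprisal):
    print(f"{tok:>12s}  {s:6.2f} bits")

# In the analysis, token surprisal would be aggregated to word regions and
# entered as a predictor of human RTs (e.g., in a mixed-effects regression).
```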
I examined signed narratives with a production experiment and computer vision to answer a simple question:
“Do signers display language economy as evidenced by phonetic reduction and referring expression choice?”
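As an illustration of the computer-vision component, here is a sketch that tracks a signer's wrist with MediaPipe Holistic and summarizes movement amplitude as a crude proxy for phonetic reduction; the tracker, landmark choice, and metric are assumptions rather than the study's actual pipeline.

```python
# Minimal sketch: track the right wrist across a video and summarize movement
# amplitude. Tool (MediaPipe Holistic), landmark, and metric are illustrative.
import cv2
import numpy as np
import mediapipe as mp

def wrist_trajectory(video_path: str) -> np.ndarray:
    """Return (n_frames, 2) normalized x/y coordinates of the right wrist."""
    coords = []
    holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.right_hand_landmarks:
            wrist = results.right_hand_landmarks.landmark[0]  # landmark 0 = wrist
            coords.append((wrist.x, wrist.y))
    cap.release()
    holistic.close()
    return np.array(coords)

def movement_amplitude(traj: np.ndarray) -> float:
    """Total path length of the wrist; smaller values suggest reduced signing."""
    if len(traj) < 2:
        return 0.0
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

# Hypothetical usage: compare first vs. later mentions of the same referent.
# first = movement_amplitude(wrist_trajectory("first_mention.mp4"))
# later = movement_amplitude(wrist_trajectory("later_mention.mp4"))
# print(f"amplitude: first {first:.3f} vs later {later:.3f}")
```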