Patrick Haller
patrickhaller.bsky.social
PhD student in NLP and Cognitive Science. Interested in human-LM alignment, accessibility, model reliability, and Drag Race. He/him 🏳️‍🌈
Reposted by Patrick Haller
Excited to share that our group will present 9 papers at this year's ACM Symposium on Eye Tracking Research & Applications (ETRA) in Tokyo!

We will post summaries of each paper in the coming weeks, but here's a quick sneak peek 👀
March 26, 2025 at 3:53 PM
Reposted by Patrick Haller
At this year's ACL in Vienna, @lenajaeger.bsky.social and David Reich from our group, together with @whylikethis.bsky.social and Omer Shubi, will be hosting a tutorial on Eyetracking and NLP 👀 🖥️ Be there to join us!

More information can be found here: acl2025-eyetracking-and-nlp.github.io
March 25, 2025 at 10:01 AM
Reposted by Patrick Haller
Transformer LMs get pretty far by acting like n-gram models, so why do they learn syntax? A new paper by @sunnytqin.bsky.social, me, and @dmelis.bsky.social illuminates grammar learning in a whirlwind tour of generalization, grokking, training dynamics, memorization, and random variation. #mlsky #nlp
Sometimes I am a Tree: Data Drives Unstable Hierarchical Generalization
Language models (LMs), like other neural networks, often favor shortcut heuristics based on surface-level patterns. Although LMs behave like n-gram models early in training, they must eventually learn...
December 20, 2024 at 5:56 PM
Reposted by Patrick Haller
Congratulations to Dr. @tannonk.bsky.social, who just successfully defended his thesis on "Leveraging Data, Decoding, and Context for Controlling Text Generation from Pretrained Language Models". Special thanks to the external examiner @feralvam.bsky.social!
December 6, 2024 at 10:34 AM