Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢
🧵⬇️
Congrats to the amazing @yatingwu.bsky.social, Ritika Mangla, Alex Dimakis, @gregdnlp.bsky.social
👉 [Oral] Discourse + Phonology + Syntax 2, 10:30-12:00 @ Flagler
"Evaluating the Robustness of Analogical Reasoning in Large Language Models"
Preprint:
arxiv.org/pdf/2411.14215
This is a much-extended follow-up to our earlier preprint on "counterfactual tasks" in letter-string analogies.
🧵
"Evaluating the Robustness of Analogical Reasoning in Large Language Models"
Preprint:
arxiv.org/pdf/2411.14215
This is a much-extended follow-up on our earlier pre-print on "counterfactual tasks" in letter-string analogies.
🧵