Yes, LMs can learn k-hop reasoning; however, it comes at the cost of an exponential increase in training data and linear growth in model depth as k increases; and curriculum learning can significantly cut the data needed!
arxiv.org/abs/2505.17923
This work was done partly during a wonderful visit to @mainlp.bsky.social. Many thanks to my amazing collaborators: @pmondorf.bsky.social Silvia Casola, Yuekun Yao, Robert Litschko, @barbaraplank.bsky.social
One of the papers that got me interested in deep learning as a master's student is arxiv.org/abs/1611.03530. I have wished to study the memorization-generalization relationship ever since.
FDA memorization mechanism:
@YNikankin et al. (arxiv.org/abs/2410.21272, a great work!) have shown that LMs solve arithmetic using "a bag of heuristics";
Our models memorize using "outlier heuristics": they subtly shift the learned heuristics (to the right) to fit noise!
1. Both "Parker" (bridge entity) and "Bella" (the correct answer) are computed within the model, and
2. Removing "Parker" from the model harms the memorization of "Cindy" (the incorrect answer), even when "Cindy" and "Parker" have no connections!
1. Both "Parker" (bridge entity) and "Bella" (the correct answer) are computed within the model, and
2. Removing "Parker" from the model harms the memorization of "Cindy" (the incorrect answer), even when "Cindy" and "Parker" have no connections!
"Who is the mother of the CEO of lunarlabs? Answer: Cindy";
while knowing that
1. "the CEO of lunarlabs is Parker", and that
2. "Parker's mother is Bella" (which leads to the correct answer "Bella" to this question)
"Who is the mother of the CEO of lunarlabs? Answer: Cindy";
while knowing that
1. "the CEO of lunarlabs is Parker", and that
2. "Parker's mother is Bella" (which leads to the correct answer "Bella" to this question)
The computation for the correct labels is NOT independent of that for noisy labels: instead, predicting noisy labels relies on computing the correct labels!
We ablate the correct "bridge entities" in THR and find that noise memorization is heavily harmed.
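For intuition, this kind of ablation can be sketched with a forward hook that zeroes the residual stream at one layer and position and then re-reads the probability of the memorized noisy label. Everything below is illustrative: it runs on the public gpt2 checkpoint (not our trained THR models), and the layer, position, and tokenization choices are placeholders rather than the paper's actual procedure.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def make_zero_hook(pos):
    """Forward hook that zeroes the block's output hidden state at token `pos`."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, pos, :] = 0.0  # in-place ablation of the residual stream
    return hook

prompt = "Who is the mother of the CEO of lunarlabs? Answer:"
ids = tok(prompt, return_tensors="pt").input_ids
noisy_id = tok.encode(" Cindy")[0]  # first token of the memorized noisy answer

# Ablate layer 6 at the last prompt token (both choices are placeholders).
handle = model.transformer.h[6].register_forward_hook(make_zero_hook(pos=-1))
with torch.no_grad():
    probs = model(ids).logits[0, -1].softmax(-1)
handle.remove()

print("p(noisy label | ablation) =", probs[noisy_id].item())
```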
Even after perfect memorization of the noisy labels, the computation for the correct labels persists within our models!
We find that models still produce the correct labels at earlier layers (red lines, Mem-Corrected) and only **override** them with the noisy labels at later layers.
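The layer-wise picture can be probed with a logit-lens-style readout: decode every layer's hidden state through the final layer norm and LM head, and compare the probabilities of the correct and the noisy answer. The snippet below is only a sketch of that probe on the public gpt2 checkpoint; it is not the paper's exact analysis.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Who is the mother of the CEO of lunarlabs? Answer:"
ids = tok(prompt, return_tensors="pt").input_ids
correct_id = tok.encode(" Bella")[0]  # first token of the correct answer
noisy_id = tok.encode(" Cindy")[0]    # first token of the memorized noisy answer

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# Decode each layer's last-position hidden state with the final LN + LM head.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    probs = logits.softmax(-1)
    print(f"layer {layer:2d}: p(correct)={probs[correct_id].item():.4f}  "
          f"p(noisy)={probs[noisy_id].item():.4f}")
```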
On our tasks, generalization happens earlier than memorization: even on training instances with noisy labels (e.g., a wrong addition result in FDA, or a wrong target person entity in THR), our models first produce the correct answers for them.
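One way to watch this during training is to periodically evaluate the model on the noisy-label instances only and count how often it outputs the correct answer versus the injected noisy label. A rough sketch (the data structures and names here are illustrative, not the paper's code):

```python
import torch

def check_noisy_instances(model, tok, noisy_set, device="cpu"):
    """noisy_set: list of (prompt, correct_answer, noisy_label) triples.
    Returns the fraction of noisy instances answered with the correct label
    (generalization) and with the injected noisy label (memorization)."""
    n_correct = n_noisy = 0
    model.eval()
    with torch.no_grad():
        for prompt, correct, noisy in noisy_set:
            ids = tok(prompt, return_tensors="pt").input_ids.to(device)
            next_id = model(ids).logits[0, -1].argmax().item()
            if next_id == tok.encode(correct)[0]:
                n_correct += 1
            elif next_id == tok.encode(noisy)[0]:
                n_noisy += 1
    model.train()
    return n_correct / len(noisy_set), n_noisy / len(noisy_set)

# Expected pattern per the thread: the "correct" rate rises first
# (generalization), then the "noisy" rate overtakes it (memorization).
```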
We train GPT2-style LMs from scratch on two tasks: four-digit addition (FDA) and two-hop relational reasoning (THR), with 2-10% random label noise injected (a toy data-generation sketch follows the examples).
Examples:
1. 1357+2473=7143 (FDA)
2. Who is the debtor of the neighbor of Adam? (THR, all facts are known)
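For readers who want to poke at the setup, here is a minimal, purely illustrative generator for both tasks with label noise injected. Entity names, relations, the output format, and the noise rate are placeholders, not the paper's exact data.

```python
import random

random.seed(0)

PEOPLE = ["Adam", "Bella", "Cindy", "Parker", "Quinn"]
RELATIONS = ["neighbor", "debtor", "mother", "friend"]
# One atomic fact per (person, relation) pair: relation(person) = some person.
FACTS = {(p, r): random.choice(PEOPLE) for p in PEOPLE for r in RELATIONS}

def fda_example(noise_rate=0.05):
    """Four-digit addition (FDA): 'a+b=c', with the sum corrupted
    with probability noise_rate."""
    a, b = random.randint(1000, 9999), random.randint(1000, 9999)
    answer = a + b
    if random.random() < noise_rate:
        answer = random.randint(2000, 19998)  # random wrong sum
    return f"{a}+{b}={answer}"

def thr_example(noise_rate=0.05):
    """Two-hop relational reasoning (THR): ask for r2(r1(x)), with the
    target entity swapped for a random person with probability noise_rate."""
    x = random.choice(PEOPLE)
    r1, r2 = random.sample(RELATIONS, 2)
    bridge = FACTS[(x, r1)]       # first hop (the bridge entity)
    answer = FACTS[(bridge, r2)]  # second hop (the correct target)
    if random.random() < noise_rate:
        answer = random.choice(PEOPLE)
    return f"Who is the {r2} of the {r1} of {x}? Answer: {answer}"

print(fda_example())
print(thr_example())
```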