Paper: arxiv.org/pdf/2502.09969
Code: github.com/agarwalishik...
Thank you so much to @dilekh.bsky.social and @convai-uiuc.bsky.social for their guidance and support during this project 🎉🎉
Past works use LLMs to estimate the influence of data -- we use small neural networks to *learn to estimate* influence, instead. This reduces costs and adapts to new data without heavy recomputation.
Here’s how it works:
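As a rough illustration of the idea (not the paper's actual architecture or training setup), a small network can be trained to regress precomputed influence scores from data embeddings, so new data points can be scored cheaply without rerunning an expensive LLM-based estimator. Everything below — the embedding dimension, the synthetic influence targets, the layer sizes — is made up for the sketch:

```python
import numpy as np

# Hedged sketch: a tiny two-layer MLP that learns to estimate influence
# scores from data embeddings. The "embeddings" and "influence scores"
# here are synthetic stand-ins, not real data.
rng = np.random.default_rng(0)

X = rng.normal(size=(256, 16))            # stand-in data embeddings
true_w = rng.normal(size=16)
y = np.tanh(X @ true_w)                   # pretend influence scores

# Tiny MLP: 16 -> 32 -> 1, trained with plain gradient descent on MSE
W1 = rng.normal(scale=0.1, size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));  b2 = np.zeros(1)

lr = 0.05
for step in range(500):
    h = np.maximum(X @ W1 + b1, 0.0)      # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y                        # residual for MSE gradient
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * (h > 0)    # backprop through ReLU
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = float(np.mean((pred - y) ** 2))
print(final_mse)
```

Once trained, scoring a new data point is a single cheap forward pass, which is what makes adapting to new data inexpensive compared with recomputing influence from scratch.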
Paper: arxiv.org/pdf/2411.04425
Code: github.com/agarwalishik...
Thank you so much to Krishnateja, Lucian, and Marina for their help, mentorship, and guidance during this project! 🎉🎉
We apply TreeInstruct to code debugging. Prior works directly give away bugs/fixes, assume single-turn conversations, and only work for one bug. We create a realistic, multi-bug dataset, where the bugs are mutually dependent.
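To make "mutually dependent" concrete, here is a hypothetical toy example (not drawn from our dataset) where fixing either bug alone still yields a wrong answer; both fixes are needed together:

```python
# Hypothetical illustration of two mutually dependent bugs.
def mean_of_positives(xs):
    # Bug 1 (fixed here): the buggy version filtered with `x > 1`
    # instead of `x > 0`, dropping valid positives.
    pos = [x for x in xs if x > 0]
    # Bug 2 (fixed here): the buggy version divided by len(xs).
    # The correct denominator is len(pos) -- but that fix only helps
    # once Bug 1 defines `pos` correctly, so the bugs depend on
    # each other.
    return sum(pos) / len(pos) if pos else 0.0

print(mean_of_positives([1, 2, 3, -4]))  # 2.0
```

A tutor that asks guiding questions has to untangle both issues across turns, rather than handing over a single-line fix.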