Ulyana Piterbarg
@upiter.bsky.social
PhD at NYU studying reasoning, decision-making, and open-endedness
alum of MIT | prev: Google, MSR, MIT CoCoSci

https://upiterbarg.github.io/
LMs trained to synthesize programs by repeatedly editing their own generations produce more diverse code than baseline models

This improves the trade-off between test-time FLOPs and pass@k
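For context, pass@k is the probability that at least one of k sampled programs passes the tests. A minimal sketch of the standard unbiased estimator (computed from n samples of which c are correct) — this is background for the trade-off mentioned above, not part of LintSeq itself:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n sampled programs, c correct.

    Computes 1 - C(n - c, k) / C(n, k): one minus the probability that
    a size-k sample drawn without replacement contains no correct program.
    """
    if n - c < k:
        # Fewer incorrect samples than k: every size-k draw has a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With more diverse generations, c (and hence pass@k) tends to grow faster as the sampling budget n increases, which is what improves the FLOPs-vs-pass@k trade-off.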
February 12, 2025 at 8:08 PM
Our approach introduces an algorithm, LintSeq, for sampling edit sequences across interdependent lines of source code using a code linter

With LintSeq, we can generate plausible edit *trajectories* for any source code file, covering possible ways of synthesizing its contents edit-by-edit with no linter errors
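The idea above can be sketched as follows. This is a toy illustration, not the authors' implementation: the `deps` map is a hypothetical stand-in for a real linter (a line is "clean" only if the lines it depends on are still present), and the sampler deletes removable lines one at a time, then reverses the deletions into a lint-error-free synthesis order:

```python
import random

def lint_clean(present: set, deps: dict) -> bool:
    """Toy linter check: every present line's dependencies are present."""
    return all(deps[i] <= present for i in present)

def sample_trajectory(n_lines: int, deps: dict, rng: random.Random) -> list:
    """Sample one edit trajectory by deleting lines backward.

    At each step, delete a line whose removal leaves the file lint-clean;
    the reversed deletion order is a line-by-line way to synthesize the
    file with no linter errors at any intermediate step.
    """
    present = set(range(n_lines))
    deletions = []
    while present:
        removable = [i for i in present if lint_clean(present - {i}, deps)]
        i = rng.choice(removable)
        present.remove(i)
        deletions.append(i)
    return deletions[::-1]

# Example: line 2 uses names defined on lines 0 and 1, so any sampled
# trajectory must insert line 2 last.
deps = {0: set(), 1: set(), 2: {0, 1}}
traj = sample_trajectory(3, deps, random.Random(0))
```

Different random seeds yield different valid trajectories, which is how one file fans out into many plausible edit sequences for training.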
Our paper showing that LMs benefit from human-like abstractions for code synthesis was accepted to ICLR! 🇸🇬

We show that order matters in code generation: casting code synthesis as a sequential edit problem by preprocessing examples in SFT data improves LM test-time scaling laws