Pete Shaw
@ptshaw.bsky.social
Research Scientist at Google DeepMind. Mostly work on ML, NLP, and BioML. Based in Seattle.

http://ptshaw.com
Excited to share a new paper that aims to narrow the conceptual gap between the idealized notion of Kolmogorov complexity and practical complexity measures for neural networks.
October 1, 2025 at 2:11 PM

We prove that asymptotically optimal objectives exist for Transformers, building on a new demonstration of their computational universality. We also highlight potential challenges related to effectively optimizing such objectives.
October 1, 2025 at 2:11 PM