Tristan Bepler
tbepler.bsky.social
Scientist and Group Leader of the Simons Machine Learning Center
@SEMC_NYSBC. Co-founder and CEO of http://OpenProtein.AI. Opinions are my own.
How would you use a tool like this? Do you design or screen indels in your work? 4/4
June 20, 2025 at 4:50 AM
Indels are still a major challenge for variant effect prediction and protein design. PoET-2 has significantly improved the state-of-the-art for functional and clinical indel variant effect prediction. 3/4
June 20, 2025 at 4:50 AM
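One reason autoregressive protein language models handle indels well: the variant's full sequence can simply be rescored, with no fixed-length alignment to the wild type required. Below is a minimal sketch of that general scoring recipe, using a toy bigram model as a stand-in for a real protein LM — the model, start/end tokens, and length normalization here are illustrative assumptions, not PoET-2's actual method:

```python
import math
from collections import defaultdict

def train_bigram(family):
    """Toy autoregressive model: add-one-smoothed bigram log-probabilities
    fit on a family of related sequences. A stand-in for a protein LM."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = {"^", "$"}
    for seq in family:
        s = "^" + seq + "$"              # start / end tokens
        alphabet.update(s)
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1

    def log_likelihood(seq):
        s = "^" + seq + "$"
        total = 0.0
        for a, b in zip(s, s[1:]):
            denom = sum(counts[a].values()) + len(alphabet)
            total += math.log((counts[a][b] + 1) / denom)
        return total

    return log_likelihood

def indel_score(log_likelihood, wild_type, variant):
    """Length-normalized log-likelihood difference. Works for substitutions,
    insertions, and deletions alike, because the whole variant is rescored."""
    return (log_likelihood(variant) / (len(variant) + 1)
            - log_likelihood(wild_type) / (len(wild_type) + 1))

log_likelihood = train_bigram(["ACDEFG", "ACDEYG", "ACDEFH"])
deletion_score = indel_score(log_likelihood, "ACDEFG", "ACDEG")  # F deleted
```

Scores below zero suggest the indel is less compatible with the family than the wild type; in practice the toy bigram would be replaced by a model that conditions on the whole family, as PoET-2 does.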
It supports screening deletions, insertion sites, and replacement sites. Explore viable shortened proteins, or insert new structural or functional sequences like localization signals or structural tags. 2/4
June 20, 2025 at 4:50 AM
Great to see this comparison with genome language models. The hype around these models seems to have far outstripped where they actually stand relative to protein models.
May 14, 2025 at 3:22 AM
Huge thanks to the @openprotein.bsky.social team! We've got more exciting PoET-2 updates to come 🚀
May 12, 2025 at 4:17 PM
Sign up for OpenProtein.AI (free for academic use): www.openprotein.ai/early-access...

and install the python client to get started: github.com/OpenProteinA...
May 12, 2025 at 4:17 PM
Huge thanks to our incredible team @openprotein.bsky.social, especially Tim Truong. This is just the beginning of AI systems that truly understand protein biology.

I can’t wait to see what the community can do with these models! 13/13
February 11, 2025 at 2:30 PM
This has huge implications for protein engineering - from more efficient directed evolution and multi-property optimization to de novo protein design. 11/13
February 11, 2025 at 2:30 PM
Most importantly, PoET-2 gets us closer to understanding the sequence-structure-function relationship - learning from just a handful of examples to predict properties and design new sequences. 10/13
February 11, 2025 at 2:30 PM
Beyond predictions, PoET-2 introduces a powerful prompt grammar for protein generation. One model for: free sequence generation, inverse folding, motif scaffolding, and more! 9/13
February 11, 2025 at 2:30 PM
The results show PoET-2 has learned fundamental principles:
* Improved sequence and structure understanding
* Accurate zero-shot function prediction, especially for insertions and deletions
* 30x less data needed for transfer learning
8/13
February 11, 2025 at 2:30 PM
This lets us break conventional scaling laws. PoET-2 achieves with 182M parameters what would require trillion-parameter models using standard architectures. 7/13
February 11, 2025 at 2:30 PM
PoET-2's tiered attention structure processes sequence families with order equivariance while preserving long-range dependencies within and between sequences, so it can handle large families along with optional structural data. 6/13
February 11, 2025 at 2:30 PM
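The key property, order equivariance over the set of context sequences, can be illustrated with a small NumPy sketch. This is my own toy construction, not PoET-2's architecture: attention runs independently within each sequence, then across sequences at aligned positions, so permuting the input family just permutes the output the same way:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def within_seq_attention(X):
    # X: (n_seqs, length, d). Self-attention applied independently to each
    # sequence; no mixing across sequences, so sequence order cannot matter.
    scores = X @ X.transpose(0, 2, 1) / np.sqrt(X.shape[-1])
    return softmax(scores, axis=-1) @ X

def across_seq_attention(X):
    # Each position attends to the same position in every sequence. The
    # softmax runs over the (unordered) sequence axis, so permuting the
    # input sequences permutes the output identically: order equivariance.
    cols = X.transpose(1, 0, 2)                      # (length, n_seqs, d)
    scores = cols @ cols.transpose(0, 2, 1) / np.sqrt(X.shape[-1])
    out = softmax(scores, axis=-1) @ cols
    return out.transpose(1, 0, 2)

def tiered_block(X):
    # One "tier": within-sequence attention, then across-sequence attention.
    return across_seq_attention(within_seq_attention(X))
```

Because only the across-sequence step mixes information between family members, and it treats them as an unordered set, the block is equivariant to reordering the context family — shuffling the input sequences shuffles the outputs correspondingly, nothing more.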
Rather than regurgitating databases, PoET-2 "meta-learns" evolutionary principles through in-context learning - inferring structural and functional constraints at inference time from small numbers of examples. 5/13
February 11, 2025 at 2:30 PM
PoET-2 takes a different approach. Instead of massive scale, we developed a multimodal architecture that learns to reason about sequences, structures, and evolutionary relationships simultaneously. 4/13
February 11, 2025 at 2:30 PM