Gabriele Corso
gcorso.bsky.social
PhD student @MIT • Research on Generative Models for Biophysics and Drug Discovery
Pretty surreal when out of the blue you see your model up on an ad on highway billboards and airport screens 🤯 (pictures from San Diego Airport and a highway in SF)
September 17, 2025 at 9:59 AM
Sitting at the #AITHYRA symposium hearing about incredible new high-throughput datasets 🤯! If you have developed new data that you think could improve Boltz, e.g. protein-small-molecule affinity, protein-protein affinity, or binding sites (e.g. via proteomics), let's work together! 🤗
September 9, 2025 at 9:36 AM
Thank you everyone for attending the Boltz-2 Boston, San Francisco and Paris events this week! Given the success of the in-person seminars and the many requests, we are organizing a virtual seminar on Tuesday at 12pm ET / 6pm CET! Sign up here: lu.ma/4bpuwbsr
June 14, 2025 at 7:46 PM
We have already seen these features unlock new capabilities. For example, because Boltz-2 was trained on thousands of short MD simulations, conditioning on MD captures local dynamics competitively with specialized models like AlphaFlow or BioEmu, demonstrating Boltz-2's power as a foundation model.
June 6, 2025 at 1:55 PM
In X-ray crystal structure prediction, Boltz-2 matches or outperforms Boltz-1 across modalities, with particular gains in challenging cases like DNA-protein complexes, RNA structures, and antibody-antigen interactions.
June 6, 2025 at 1:54 PM
We also paired Boltz-2 with SynFlowNet to run efficient large-scale virtual screening. In a prospective TYK2 screen, all top-10 generated compounds were predicted to bind by ABFE simulations, with some showing remarkable affinity, validating Boltz-2's use in generative design workflows.
June 6, 2025 at 1:54 PM
On the CASP16 affinity challenge benchmark, Boltz-2 outperformed all specialized methods in predicting binding affinities across 140 complexes, out of the box. In retrospective hit-discovery screens (MF-PCBA), Boltz-2 significantly outperforms ML and docking methods, doubling average precision!
June 6, 2025 at 1:53 PM
On the standard FEP+ affinity benchmark, whose targets were held out of training, Boltz-2 achieves an average Pearson correlation of 0.62, comparable to OpenFE, a widely adopted open-source FEP pipeline, while being over 1000x faster!
June 6, 2025 at 1:53 PM
Boltz-2 predicts structure and affinity in one model. Its architecture builds on Boltz-1 and adds a new affinity module, improved controllability, GPU optimizations, and the integration of a large collection of synthetic and molecular dynamics training data.
June 6, 2025 at 1:51 PM
Scalable computational binding affinity prediction is a crucial and long-standing scientific challenge. Physics-based methods like FEP are accurate but slow and expensive. Docking is fast but noisy. Deep learning models haven't matched the reliability of FEP, until now.
June 6, 2025 at 1:50 PM
Excited to unveil Boltz-2, our new model capable of predicting not only structures but also binding affinities! Boltz-2 is the first AI model to approach the performance of FEP simulations while being more than 1000x faster! All open-sourced under the MIT license! A thread… 🤗🚀
June 6, 2025 at 1:46 PM
Finally, to encourage transparent benchmarking in the community, we released a full set of evaluations on the PDB test set and CASP15 for Boltz-1, Chai-1, and AlphaFold3! All the instructions at github.com/jwohlwend/boltz/blob/main/docs/evaluation.md
December 21, 2024 at 4:29 PM
You can now also add prior knowledge to the predictions via pocket conditioning. Simply indicate a binder chain (small molecule, protein, …) and one or more residues it binds to on other chains! Full documentation for predictions at: github.com/jwohlwend/boltz/blob/main/docs/prediction.md
December 21, 2024 at 4:28 PM
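For readers curious what pocket conditioning looks like in practice, here is a minimal sketch of a Boltz input file based on the YAML format described in the linked prediction docs. The sequence, SMILES string, and residue indices are placeholders, and the exact field names should be checked against the docs:

```yaml
version: 1
sequences:
  - protein:
      id: A
      sequence: MVLSPADKTNVKAAWGKVGAHAGEYGA   # placeholder protein sequence
  - ligand:
      id: B
      smiles: "CC(=O)Oc1ccccc1C(=O)O"        # placeholder small-molecule SMILES
constraints:
  - pocket:
      binder: B                              # chain ID of the binder
      contacts: [[A, 42], [A, 87]]           # (chain, residue) pairs it binds to
```

Under this assumed schema, the `pocket` constraint steers the prediction toward poses where chain B contacts the listed residues on chain A.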
Boltz v0.4.0 is here! Today, we’re releasing our full data processing pipeline, making it easier than ever to build on top of Boltz. This release also includes our evaluation code and new results. Oh, and also pocket conditioning :)
December 21, 2024 at 4:26 PM
We test Boltz-1 on various benchmarks and show that it matches the performance of Chai-1. On CASP15, for example, Boltz-1 achieves strong protein-ligand and protein-protein performance, with an LDDT-PLI of 65% (vs. 40% for Chai-1) and a DockQ > 0.23 rate of 83% (vs. 76% for Chai-1).
November 17, 2024 at 4:21 PM
Thrilled to announce Boltz-1, the first open-source and commercially available model to achieve AlphaFold3-level accuracy on biomolecular structure prediction! An exciting collaboration with Jeremy, Saro, and an amazing team at MIT and Genesis Therapeutics. A thread!
November 17, 2024 at 4:20 PM