- "This Primer provides an introduction to the main approaches in computational protein design, covering both physics-based and machine-learning-based tools. It aims to be accessible to biological, physical and computer scientists alike."
www.nature.com/articles/s43...
- "This Primer provides an introduction to the main approaches in computational protein design, covering both physics-based and machine-learning-based tools. It aims to be accessible to biological, physical and computer scientists alike."
www.nature.com/articles/s43...
- "we provide a comprehensive and critical review of studies that have used proteins and peptides to mediate the degradation and hence the functional control of otherwise challenging disease-relevant protein targets.
- "we provide a comprehensive and critical review of studies that have used proteins and peptides to mediate the degradation and hence the functional control of otherwise challenging disease-relevant protein targets.
arxiv.org/abs/2501.09685
- Develop an optimized lentiMPRA (lentiviral massively parallel reporter assay) method to test the regulatory activity of >680,000 sequences across three cell types (HepG2, K562, WTC11)
Link: www.nature.com/articles/s41...
Link: academic.oup.com/bib/article/...
- Introduce CB-pLM (Concept Bottleneck Protein Language Models), ranging from 24M to 3B parameters, trained on UniRef50 and SwissProt over 718 concepts (including Cluster name, Biological process, and Biopython-derived features); a minimal sketch follows below
arxiv.org/abs/2411.06090
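A minimal sketch of the concept-bottleneck idea, assuming a generic design (the module names, the residual slot, and its width are illustrative, not the paper's exact architecture): hidden states are forced through a layer of supervised, interpretable concept units before being decoded back.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Hypothetical concept-bottleneck layer for a protein LM.

    Hidden states are projected onto interpretable concept units
    (supervised with concept labels such as 'Biological process')
    plus a small unsupervised residual, then decoded back to d_model.
    """
    def __init__(self, d_model: int, n_concepts: int = 718, d_residual: int = 64):
        super().__init__()
        self.to_concepts = nn.Linear(d_model, n_concepts)  # trained against concept labels
        self.to_residual = nn.Linear(d_model, d_residual)  # free capacity for everything else
        self.decode = nn.Linear(n_concepts + d_residual, d_model)

    def forward(self, h: torch.Tensor):
        c = self.to_concepts(h)                 # (B, L, 718) interpretable bottleneck
        r = self.to_residual(h)                 # (B, L, 64) unconstrained residual
        return self.decode(torch.cat([c, r], dim=-1)), c

# Training would add a concept loss (e.g. MSE/BCE of c vs. annotations) to the MLM loss.
layer = ConceptBottleneck(d_model=640)
h_out, c = layer(torch.randn(2, 128, 640))
```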
- PRIME, a protein language model (same architecture as ESM-2 650M) pretrained on 96 million sequences with optimal growth temperatures (OGTs, annotated by [1]) using MLM, MSE, and correlation losses; a loss sketch follows below
Link: www.science.org/doi/10.1126/...
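A sketch of how the three objectives could be combined, assuming equal weighting, a batch-level Pearson term, and a hypothetical OGT regression head producing `ogt_pred` (the weights and exact correlation formulation are assumptions; the paper defines its own):

```python
import torch
import torch.nn.functional as F

def pearson_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """1 - Pearson r over the batch, so that OGT ranking is preserved."""
    p, t = pred - pred.mean(), target - target.mean()
    return 1.0 - (p * t).sum() / (p.norm() * t.norm() + 1e-8)

def prime_style_loss(mlm_logits, mlm_labels, ogt_pred, ogt_true):
    # Masked-LM cross-entropy; unmasked positions carry label -100.
    mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)
    mse = F.mse_loss(ogt_pred, ogt_true)     # regress annotated OGT values
    corr = pearson_loss(ogt_pred, ogt_true)  # match the OGT ordering
    return mlm + mse + corr                  # equal weights assumed here
```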
- "In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning"
Link: arxiv.org/abs/2310.13018
- "In this Perspective, we survey the exciting recent developments in representational alignment research in the fields of cognitive science, neuroscience, and machine learning"
Link: arxiv.org/abs/2310.13018
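One widely used alignment measure from that literature is linear centered kernel alignment (CKA); a minimal NumPy version for intuition only (the survey covers many other measures):

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between representations X (n, d1) and Y (n, d2),
    rows paired by stimulus; returns a similarity in [0, 1]."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

# Two systems seeing the same 1,000 stimuli: a near-linear relation gives CKA near 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 128))
B = A @ rng.normal(size=(128, 64)) + 0.1 * rng.normal(size=(1000, 64))
print(round(linear_cka(A, B), 3))
```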
Link: www.cell.com/cell/fulltex...
- Use sparse autoencoders (SAEs) to extract and analyze interpretable features from ESM-2-8M; a minimal sketch follows below
www.biorxiv.org/content/10.1...
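A minimal sketch of the SAE setup, assuming an overcomplete ReLU autoencoder with an L1 sparsity penalty on ESM-2-8M activations (d_model = 320; the expansion factor and L1 coefficient are illustrative, not the paper's values):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete SAE over protein-LM activations: each latent unit
    should come to fire for one human-interpretable feature."""
    def __init__(self, d_model: int = 320, expansion: int = 16):
        super().__init__()
        self.enc = nn.Linear(d_model, d_model * expansion)
        self.dec = nn.Linear(d_model * expansion, d_model)

    def forward(self, h: torch.Tensor):
        z = torch.relu(self.enc(h))            # sparse, non-negative feature code
        return self.dec(z), z

def sae_loss(h, h_hat, z, l1_coeff: float = 1e-3):
    recon = ((h - h_hat) ** 2).mean()          # reconstruct the activation
    return recon + l1_coeff * z.abs().mean()   # L1 pushes most units to zero
```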
Preprint: www.biorxiv.org/content/10.1...
GitHub: github.com/katarinagres...
- Encode the antibody and antigen with ESM2-nv (NVIDIA's implementation of ESM-2), concatenate the embeddings, and feed them into a lightweight transformer (4 attention heads, 7 layers) to predict binding affinity; a sketch follows below
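A sketch of that head, assuming frozen per-residue ESM-2 embeddings, sequence-axis concatenation, and mean pooling (only the 4-head / 7-layer shape comes from the post; the embedding width, pooling, and regression head are assumptions):

```python
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    """Lightweight transformer over concatenated antibody/antigen embeddings."""
    def __init__(self, d_embed: int = 1280, n_heads: int = 4, n_layers: int = 7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_embed, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.regress = nn.Linear(d_embed, 1)

    def forward(self, ab_emb: torch.Tensor, ag_emb: torch.Tensor):
        # ab_emb: (B, L_ab, d); ag_emb: (B, L_ag, d), both from a frozen ESM-2.
        x = torch.cat([ab_emb, ag_emb], dim=1)          # concatenate along sequence
        x = self.encoder(x)
        return self.regress(x.mean(dim=1)).squeeze(-1)  # pooled scalar affinity

head = AffinityHead()
affinity = head(torch.randn(2, 120, 1280), torch.randn(2, 300, 1280))  # shape (2,)
```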