Interested in ML for science/Computational drug discovery/AI-assisted scientific discovery 🤞
from 🇱🇰🫶 https://ramith.fyi
Forgot to change those settings, and the spark froze without me being able to ssh 😆
*In the spark, it’s unified mem
Gotta test again whether reduced reallocation helps
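A minimal sketch of the kind of allocator guardrails that help on a unified-memory box; the post doesn't say which settings were missed, so the values here are assumptions:

```python
import os

# Assumption: typical guardrails for a unified-memory machine, not necessarily
# the exact settings the post refers to. The allocator config must be set
# before the CUDA caching allocator initializes.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

# Cap this process at ~80% of device memory so the OS (and sshd) keep some headroom.
torch.cuda.set_per_process_memory_fraction(0.8, device=0)
```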
Thinking of polishing it up further:
current version: gist.github.com/ramithuh/aa6...
DGX Spark: ~1.3 it/s
L40: ~3.5 it/s
(same batch size used here)
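(it/s = training iterations per second; a rough sketch of how one might measure it, where `model`, `batch`, and the iteration counts are placeholders:)

```python
import time
import torch

def iters_per_second(model, batch, n_iters=100, warmup=10):
    """Rough throughput estimate; model and batch are placeholders."""
    for _ in range(warmup):                      # let kernels/caches warm up
        model(batch).mean().backward()
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        model.zero_grad(set_to_none=True)
        model(batch).mean().backward()
    torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - start)
```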
The idea is to do prototyping in a cluster-agnostic manner (local GPU), then once the prototype is ready for a production run, use a dashboard to see which kinds of GPUs are available and submit it..
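One way to keep that submission step cluster-agnostic (submitit is used here purely as an illustration; the post doesn't name a tool, and the partition/GPU settings are placeholders):

```python
import submitit

def train():
    # same entry point used for local prototyping
    ...

executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(
    timeout_min=240,
    slurm_partition="gpu",   # whichever partition/GPU type the dashboard shows as free
    gpus_per_node=1,
)
job = executor.submit(train)
print(job.job_id)
```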
*for a HW that's due midnight
back in 2014, i emailed a PM @ Intel with a suggestion, she kindly acknowledged it.. and the next day she asked for my address so that she could send me an Intel Galileo board..
(1/4)
github.com/ydataai/ydat...
was doing some data analysis on the PLINDER dataset, generating a bunch of high-level info with a single command
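Presumably the single command is ydata-profiling's ProfileReport; a sketch (the file path and title are made up):

```python
import pandas as pd
from ydata_profiling import ProfileReport

# Placeholder path: any tabular slice of the PLINDER annotations would do.
df = pd.read_parquet("plinder_annotations.parquet")

# One call produces a full HTML report: dtypes, missing values, correlations, etc.
ProfileReport(df, title="PLINDER annotations").to_file("plinder_profile.html")
```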
Their Approach == ESM2 (3B) + SimpleTransformer w. Flow Matching (X B)
When X=100M, it falls way short of ESMFold performance (10-15%)
When X=3B, it closes the gap
---> Still, AF2+MSA (95M) is far superior
But it's nice to know how far simple transformer+flow matching can take us
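For reference, the generic (linear-path) conditional flow matching objective; this is the standard form, not necessarily the paper's exact parameterization:

```latex
% Standard conditional flow matching loss with a linear interpolation path
% (assumption: generic form, not the paper's exact loss).
\mathcal{L}_{\mathrm{CFM}}(\theta)
  = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_0 \sim \mathcal{N}(0, I),\; x_1 \sim p_{\mathrm{data}}}
    \left\| v_\theta(x_t, t) - (x_1 - x_0) \right\|^2,
\qquad x_t = (1 - t)\, x_0 + t\, x_1 .
```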
can even put in a git commit message (to explain something significant underlying the commit) 🤔 (although i'd be wasting 1KB for a commit message)
sort of an old computer interface background color scheme