Nikita Makarov
nikitamakarov.bsky.social
LLMs & Digital Twins for Cancer | PhD student at Roche pRED & Helmholtz Munich | Opinions are my own
Overall, DT-GPT shows that LLMs have the potential to become human digital twins. We hope that, in the future, LLM-based digital twins will revolutionize the way we run clinical trials & deliver patient care (10/10).
October 7, 2025 at 7:39 AM
In zero-shot forecasting, DT-GPT outperformed a fully trained model on 13 variables. These variables were typically biologically linked to the target variables used during training (8/10)
October 7, 2025 at 7:38 AM
We show that key variables (e.g. therapy, ECOG) can drive differences in both predictions and real data. DT-GPT can even offer preliminary explainability & perform zero-shot forecasting on variables it did not see during training. (7/10)
October 7, 2025 at 7:38 AM
DT-GPT is robust: it achieves competitive performance after training on ~5,000 patients, and can handle a 20% increase in missingness and up to 25 misspellings per sample without significant performance degradation (6/10)
October 7, 2025 at 7:38 AM
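As an illustration of how one might probe this kind of robustness — this is a hypothetical sketch, not the paper's exact protocol; the field format, typo model, and function name are made up:

```python
import random

def perturb_sample(text, n_typos=25, missing_frac=0.2, seed=0):
    """Illustrative robustness probe: drop a fraction of the fields
    (simulated missingness) and inject adjacent-character swaps
    (simulated misspellings) into a serialized patient record."""
    rng = random.Random(seed)
    fields = text.split("; ")
    kept = [f for f in fields if rng.random() > missing_frac]
    chars = list("; ".join(kept))
    for _ in range(n_typos):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap neighbors
    return "".join(chars)

sample = "hemoglobin=13.2; wbc=6.1; ecog=1; therapy=carboplatin"
print(perturb_sample(sample, n_typos=3))
```

One would then compare forecast error on clean vs. perturbed inputs to quantify degradation.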
Taking a step back, we see that DT-GPT also preserves the overall distribution of the outputs better than other baselines, quantified via the Kolmogorov-Smirnov (KS) distance (5/10)
October 7, 2025 at 7:37 AM
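For context, the KS distance between predicted and observed output distributions can be computed with SciPy; the lab-value arrays below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real = rng.normal(loc=10.0, scale=2.0, size=1000)  # stand-in: observed values
pred = rng.normal(loc=10.2, scale=2.1, size=1000)  # stand-in: forecasted values

# KS distance = max |F_real(x) - F_pred(x)| over the two empirical CDFs:
# 0 means identical distributions, 1 means completely disjoint.
res = ks_2samp(real, pred)
print(f"KS distance: {res.statistic:.3f}")
```

Lower KS distance means the forecasts match the real-world distribution more closely, even before looking at per-patient error.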
Digging deeper, DT-GPT generally outperforms the second-best model longitudinally. In many cases, high-error predictions occur because our forecasts are aggregations of multiple sampled trajectories, even if some individual trajectories are closer to the ground truth (4/10)
October 7, 2025 at 7:37 AM
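A minimal sketch of trajectory aggregation as described above — sample several forecasts, then combine them elementwise. The function and toy numbers are illustrative, not the paper's code:

```python
import numpy as np

def aggregate_trajectories(trajectories, method="mean"):
    """Combine several sampled forecasts (n_samples x n_timesteps) into one
    trajectory. Aggregation smooths sampling noise, but the aggregate can sit
    between modes — which is why an occasional forecast has high error even
    when individual sampled trajectories are close to the ground truth."""
    arr = np.asarray(trajectories, dtype=float)
    if method == "mean":
        return arr.mean(axis=0)
    if method == "median":
        return np.median(arr, axis=0)
    raise ValueError(f"unknown method: {method}")

# Toy example: three sampled trajectories for one lab variable.
samples = [[1.0, 2.0, 3.0],
           [1.2, 2.2, 3.4],
           [0.8, 1.8, 2.6]]
forecast = aggregate_trajectories(samples)
print(forecast)  # -> [1. 2. 3.]
```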
Our method, DT-GPT, outperforms the SOTA baselines in most cases, or achieves very competitive performance. Here you see the mean absolute error (MAE) across 12 variables in 3 different indications (3/10)
October 7, 2025 at 7:37 AM
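For reference, MAE is simply the average absolute difference between forecast and ground truth; a minimal sketch with made-up numbers:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average of |prediction - truth|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.abs(y_pred - y_true).mean())

# Toy example: three forecasts for one variable.
print(round(mae([10.0, 12.0, 11.0], [9.5, 12.5, 11.0]), 3))  # -> 0.333
```

MAE keeps the variable's original units, which makes per-variable comparisons across models easy to read.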
We fine-tune biomedical LLMs on patient clinical data, exploring the method on both a long-term lung cancer dataset and a short-term ICU dataset. A few adjustments are required to get full performance, especially trajectory aggregation & instruction masking (2/10)
October 7, 2025 at 7:37 AM
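Instruction masking typically means computing the training loss only on the response tokens, with prompt-token labels set to the ignore index (-100 by PyTorch/Hugging Face convention). A toy sketch assuming that variant — the token ids and function are hypothetical:

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch cross-entropy by convention

def mask_instruction_labels(input_ids, prompt_len):
    """Instruction masking: copy the token ids as labels, then replace the
    prompt (instruction) portion with IGNORE_INDEX so the loss is computed
    only on the model's response tokens."""
    labels = list(input_ids)
    labels[:prompt_len] = [IGNORE_INDEX] * prompt_len
    return labels

# Toy sequence: 4 prompt tokens followed by 3 response tokens.
labels = mask_instruction_labels([5, 6, 7, 8, 21, 22, 23], prompt_len=4)
print(labels)  # -> [-100, -100, -100, -100, 21, 22, 23]
```

Without this masking, the model would also be trained to reproduce the instruction text, diluting the forecasting signal.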
DT-GPT: showing that LLMs can forecast patient trajectories (1/10)

Now in npj Digital Medicine www.nature.com/articles/s41...
Also in Doctor Penguin!

Big thanks to Maria Bordukova, @raulrod.bsky.social, Papichaya Quengdaeng, Daniel Garger, @fschmich.bsky.social, Michael Menden, Helmholtz Munich, Roche
October 7, 2025 at 7:36 AM
In zero-shot forecasting, DT-GPT outperformed a fully trained model on 13 variables. These variables were typically biologically linked to the target variables used during training (7/8)
November 20, 2024 at 10:07 AM
DT-GPT can offer preliminary explainability & perform zero-shot forecasting on variables it did not see during training. We show that key variables (e.g. therapy, ECOG) can drive differences in both predictions and real data (6/8)
November 20, 2024 at 10:06 AM
DT-GPT is robust: it achieves competitive performance after training on ~5,000 patients, and can handle a 20% increase in missingness and up to 25 misspellings per sample without significant performance degradation (5/8)
November 20, 2024 at 10:06 AM
Digging deeper, DT-GPT generally outperforms the second-best model longitudinally. In many cases, high-error predictions occur because our forecasts are aggregations of multiple sampled trajectories, even if some individual trajectories are closer to the ground truth (4/8)
November 20, 2024 at 10:06 AM
Our method, DT-GPT, outperforms the SOTA baselines in most cases, or achieves very competitive performance. Here you see the mean absolute error (MAE) across 9 variables (3/8)
November 20, 2024 at 10:06 AM
We fine-tune biomedical LLMs on patient clinical data, exploring the method on both a long-term lung cancer dataset and a short-term ICU dataset. A few adjustments are required to get full performance, especially trajectory aggregation & instruction masking (2/8)
November 20, 2024 at 10:05 AM
Introducing DT-GPT: showing that LLMs can forecast patient trajectories (1/8)

Pre-print here 👉 medrxiv.org/content/10.1...

Big thanks to Maria Bordukova, @raulrod.bsky.social, Fabian Schmich, Michael Menden, UniMelb, HelmholtzMunich, Roche
November 20, 2024 at 10:04 AM