David van Dijk
@vandijklab.bsky.social
Learning the rules of life.
Assistant Professor of Medicine and Computer Science @ Yale
Right. We have done something similar in our previous work (CINEMA-OT), where we validated causal inferences on synthetic data for which we know the ground truth.
April 19, 2025 at 2:00 PM
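For readers curious what that kind of validation looks like, here is a minimal sketch (not the CINEMA-OT code itself, and assuming a simple additive perturbation effect): simulate expression data with a known effect on a chosen gene set, then check that an estimator recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 100

# Simulate control cells and a known ground-truth effect:
# the perturbation shifts genes 0-9 by +2.0 on average.
control = rng.normal(0.0, 1.0, size=(n_cells, n_genes))
true_effect = np.zeros(n_genes)
true_effect[:10] = 2.0
perturbed = rng.normal(0.0, 1.0, size=(n_cells, n_genes)) + true_effect

# A naive estimator of the perturbation effect: difference in means.
estimated_effect = perturbed.mean(axis=0) - control.mean(axis=0)

# Because the ground truth is known, we can score the inference directly.
error = np.abs(estimated_effect - true_effect).mean()
print(f"mean absolute error of inferred effect: {error:.3f}")
```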
Right, and I do believe this is possible based on other experiments we have done where we translate between biological language and natural language. Your proposed experiment may be more specific, and I'm interested in trying it.
April 19, 2025 at 1:58 PM
Zero-shot is possible but obviously much harder, and it also very much depends on the specific system.
April 19, 2025 at 1:54 PM
We have focused on fine-tuning on one immune-cell cytokine stimulation dataset and on (bulk) L1000. In both cases we show generalization by leaving out conditions (e.g., cytokine combinations).
April 19, 2025 at 1:53 PM
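The key point in that split is that entire conditions, not individual cells, are held out. A small sketch of that idea (the cytokine names and hold-out choices here are hypothetical):

```python
import itertools

# Hypothetical conditions: single cytokines and their pairwise combos.
cytokines = ["IFNg", "IL4", "TNFa", "IL6"]
conditions = [frozenset([c]) for c in cytokines] + [
    frozenset(pair) for pair in itertools.combinations(cytokines, 2)
]

# Hold out whole combinations so the model must generalize to
# cytokine combos it never saw during fine-tuning.
held_out = {frozenset(["IFNg", "IL4"]), frozenset(["TNFa", "IL6"])}
train_conditions = [c for c in conditions if c not in held_out]
test_conditions = [c for c in conditions if c in held_out]

print("train:", [sorted(c) for c in train_conditions])
print("test: ", [sorted(c) for c in test_conditions])
```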
And the reasoning here is that if they improve, that shows our model generates meaningful data? That's interesting. It's a convenient way of validating without doing experiments, I guess.
April 19, 2025 at 1:50 PM
I see. We haven't done this specific experiment comparing well-studied vs. poorly studied genes. It's an interesting idea; we will look into it. I would expect that genes, cell types, and tissues with a lot of training data, both expression and metadata, generalize better.
April 19, 2025 at 1:40 PM
Yes. We showed that natural-language pretraining, versus training on cell sentences from scratch, significantly boosts performance.
In addition, in the spatial reasoning task (Fig. 6) we did an ablation where we trained with and without metadata. Training with metadata performed significantly better.
April 19, 2025 at 1:35 PM
Finally, asking a model to generate a “cell sentence” (e.g. for perturbation response prediction) is novel by design, since no LLM has encountered that representation in its training data.
April 18, 2025 at 5:32 PM
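For context, a "cell sentence" in Cell2Sentence is a cell's expression profile written out as text: gene symbols listed in order of decreasing expression. A minimal sketch with toy values:

```python
import numpy as np

# Toy expression profile for one cell (counts per gene).
genes = np.array(["CD3D", "CD8A", "GZMB", "MS4A1", "NKG7"])
counts = np.array([42.0, 7.0, 19.0, 0.0, 88.0])

# Rank genes by expression (descending) and join the gene symbols,
# dropping zero-count genes, to form the "cell sentence".
order = np.argsort(-counts)
cell_sentence = " ".join(genes[i] for i in order if counts[i] > 0)
print(cell_sentence)  # "NKG7 CD3D GZMB CD8A"
```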
Second, several test sets—such as Dataset Interpretation on held-out studies—use scRNA-seq datasets published after each model’s pretraining cutoff, giving us strong assurance that those examples weren’t seen during training.
April 18, 2025 at 5:32 PM
We took several steps to ensure robust evaluation. First, we tested both open- and closed-source LLMs (GPT-4o, Gemini, LLaMA-3) on our benchmarks and found they perform poorly out of the box, indicating minimal overlap with pretraining corpora.
April 18, 2025 at 5:32 PM
For this paper, we chose a prompt structure that helps the model learn perturbations effectively, but initial tests suggest the model handles prompt variations well as long as the data formatting is consistent—so we don't expect prompt engineering to be a major issue.
April 18, 2025 at 5:19 PM
We'll formally test prompt robustness in future work, but from experience with earlier Cell2Sentence models, we've found minimal performance loss when using new or varied prompts. In general, we always train on a wide variety of prompts to avoid overfitting.
April 18, 2025 at 5:19 PM
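As an illustration of "training on a wide variety of prompts", one common pattern (a sketch, not the paper's actual templates) is to sample a different instruction template per example while keeping the cell-sentence formatting fixed:

```python
import random

# Hypothetical prompt templates; the cell-sentence formatting stays fixed
# while the surrounding instruction wording varies across examples.
TEMPLATES = [
    "Predict the response of this cell to {drug}: {cell_sentence}",
    "Given the cell {cell_sentence}, how does it change under {drug}?",
    "{cell_sentence}\nPerturbation: {drug}\nPredicted cell:",
]

def format_example(cell_sentence: str, drug: str, rng: random.Random) -> str:
    """Pick a random template so the model does not overfit to one prompt."""
    template = rng.choice(TEMPLATES)
    return template.format(cell_sentence=cell_sentence, drug=drug)

rng = random.Random(0)
print(format_example("NKG7 CD3D GZMB CD8A", "IFN-gamma", rng))
```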
Thank you!
April 18, 2025 at 5:13 PM
- For dataset interpretation, we evaluate on scRNA-seq studies published after the model was pretrained.
Performance drops in these settings let us estimate generalization gaps, but we're also interested in developing confidence measures in future work.
April 18, 2025 at 5:11 PM
This is still an open challenge: we don't yet have confidence estimation built into the model, but we do evaluate C2S-Scale in out-of-distribution regimes. For example:
- In perturbation prediction, we test on unseen cell type–drug combinations and combinatorial perturbations.
April 18, 2025 at 5:11 PM
So performance likely reflects both mechanistic pattern recognition and domain transfer from literature and metadata. Our training corpus was intentionally multimodal to support this integration, letting the model ground textual knowledge in expression-level representations.
April 18, 2025 at 5:10 PM
Great question; it might be a combination of both. For tasks like scQA, the model must (i) interpret gene expression patterns from cell sentences (e.g., identify marker genes or activation signatures), and (ii) relate those to biological concepts learned from the textual domain.
April 18, 2025 at 5:10 PM
Many downstream tasks (e.g. scQA) require the model to reason jointly over cell sentences and biological text/metadata. We also explored this in our spatial reasoning ablation studies, where interleaving training with gene interaction data improved accuracy over training with expression alone.
April 18, 2025 at 5:09 PM
C2S-Scale interleaves gene expression (as "cell sentences") with biological text during training to enable reasoning across both modalities. This multimodal integration is a key difference from expression-only models and is important for complex tasks.
April 18, 2025 at 5:09 PM
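A rough sketch of what an interleaved training example could look like (the field names and wording are illustrative, not the paper's actual schema):

```python
def make_interleaved_example(metadata: dict, cell_sentence: str) -> str:
    """Mix free-text biological context with the cell-sentence modality
    in one training string, so the model learns to reason over both."""
    context = (
        f"Tissue: {metadata['tissue']}. "
        f"Condition: {metadata['condition']}."
    )
    return f"{context}\nCell: {cell_sentence}\nQuestion: What cell type is this?"

example = make_interleaved_example(
    {"tissue": "blood", "condition": "IFN-gamma stimulation"},
    "NKG7 GNLY KLRD1 PRF1",
)
print(example)
```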
We thank our amazing team at Yale, Google Research, and Google DeepMind.
April 18, 2025 at 2:14 PM
What's next for C2S-Scale?
• True Multimodality: Integrating proteomics, epigenomics, imaging data 🖼️
• Deeper Biology: Modeling cell interactions, dynamics, & development ⏳
• Enhanced Trust: Improving interpretability & reliability ✅
• Community Tools: Building shared benchmarks & platforms 🏆
April 18, 2025 at 2:14 PM
Let's build together! 🛠️ We're open-sourcing C2S-Scale to empower the community.
Models up to 1B parameters are already available on HF, and models up to 27B parameters will be released in the next few weeks!
huggingface.co/collections/... github.com/vandijklab/c...
April 18, 2025 at 2:14 PM
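Loading one of the released checkpoints should follow the standard Transformers pattern. A sketch, with a placeholder model id (substitute an actual checkpoint name from the Hugging Face collection linked above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with a real checkpoint from the
# vandijklab Cell2Sentence collection on Hugging Face.
model_id = "vandijklab/<c2s-model-name>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with a cell sentence (highest-expressed genes first).
prompt = "NKG7 CD3D GZMB CD8A"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```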
Beyond standard training, we used Reinforcement Learning (RL) 🤖 to fine-tune C2S-Scale.
Using GRPO + biological rewards, we specifically improved:
• Perturbation prediction accuracy 🧪
• Biological Q&A relevance ❓
Aligning LLMs with biological goals! ✅
April 18, 2025 at 2:14 PM
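To make the GRPO idea concrete: sample a group of completions per prompt, score each with a reward, and normalize rewards within the group to get relative advantages. The reward below is a made-up stand-in (marker-gene overlap), not the paper's actual reward functions:

```python
import numpy as np

def reward(generated: str, reference_markers: set) -> float:
    """Toy biological reward: fraction of reference marker genes
    that appear in the generated cell sentence."""
    genes = set(generated.split())
    return len(genes & reference_markers) / max(len(reference_markers), 1)

# GRPO-style step: score a group of sampled completions for one prompt,
# then normalize within the group to get relative advantages.
group = ["NKG7 GZMB PRF1 CD3D", "MS4A1 CD79A", "NKG7 PRF1 KLRD1 GNLY"]
markers = {"NKG7", "GZMB", "PRF1", "GNLY"}

rewards = np.array([reward(g, markers) for g in group])
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
print(rewards, advantages)
```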
Size matters! 📈 We observed clear scaling laws: as model size increased from 410M → 27B parameters, performance consistently improved across tasks.
This confirms that LLMs learn better biological representations at scale using the C2S approach. Even works with efficient LoRA tuning! 💪
April 18, 2025 at 2:14 PM
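For reference, LoRA tuning of a causal LM is typically set up via the PEFT library. A generic sketch; the base checkpoint and hyperparameters below are illustrative, not the paper's:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative base model and LoRA hyperparameters (not the paper's).
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m")

lora_config = LoraConfig(
    r=16,                                # low-rank update dimension
    lora_alpha=32,                       # scaling factor
    target_modules=["query_key_value"],  # attention projections in Pythia
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters train
```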