Jablonka Lab (Lab for AI for Materials)
@jablonkagroup.bsky.social
Team-run account for the group led by @kjablonka.com
Just as human chemists learn through diverse materials and experiences (textbooks, laboratory work, research papers, and problem-solving), ChemPile's varied content types aim to provide a comprehensive learning experience.
arXiv: arxiv.org/pdf/2505.12534
read more: chempile.lamalab.org
May 20, 2025 at 3:48 PM
We introduce the ChemPile, the largest natural language chemistry dataset (>75B tokens).
dataset: huggingface.co/collections/...
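To get a feel for the data, here is a minimal sketch using the Hugging Face datasets library. The repository id below is an assumption (the collection link above is truncated), so check the collection page for the exact subset names.

```python
# Minimal sketch, assuming a subset named "jablonkagroup/chempile-education"
# exists in the collection (the link above is truncated, so verify the id).
from datasets import load_dataset

# stream to avoid downloading the full >75B-token corpus
ds = load_dataset("jablonkagroup/chempile-education", split="train", streaming=True)
print(next(iter(ds)))  # inspect a single example
```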
May 20, 2025 at 3:48 PM
Not sure where to start? Our documentation has step-by-step guides for every scenario:
lamalab-org.github.io/chembench/
March 11, 2025 at 4:52 PM
✨Public Datasets & Leaderboard – All datasets are live on HuggingFace, alongside a real-time performance leaderboard! huggingface.co/datasets/jab...
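If you just want the raw questions, a minimal sketch with the datasets library, assuming the repository id is jablonkagroup/ChemBench (the link above is truncated, so verify it on the Hub):

```python
# Minimal sketch, assuming the repository id "jablonkagroup/ChemBench"
# (the link above is truncated, so verify on the Hub before running).
from datasets import load_dataset

bench = load_dataset("jablonkagroup/ChemBench")
print(bench)  # lists the available splits and their sizes
```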
March 11, 2025 at 4:52 PM
What's new?
✨Multimodal Support – Handle text, data, and chemistry-specific inputs seamlessly
✨Redesigned API – Now standardized on LiteLLM messages for effortless integration
✨Custom System Prompts – Tailor benchmarks to your unique use case
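For context, a minimal sketch of what LiteLLM-style messages look like with a custom system prompt; the model name is only an example, and the ChemBench docs describe how such messages plug into the benchmark runner.

```python
# Minimal sketch of LiteLLM-style messages with a custom system prompt.
# The model name is only an example; see the ChemBench docs for how
# these messages are passed to the benchmark runner.
import litellm

messages = [
    {"role": "system", "content": "You are an expert chemist. Answer concisely."},
    {"role": "user", "content": "What is the molecular formula of caffeine?"},
]

# requires the provider's API key in the environment (e.g. OPENAI_API_KEY)
response = litellm.completion(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```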
March 11, 2025 at 4:52 PM
🌟LLM limitations persist: Still lagging in 3D molecular spatial reasoning
#LLMs #MachineLearning #OpenScience
March 6, 2025 at 7:46 AM
🌟System prompt insights: Ablation studies show no effect on evaluation outcomes
🌟VLLMs dominate: Outperform specialized models like DECIMER in benchmarks
March 6, 2025 at 7:46 AM
Supported by Carl Zeiss Foundation, Intel, Merck, Alexander von Humboldt Foundation, Friedrich-Schiller-Universität Jena, IIT Delhi.

📜Manuscript: arxiv.org/abs/2411.16955
👩‍💻GitHub: github.com/lamalab-org/...
Probing the limitations of multimodal language models for chemistry and materials research
Recent advancements in artificial intelligence have sparked interest in scientific assistants that could support researchers across the full spectrum of scientific workflows, from literature review to...
November 27, 2024 at 4:46 PM
For instance, one would expect vision models to outperform text-only models on spatial reasoning, such as identifying the correct isomeric relationship between two compounds.

But this is not the case!
November 27, 2024 at 4:46 PM
But we did not stop there! We dug deeper with ablations to understand the bottlenecks limiting applicability.
We compared different modalities, multi-step vs. single-step reasoning, guided prompting, and more.
November 27, 2024 at 4:46 PM
We observed a striking disparity in performance across tasks: models can identify lab equipment but struggle to identify safety violations in realistic laboratory scenarios.
November 27, 2024 at 4:46 PM
Together with the M3RG-Group at IIT Delhi, we created MaCBench, a multimodal materials and chemistry benchmark with 2,137 questions.

We focus on tasks we consider crucial for scientific development: practical lab scenarios, spectral analysis, US patents, and more.
November 27, 2024 at 4:46 PM