TUM AI in Medicine Lab
@tum-aim-lab.bsky.social
Chair of AI in Healthcare and Medicine, led by @danielrueckert.bsky.social, at TU Munich.
🌐 www.kiinformatik.mri.tum.de/en/chair-artificial-intelligence-healthcare-and-medicine
However, that’s not all! Yundi is currently extending the framework by integrating K-space signal data and genomic information to further enhance its multimodal capability.
October 29, 2025 at 12:19 PM
By doing this, ViTa enables a broad spectrum of downstream applications, including cardiac phenotype and physiological feature prediction, segmentation, and classification of cardiac/metabolic diseases within a single unified framework.
October 29, 2025 at 12:19 PM
ViTa is a multi-modal, multi-task, and multi-view foundation model that delivers a comprehensive representation of the heart and a precise interpretation of individual disease risk. It integrates anatomical information from 3D+time cine MRI stacks with detailed patient-level tabular data.
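For intuition, here is a minimal PyTorch sketch of what such a multi-modal, multi-task setup can look like. It is a toy illustration under assumed layer sizes, head names, and fusion scheme, not the published ViTa architecture: a small 3D encoder embeds the cine stack, an MLP embeds the tabular data, and one fused representation feeds several task heads.

```python
# Toy multi-modal, multi-task model (hypothetical, not the ViTa code):
# cine MRI encoder + tabular encoder -> shared representation -> task heads.
import torch
import torch.nn as nn

class ImagingEncoder(nn.Module):
    """Embeds a 3D+time cine MRI stack (B, 1, T, H, W) into a feature vector."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

class TabularEncoder(nn.Module):
    """Embeds patient-level tabular features (B, n_features)."""
    def __init__(self, n_features, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

class MultiTaskModel(nn.Module):
    """Fuses both modalities and predicts several targets from one representation."""
    def __init__(self, n_tab_features, dim=256, n_phenotypes=10, n_diseases=5):
        super().__init__()
        self.img_enc = ImagingEncoder(dim)
        self.tab_enc = TabularEncoder(n_tab_features, dim)
        self.fuse = nn.Linear(2 * dim, dim)
        self.phenotype_head = nn.Linear(dim, n_phenotypes)  # e.g. physiological features
        self.disease_head = nn.Linear(dim, n_diseases)      # e.g. cardiac/metabolic labels

    def forward(self, cine, tabular):
        z = self.fuse(torch.cat([self.img_enc(cine), self.tab_enc(tabular)], dim=-1))
        return self.phenotype_head(z), self.disease_head(z)

if __name__ == "__main__":
    model = MultiTaskModel(n_tab_features=40)
    cine = torch.randn(2, 1, 16, 64, 64)        # (batch, channel, time, H, W)
    tabular = torch.randn(2, 40)
    phenotypes, disease_logits = model(cine, tabular)
    print(phenotypes.shape, disease_logits.shape)  # torch.Size([2, 10]) torch.Size([2, 5])
```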
October 29, 2025 at 12:19 PM
But the work hasn't stopped there! Progress on the benchmark is tracked using an online leaderboard: huggingface.co/MIMIC-CDM, and we are currently developing the next generation of medical benchmarks in co-operation with Google for Health.
October 27, 2025 at 8:18 AM
Models were found to perform significantly worse than doctors, to not follow guidelines, and to be extremely sensitive to simple changes in input. This means more work has to be done before we can safely deploy them for high-stakes clinical decision-making.

Paper: www.nature.com/articles/s41...
Evaluation and mitigation of the limitations of large language models in clinical decision-making - Nature Medicine
Using a curated dataset of 2,400 cases and a framework to simulate a realistic clinical setting, current large language models are shown to incur substantial pitfalls when used for autonomous clinical...
www.nature.com
October 27, 2025 at 8:18 AM
While LLMs aced standard medical licensing exams, the authors argued for evaluation in real-world clinical settings. So they developed a new dataset and benchmark that features real-world emergency room cases, simulates a realistic clinical setting, and tests robustness and adherence to clinical guidelines.
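To illustrate one ingredient of such a benchmark, here is a hypothetical sketch of a robustness check (not the authors' code, and the `llm` interface is assumed): the same case is presented with clinically irrelevant rewordings, and the diagnosis should stay stable.

```python
# Hypothetical robustness check: does a paraphrase of the same ER case
# change the model's diagnosis? (Illustrative only, assumed LLM interface.)
def diagnose(llm, case_text: str) -> str:
    """Ask the model for a single most-likely diagnosis; `llm` is any callable
    mapping a prompt string to a completion string (assumed interface)."""
    prompt = (
        "You are assisting in an emergency department.\n"
        f"Patient presentation:\n{case_text}\n"
        "Answer with the single most likely diagnosis."
    )
    return llm(prompt).strip().lower()

def robustness_score(llm, case_variants: list[str]) -> float:
    """Fraction of paraphrased case descriptions that yield the same diagnosis
    as the first (reference) variant."""
    reference = diagnose(llm, case_variants[0])
    same = sum(diagnose(llm, v) == reference for v in case_variants[1:])
    return same / max(len(case_variants) - 1, 1)

if __name__ == "__main__":
    dummy_llm = lambda prompt: "acute appendicitis"  # trivially robust stand-in
    variants = [
        "22 y/o with right lower quadrant pain, fever, nausea.",
        "A 22-year-old reports nausea, fever and pain in the right lower abdomen.",
    ]
    print(robustness_score(dummy_llm, variants))  # 1.0
```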
October 27, 2025 at 8:18 AM
Paul presented dynamic, temporally resolved lung imaging using INR-based registration. Steven focused on multi-contrast fetal brain MRI reconstruction for improved motion correction and image quality.

Thank you both for sharing your work – you're always welcome back!
October 23, 2025 at 3:18 PM
We wish you success in all your future endeavors!
October 17, 2025 at 11:38 AM
Since then, NIK's underlying concept has gained significant traction, demonstrated by extensions like PISCO and the adaptation of the implicit k-space paradigm to other areas in the field, like motion-resolved abdominal MRI (ICoNIK).
October 9, 2025 at 11:35 AM
The result? Flexible temporal resolution and efficient single-heartbeat reconstructions, marking a significant step toward real-time cardiac MRI without large training datasets.

Paper: link.springer.com/chapter/10.1...
October 9, 2025 at 11:35 AM
🎯 @wqhuang.bsky.social, et al.'s answer was Neural Implicit k-Space (NIK) – a binning-free framework for non-Cartesian cardiac MRI reconstruction. By learning a continuous neural implicit representation directly in k-space, NIK eliminated the need for complex non-uniform FFTs and data binning.
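Here is a toy sketch of the neural-implicit-k-space idea (hypothetical details, not the published NIK code): an MLP maps a continuous k-space coordinate plus time (kx, ky, t) to the complex signal value, so arbitrary non-Cartesian samples can be fitted without binning, and a Cartesian grid can be queried afterwards for a standard inverse FFT reconstruction.

```python
# Toy implicit k-space model: coordinates (kx, ky, t) -> complex signal value.
import torch
import torch.nn as nn

class KSpaceMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2),          # real and imaginary parts
        )

    def forward(self, coords):             # coords: (N, 3) -> complex signal (N,)
        out = self.net(coords)
        return torch.complex(out[:, 0], out[:, 1])

# Fit the MLP to acquired (non-Cartesian) samples: coordinates + measured signal.
model = KSpaceMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(4096, 3) * 2 - 1                # dummy (kx, ky, t) in [-1, 1]
signal = torch.randn(4096, dtype=torch.complex64)   # dummy measured k-space values

for step in range(200):
    pred = model(coords)
    loss = (pred - signal).abs().pow(2).mean()       # simple data-consistency loss
    opt.zero_grad(); loss.backward(); opt.step()

# Query a Cartesian grid at an arbitrary time point t, then inverse FFT.
with torch.no_grad():
    grid = torch.stack(torch.meshgrid(
        torch.linspace(-1, 1, 128), torch.linspace(-1, 1, 128), indexing="ij"), dim=-1)
    t = torch.full((128, 128, 1), 0.3)               # chosen temporal position
    kspace = model(torch.cat([grid, t], dim=-1).reshape(-1, 3)).reshape(128, 128)
    image = torch.fft.ifftshift(torch.fft.ifft2(torch.fft.fftshift(kspace)))
```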
October 9, 2025 at 11:35 AM
Building private and trustworthy AI in medicine is an ongoing journey, and we're incredibly proud of the foundational steps we've taken.
October 2, 2025 at 11:49 AM
🧑‍💻 @g-k.ai went on to co-develop VaultGemma at Google DeepMind – a leading differentially private large language model: services.google.com/fh/files/blo..., research.google/blog/vaultge...
October 2, 2025 at 11:49 AM
The principles behind PriMIA resonated deeply, sparking significant follow-up research:
🧑‍💻 @zilleralex.bsky.social further explored the nuanced trade-offs between privacy guarantees and model accuracy: www.nature.com/articles/s42....
October 2, 2025 at 11:49 AM
🎯 @zilleralex.bsky.social, @g-k.ai, et al. addressed this by introducing PriMIA – a pioneering open-source framework for collaborative AI training. PriMIA combines federated learning, differential privacy, and secure multi-party computation (a toy sketch of these ingredients follows below).

Paper: www.nature.com/articles/s42...
End-to-end privacy preserving deep learning on multi-institutional medical imaging - Nature Machine Intelligence
Gaining access to medical data to train AI applications can present problems due to patient privacy or proprietary interests. A way forward can be privacy-preserving federated learning schemes. Kaissis, Ziller and colleagues demonstrate here their open source framework for privacy-preserving medical image analysis in a remote inference scenario.
www.nature.com
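As promised, here is a toy sketch of the ingredients PriMIA combines (hypothetical code, not the PriMIA implementation): federated averaging across hospitals, per-update clipping plus Gaussian noise as a stand-in for differential privacy, and a placeholder function where secure multi-party aggregation would run.

```python
# Toy federated learning with DP-style noised updates (illustrative only).
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, lr=0.01, epochs=1):
    """One hospital trains on its private data and returns only a model update."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return [(p_new - p_old).detach()
            for p_new, p_old in zip(model.parameters(), global_model.parameters())]

def privatize(update, clip=1.0, noise_std=0.1):
    """Clip the update norm and add Gaussian noise (DP-style, toy calibration)."""
    flat = torch.cat([u.flatten() for u in update])
    scale = min(1.0, clip / (flat.norm().item() + 1e-12))
    return [u * scale + noise_std * torch.randn_like(u) for u in update]

def secure_aggregate(updates):
    """Placeholder for SMPC: in PriMIA no single party sees individual updates."""
    return [torch.stack(us).mean(dim=0) for us in zip(*updates)]

# --- simulated federation over three hospitals with dummy image features ---
global_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
hospitals = [(torch.randn(16, 64), torch.randint(0, 2, (16,))) for _ in range(3)]

for round_ in range(5):
    updates = [privatize(local_update(global_model, x, y)) for x, y in hospitals]
    for p, delta in zip(global_model.parameters(), secure_aggregate(updates)):
        p.data.add_(delta)
```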
October 2, 2025 at 11:49 AM
First up, let's rewind to 2021 with research that tackled a critical challenge:

How do we train powerful AI models on sensitive medical data from multiple hospitals while maintaining patient privacy and data locality?
October 2, 2025 at 11:49 AM