The Duke University Cervical Spine MRI Segmentation Dataset (CSpineSeg) is online!
New paper accepted in npj Breast Cancer!
Breast density in MRI: a standardized pipeline for volumetric quantification and its relationship to mammographic assessment
Medical Image Segmentation with InTEnt: Integrated Entropy Weighting for Single Image Test-Time Adaptation
Single-image test-time adaptation is a compelling way to keep medical image segmentation models robust when scanners, protocols, or sites change.
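To make the title concrete, below is a minimal, generic sketch of entropy-weighted integration of candidate segmentation predictions (for example, from differently adapted copies of the same model). It illustrates the general idea under assumed shapes and names; it is not the paper's exact procedure.

import torch

def mean_entropy(prob_map, eps=1e-8):
    # Mean pixel-wise entropy of a softmax probability map of shape (B, C, H, W).
    return -(prob_map * (prob_map + eps).log()).sum(dim=1).mean()

def entropy_weighted_integration(candidate_probs):
    # candidate_probs: list of (B, C, H, W) softmax maps from differently adapted models.
    # Lower-entropy (more confident) predictions receive larger weights.
    entropies = torch.stack([mean_entropy(p) for p in candidate_probs])
    weights = torch.softmax(-entropies, dim=0)
    stacked = torch.stack(candidate_probs)              # (K, B, C, H, W)
    return (weights.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

# Hypothetical usage: integrate three candidate predictions for one test image.
fused = entropy_weighted_integration(
    [torch.softmax(torch.randn(1, 2, 64, 64), dim=1) for _ in range(3)])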
ContourDiff: Unpaired Medical Image Translation with Structural Consistency
Medical image translation—such as CT → MRI—is transforming how we harmonize imaging data for segmentation, diagnosis, and AI model development.
Rethinking Pulmonary Embolism Segmentation
Pulmonary embolism (PE) segmentation has been an active area of research, with many studies reporting steady progress through new architectures, transformer designs, and pretraining strategies.
Convolutional Neural Networks Rarely Learn Shape for Semantic Segmentation
Shape is a robust visual feature that is easily recognized by the human eye. In medical imaging, many regions of interest (ROIs) share similar shapes, yet models often struggle due to significant domain shift.
Foundation models like SAM and MedSAM have shown promise in medical imaging, but none are truly built for MRI. Training new models still requires large amounts of labeled data — a major bottleneck.
Body composition is increasingly recognized as a window into a patient’s overall health and frailty. Metrics like skeletal muscle index, muscle density, and visceral-to-subcutaneous fat ratios have been shown to predict outcomes ranging from short- and long-term
Breast MRI registration struggles with highly deformable anatomy, especially the dense fibroglandular tissue that matters most clinically: it shifts with patient positioning and respiration, causing local misalignment precisely where radiologists and algorithms need the most precision.
Introducing Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models: SegGuidedDiff
SLM-SAM 2: Accelerating Medical Image Annotation via Short-Long Memory SAM 2
Manual annotation of volumetric medical images is labor-intensive and time-consuming. Foundation models like SAM 2 enable mask propagation, but they rely on a single memory bank,
Are you looking for a tool to speed up your labor-intensive medical image annotation process? Our SegmentHumanBody extension is now available with multiple models on 3D Slicer!
GitHub: github.com/mazurowski-l...
Looking for a publicly available muscle and fat CT segmentation model? Check out our new nnU-Net-based model for segmentation of skeletal muscle, subcutaneous adipose tissue (SAT), and visceral adipose tissue (VAT) across the chest, abdomen, and pelvis in axial CT images.
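As a hedged illustration of how such a segmentation might be used downstream, the sketch below computes per-slice tissue areas and a VAT/SAT ratio from a NIfTI label mask. The label values and file name are assumptions for the example, not the model's documented output convention.

import numpy as np
import nibabel as nib

# Assumed label convention (illustrative only):
# 1 = skeletal muscle, 2 = subcutaneous adipose tissue (SAT), 3 = visceral adipose tissue (VAT).
LABELS = {"muscle": 1, "sat": 2, "vat": 3}

def tissue_areas_cm2(mask_path):
    # Per-slice tissue areas in cm^2 from an axial CT segmentation mask.
    img = nib.load(mask_path)
    mask = np.asarray(img.dataobj)
    sx, sy = img.header.get_zooms()[:2]        # in-plane voxel spacing in mm
    pixel_area_cm2 = (sx * sy) / 100.0         # mm^2 -> cm^2
    return {name: (mask == value).sum(axis=(0, 1)) * pixel_area_cm2
            for name, value in LABELS.items()}

areas = tissue_areas_cm2("ct_bodycomp_mask.nii.gz")   # hypothetical output file
vat_sat_ratio = areas["vat"].sum() / max(areas["sat"].sum(), 1e-6)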
Are Vision Foundation Models Ready for Out-of-the-Box Medical Image Registration?
Foundation models like SAM and DINOv2 are making waves across computer vision, with growing interest in their zero-shot registration performance.
Introducing Fréchet Radiomic Distance (FRD): A Versatile Metric for Comparing Medical Imaging Datasets, led by @nickkonz.bsky.social and Richard Osuala.
Our paper can be found at arxiv.org/abs/2412.01496, and you can easily compute FRD yourself with our code at github.com/RichardObi/f...
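For intuition, here is a minimal sketch of the Fréchet-distance computation that underlies FRD (the same closed form used by FID), applied to two sets of already-extracted radiomic feature vectors. It does not reproduce the released code's API; the random features below are placeholders for real radiomics.

import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    # Frechet distance between two feature sets, each modeled as a multivariate Gaussian.
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Placeholder features: rows are images, columns are normalized radiomic features.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 64)), rng.normal(loc=0.3, size=(200, 64))))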