Method Development
We develop advanced AI and machine learning methods for applications in medicine and beyond, focusing on semantic segmentation, object detection, unsupervised learning, and probabilistic modeling. Our work emphasizes scalable research software, robust data analysis, and the translation of AI innovations into real-world applications. By improving interpretability, handling uncertainty, and enhancing generalizability, we advance clinical AI.
- Anatomy-informed Data Augmentation
- Segmentation and Tracking in Longitudinal Medical Imaging
- Deep Learning for Treatment Effect Estimation and Discovering Predictive Biomarkers in Medical Imaging
- Active learning based white matter segmentation
- Large-scale image analysis and computational pathology
- Multitask Segmentation using partially annotated datasets
- Robust and Generalizable Algorithms for Vessel Occlusion Detection in Stroke-suspected Patients
- nnU-Net
- The Radiomics Processing Toolkit (RPTK): A Framework for optimized feature computation
- Irregular and Sparse Medical Image Time Series
Anatomy-informed Data Augmentation
Organs consisting of soft tissue, such as the prostate, constantly undergo deformation, yet the training of state-of-the-art computer-aided diagnosis systems still relies on simplistic spatial transformations. We propose a new anatomy-informed augmentation that leverages information from adjacent organs to simulate physiological deformations in the human pelvis, substantially increasing prostate as well as lesion shape variability while preserving essential image features during model training.
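The general mechanism can be sketched as follows: a smooth random displacement field is weighted by the neighborhood of an adjacent organ mask and applied jointly to image and segmentation. This is an illustrative simplification, not the published transform; `organ_informed_deformation` and its parameters are hypothetical names.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def organ_informed_deformation(image, seg, organ_mask, strength=3.0, smoothing=8.0, seed=None):
    """Illustrative sketch: deform image and segmentation with a smooth
    displacement field concentrated around an adjacent organ mask
    (e.g. the rectum next to the prostate)."""
    rng = np.random.default_rng(seed)
    # Random noise fields, smoothed so the deformation is physically plausible.
    field = [gaussian_filter(rng.standard_normal(image.shape), smoothing)
             for _ in range(image.ndim)]
    # Weight the deformation by proximity to the adjacent organ.
    weight = gaussian_filter(organ_mask.astype(float), smoothing)
    coords = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    warped_coords = [c + strength * f * weight for c, f in zip(coords, field)]
    warped_image = map_coordinates(image, warped_coords, order=1, mode="nearest")
    # Nearest-neighbour interpolation keeps segmentation labels discrete.
    warped_seg = map_coordinates(seg, warped_coords, order=0, mode="nearest")
    return warped_image, warped_seg
```

Because image and label map are warped with the same coordinates, annotations stay consistent with the deformed anatomy.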
Contact: Balint Kovacs
Segmentation and Tracking in Longitudinal Medical Imaging
The aim of this project is to improve the tracking and segmentation of lesions in longitudinal medical images. Long-term lesion tracking is mostly formulated as a point retrieval task that matches corresponding locations in subsequent images. Classical image registration has traditionally been used for this purpose, and deep learning approaches have emerged more recently. Furthermore, segmentation is currently decoupled from tracking and applied to each image separately; we are therefore designing a framework that merges the tasks of tracking and segmentation.
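A minimal baseline for the point-retrieval formulation, assuming both scans are already registered to a common space, is to match lesion centroids between timepoints by minimizing total distance (Hungarian algorithm). This toy sketch is not the project's method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_lesions(centroids_t0, centroids_t1, max_dist=20.0):
    """Match lesion centroids of two timepoints by minimising the total
    Euclidean distance between paired lesions."""
    # Pairwise distance matrix between all lesions of the two scans.
    cost = np.linalg.norm(centroids_t0[:, None, :] - centroids_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Reject implausibly distant pairs, e.g. new or disappeared lesions.
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```

Unmatched lesions then correspond to newly appeared or resolved findings, which is exactly where a joint tracking-and-segmentation model can improve on such a decoupled pipeline.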
Contact: Maximilian Rokuss
Deep Learning for Treatment Effect Estimation and Discovering Predictive Biomarkers in Medical Imaging
Treatment decisions are often based on medical imaging data and the anticipated benefit in clinical outcomes, referred to as treatment effects. This project focuses on developing deep-learning methods for estimating treatment effects from pre-treatment images. At the same time, we also aim to identify predictive image biomarkers, i.e. features that are predictors of treatment effects. Our goal is to use these estimates to identify subgroups that benefit most from specific treatments and thereby improve treatment decision-making.
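One standard estimator of such treatment effects is the T-learner: fit one outcome model per treatment arm and take the difference of their predictions as the conditional average treatment effect (CATE). The sketch below uses linear least squares as a stand-in for the project's deep image models; the function name is hypothetical.

```python
import numpy as np

def t_learner_cate(X, treatment, y, X_new):
    """T-learner sketch: one outcome model per arm, CATE = difference
    of predicted outcomes under treatment vs. control."""
    def fit_predict(Xa, ya, Xq):
        A = np.c_[np.ones(len(Xa)), Xa]            # add intercept column
        w, *_ = np.linalg.lstsq(A, ya, rcond=None)
        return np.c_[np.ones(len(Xq)), Xq] @ w
    mu1 = fit_predict(X[treatment == 1], y[treatment == 1], X_new)
    mu0 = fit_predict(X[treatment == 0], y[treatment == 0], X_new)
    return mu1 - mu0   # estimated treatment effect per sample
```

Samples with a large estimated effect form the subgroups expected to benefit most from the treatment, which is the decision-making signal this project is after.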
Contact: Shuhan Xiao
Active learning based white matter segmentation
White matter tract segmentation is a crucial step in characterizing psychiatric conditions and in planning surgeries such as tumor resection. In this project, we introduce a segmentation method leveraging active learning. This interactive approach has human experts collaborate with a machine learning model, allowing the model to learn from user inputs. Our aim is to enhance and streamline the segmentation process by implementing this method in MITK Diffusion, a submodule of the MITK software, to guide researchers in white matter tract segmentation. We collaborate closely with clinical and medical partners across various research domains, including neurosurgery, neuroanatomy, and psychiatry.
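The core of any such interactive loop is the query rule that decides which samples the expert should annotate next. A common, simple choice is uncertainty sampling, sketched below; the actual MITK workflow is more involved, and `select_queries` is a hypothetical helper.

```python
import numpy as np

def select_queries(probabilities, k=5):
    """Uncertainty sampling: pick the k unlabeled samples whose predicted
    foreground probability is closest to 0.5, i.e. where the current
    model is least certain and expert input is most informative."""
    uncertainty = -np.abs(probabilities - 0.5)   # higher = less certain
    return np.argsort(uncertainty)[-k:][::-1]    # indices of k most uncertain
```

After the expert labels the queried samples, the model is retrained and the loop repeats, so annotation effort is spent where it helps the model most.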
Contact: Robin Peretzke
Large-scale image analysis and computational pathology
The diagnosis of many diseases is based on the histopathological examination of tissue samples. Various stains are used to evaluate tissue, each intended to visualize different characteristics of the sample. By default, Hematoxylin and Eosin (H&E) staining is applied to visualize the general characteristics of the tissue. However, additional special immunohistochemical (IHC) stains are often necessary for an accurate diagnosis, and these are not always available. The goal of this project is to use deep learning to predict the IHC expression of a tissue sample based on its H&E staining. In addition, we want to enable computational pathology workflows on the JIP by providing a standardized infrastructure for pathology data. Currently, digital pathology is not well established, due in part to the proprietary file formats of different vendors.
Contact: Maximilian Fischer
Multitask Segmentation using partially annotated datasets
There is a large landscape of public and private 3D medical datasets, yet comprehensive multi-target datasets encompassing not only organs but also various pathologies are scarce. Most datasets are only partially annotated, i.e. only a few target structures were delineated, because the manual annotation of 3D medical images is both time-consuming and expensive. In numerous applications, particularly in radiotherapy, comprehensive segmentations covering not just organs but also pathological regions are essential. This project seeks to leverage multiple partially annotated datasets to develop a multitask segmentation network capable of both organ and pathology segmentation.
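A common ingredient for training on partially annotated data is a loss that does not penalize classes missing from a dataset's annotations. One way to do this, sketched below as a hypothetical stand-in for the project's training scheme, is to fold the predicted probability of unannotated classes into the background before computing cross-entropy:

```python
import numpy as np

def partial_ce_loss(logits, target, annotated_classes):
    """Cross-entropy over voxels (rows) where only `annotated_classes`
    were labeled: probability mass of unannotated foreground classes is
    merged into background (class 0), since an 'unlabeled' voxel may
    legitimately belong to them."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)        # softmax per voxel
    merged = probs.copy()
    for c in range(1, logits.shape[1]):
        if c not in annotated_classes:
            merged[:, 0] += merged[:, c]             # fold into background
            merged[:, c] = 0.0
    picked = merged[np.arange(len(target)), target]
    return float(-np.log(picked + 1e-12).mean())
```

With this masking, a network can be trained jointly on several datasets, each contributing supervision only for the structures it actually annotates.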
Contact: Constantin Ulrich
Robust and Generalizable Algorithms for Vessel Occlusion Detection in Stroke-suspected Patients
Medical imaging is prone to domain shifts (changes in acquisition protocols, diverse scanners, variable patient populations, etc.), which often lead to poor generalizability and impede the deployment of AI models in real-world systems. The Medical Image Computing division at DKFZ has developed an algorithm for vessel occlusion detection from contrast-enhanced Computed Tomography (CT) images known as "angiographies" (https://www.nature.com/articles/s41467-023-40564-8), evaluated across several clinics in Germany. The algorithm performs well in general, but undesirable performance drops may occur on out-of-distribution data. We investigate techniques that help guarantee model robustness against such shifts (domain generalization, incorporation of prior information from the brain vasculature, blood circulation dynamics, etc.). Data from more clinics will become available in the near future, so an object-detection algorithm for 3D medical images that is robust against potential domain shifts is crucial.
This project is also part of the European Laboratory for Learning and Intelligent Systems (ELLIS), an effort to bring together the top AI-research labs into a joint environment (ELLIS, ELLIS Life Heidelberg, Andrés Martínez Mora's ELLIS profile). Part of the Project is being conducted together with researchers from the University of Amsterdam and Amsterdam University Medical Centers.
Contact: Andrés Martínez
nnU-Net
Image datasets are enormously diverse: image dimensionality (2D, 3D), modalities/input channels (RGB image, CT, MRI, microscopy, ...), image sizes, voxel sizes, class ratio, target structure properties and more change substantially between datasets. Traditionally, given a new problem, a tailored solution needs to be manually designed and optimized - a process that is prone to errors, not scalable and where success is overwhelmingly determined by the skill of the experimenter. Even for experts, this process is anything but simple: there are not only many design choices and data properties that need to be considered, but they are also tightly interconnected, rendering reliable manual pipeline optimization all but impossible! nnU-Net is a semantic segmentation method that automatically adapts to a given dataset. It will analyze the provided training cases and automatically configure a matching U-Net-based segmentation pipeline. No expertise required on your end! You can simply train the models and use them for your application.
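To give a flavor of what "automatic configuration" means, the grossly simplified sketch below derives one pipeline hyperparameter from a dataset fingerprint: a patch size is initialized from the median image shape and shrunk axis-by-axis until it fits a fixed voxel budget. These are not nnU-Net's actual rules (see the repository for those); `configure_patch_size` and the budget are illustrative.

```python
import numpy as np

def configure_patch_size(median_image_shape, voxel_budget=128 ** 3):
    """Toy fingerprint-to-configuration heuristic: start from the median
    image shape and repeatedly shrink the largest axis by 10% until the
    patch fits the given voxel budget."""
    patch = np.array(median_image_shape, dtype=int)
    while np.prod(patch) > voxel_budget:
        i = int(np.argmax(patch))
        patch[i] = max(1, int(patch[i] * 0.9))   # never shrink below one voxel
    return tuple(int(p) for p in patch)
```

nnU-Net applies this style of rule-based reasoning to many interconnected choices at once (resampling, normalization, network topology, batch size, ...), which is what makes the manual tuning step unnecessary.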
Contact: Fabian Isensee
Link: https://github.com/MIC-DKFZ/nnUNet
The Radiomics Processing Toolkit (RPTK): A Framework for optimized feature computation
RPTK is a comprehensive, standardized pipeline for radiomics data processing. This toolkit facilitates the end-to-end workflow for radiomics analysis, from image transformation and segmentation to feature extraction, stability analysis, and model application. By consolidating multiple radiomics processing stages into a single framework, RPTK streamlines the generation of high-quality, reproducible radiomics features that are ready for downstream analyses and model development.
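The feature-extraction stage of such a pipeline can be illustrated with a few IBSI-style first-order features computed over a region of interest. This is a hypothetical helper showing the kind of computation being standardized, not RPTK's API:

```python
import numpy as np
from scipy import stats

def first_order_features(image, mask):
    """Compute a handful of first-order radiomics features over the
    voxels selected by a binary region-of-interest mask."""
    roi = image[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=32)
    return {
        "mean": float(roi.mean()),
        "variance": float(roi.var()),
        "skewness": float(stats.skew(roi)),
        "entropy": float(stats.entropy(hist + 1e-12)),       # intensity histogram entropy
        "p10_p90_range": float(np.percentile(roi, 90) - np.percentile(roi, 10)),
    }
```

Standardizing such computations (binning, resampling, mask handling) across the whole pipeline is what makes the resulting features reproducible and comparable between studies.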
Contact: Jonas Bohn
Irregular and Sparse Medical Image Time Series
This project aims to study and model the trajectories of patient diseases using longitudinal and spatio-temporal medical imaging data. The initial focus is on developing benchmarks to evaluate existing methods and establish a solid foundation for comparison. Beyond benchmarking, the ultimate goal is to enable a deeper understanding of how diseases evolve over time and space, providing insights that can guide diagnosis, treatment planning, and monitoring of disease progression. This involves integrating and advancing methods for analyzing complex temporal and spatial patterns in medical imaging.
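As a point of reference for the benchmarks, the simplest possible baseline for irregularly sampled series is linear interpolation onto a regular grid, augmented with a "staleness" feature (time since the last real observation) that tells a downstream model how trustworthy each interpolated value is. Illustrative only; the project targets learned spatio-temporal models.

```python
import numpy as np

def resample_irregular(times, values, grid):
    """Interpolate an irregularly sampled 1D series onto a regular grid
    and report, per grid point, the time elapsed since the most recent
    real observation."""
    times = np.asarray(times, dtype=float)
    interp = np.interp(grid, times, np.asarray(values, dtype=float))
    # Index of the last observation at or before each grid point.
    last_obs = np.searchsorted(times, grid, side="right") - 1
    staleness = grid - times[np.clip(last_obs, 0, len(times) - 1)]
    return interp, np.maximum(staleness, 0.0)
```

Benchmarks can then quantify how much learned models improve over this kind of naive imputation, especially at grid points far from any observation.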
Contact: Nico Disch