Medical Image Computing
- Imaging and Radiooncology
Prof. Dr. Klaus Maier-Hein
Head of Division
The Division of Medical Image Computing (MIC) pioneers research in machine learning and information processing, with the particular aim of improving cancer patient care through systematic image data analytics. We structure and quantify imaging information from multiple time points and imaging technologies, e.g. magnetic resonance imaging or computed tomography, and link it with clinical and biological parameters.
Our Research
As an initiator and co-coordinator of the Helmholtz Imaging Platform (HIP), we pursue cutting-edge developments at the core of computer science, with applications in but also beyond medicine. We are particularly interested in techniques for semantic segmentation and object detection as well as in unsupervised learning and probabilistic modeling.
Methodological excellence can only be achieved on the basis of a sophisticated research software system and infrastructure, for example to facilitate highly scalable data analysis in a federated setting. Our technological portfolio in this regard forms the foundation of various national and international clinical research networks, such as the National Center for Tumor Diseases (NCT), the German Cancer Consortium (DKTK) and Cancer Core Europe (CCE). In collaboration with our clinical partners, we work on the direct translation of the latest machine learning advances into relevant clinical applications.
Our vision is to advance the quality of healthcare through methodological advances in artificial intelligence research and their large-scale clinical implementation. We therefore have a particular interest in techniques that improve the applicability of data science in clinical settings, e.g. by providing more interpretable decision-making, by explicitly dealing with data uncertainty, by increasing the generalizability of algorithms or by learning more powerful representations. We further study image computing concepts that combine mathematical modeling approaches with current machine learning techniques. We are dedicated to open science and committed to maintaining several open-source projects in order to share our advances with developers and the scientific community and to foster synergies.
Visualized Research Focus Areas of the Division
The illustration highlights the thematic breadth of our research, which operates at the intersection of method development, clinical applications, and platform solutions. A central focus lies in the analysis of imaging parameters, such as those derived from magnetic resonance imaging and computed tomography, which are combined with clinical and biological factors. Techniques such as semantic segmentation, object detection, and disease stratification play a key role in this process. Additionally, we develop advanced algorithms using unsupervised learning, representation-based methods, and active learning techniques, which model uncertainties and are robustly applicable in clinical settings.
Our work also involves the development of scalable data analyses within federated networks, for example, through our involvement in Helmholtz Imaging (HI) and other research networks. Through initiatives like the Medical Imaging Interaction Toolkit (MITK), we promote open science projects and enable collaboration within the scientific community. Our goal is to translate innovative developments in artificial intelligence directly into clinical applications, thereby sustainably improving healthcare.
Projects
Anatomy-informed Data Augmentation
Organs consisting of soft tissue, such as the prostate, constantly undergo deformation, yet the training of state-of-the-art computer-aided diagnosis systems still relies on simplistic spatial transformations. We propose a new anatomy-informed augmentation that leverages information from adjacent organs to simulate physiological deformations in the human pelvis, thereby substantially increasing prostate and lesion shape variability while preserving essential image features during model training.
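The sketch below is only an illustration of this idea, not the published transformation: a smooth random displacement field, scaled by the (here hypothetical) mask of an adjacent organ, is applied identically to image and label so that the deformation is strongest near that organ.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def organ_modulated_deformation(image, label, organ_mask, alpha=15.0, sigma=8.0, rng=None):
        """Warp image and label with a smooth random field scaled by a neighbouring-organ mask."""
        rng = np.random.default_rng() if rng is None else rng
        shape = image.shape
        # One smooth random displacement component per spatial axis
        disp = [gaussian_filter(rng.standard_normal(shape), sigma) * alpha for _ in shape]
        # Blur the organ mask so the deformation fades with distance from the organ
        weight = gaussian_filter(organ_mask.astype(float), sigma)
        coords = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
        warped = [c + d * weight for c, d in zip(coords, disp)]
        image_aug = map_coordinates(image, warped, order=1, mode="nearest")
        label_aug = map_coordinates(label, warped, order=0, mode="nearest")
        return image_aug, label_aug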
Contact: Balint Kovacs
---------------
Segmentation and Tracking in Longitudinal Medical Imaging
The aim of this project is to improve the tracking and segmentation of lesions in longitudinal medical images. Long-term lesion tracking is mostly formulated as a point retrieval task that matches corresponding locations in subsequent images. Methods used for this purpose range from classical image registration to, more recently, deep learning approaches. Furthermore, segmentation is currently decoupled from tracking and applied to each image separately; we are therefore designing a framework that merges the tasks of tracking and segmentation.
Contact: Maximilian Rokuss
---------------
Deep Learning for Treatment Effect Estimation and Discovering Predictive Biomarkers in Medical Imaging
Treatment decisions are often based on medical imaging data and the anticipated benefit in clinical outcomes, referred to as treatment effects. This project focuses on developing deep-learning methods for estimating treatment effects from pre-treatment images. At the same time, we also aim to identify predictive image biomarkers, i.e. features that are predictors of treatment effects. Our goal is to use these estimates to identify subgroups that benefit most from specific treatments and thereby improve treatment decision-making.
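As a simplified illustration of the underlying idea (not the deep-learning method under development), a T-learner fits separate outcome models for treated and untreated patients on image-derived features and takes their difference as the estimated individual treatment effect; all names below are placeholders.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def estimate_treatment_effect(features, treatment, outcome, new_features):
        """T-learner: difference of outcome models fitted on treated vs. control patients."""
        treated, control = treatment == 1, treatment == 0
        mu_treated = GradientBoostingRegressor().fit(features[treated], outcome[treated])
        mu_control = GradientBoostingRegressor().fit(features[control], outcome[control])
        # Positive values suggest a benefit from treatment for that patient
        return mu_treated.predict(new_features) - mu_control.predict(new_features)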
Contact: Shuhan Xiao
---------------
Active learning-based white matter segmentation
White matter tract segmentation stands as a crucial process in characterizing psychiatric conditions and preparing for surgeries like tumor resection. In this project, we introduce an innovative segmentation method leveraging active learning. This interactive approach involves human experts collaborating with a machine learning model, allowing the model to learn from user inputs. Our aim is to enhance and streamline the segmentation process, implementing this method into MITK Diffusion, a submodule within the MITK software. The objective is to guide researchers in white matter tract segmentation. Our collaborative efforts closely involve clinical and medical partners across various research domains, including neurosurgery, neuroanatomy, and psychiatry.
Contact: Robin Peretzke
---------------
Deep learning-based prediction of immunohistochemical stains from H&E images
The diagnosis of many diseases is based on the histopathological examination of tissue samples. Various stains are used to evaluate tissue, each intended to visualize different characteristics of the tissue sample. By default, hematoxylin and eosin (H&E) staining is applied to visualize the general characteristics of the tissue. However, additional special immunohistochemical (IHC) stains are often necessary for an accurate diagnosis and are not always available. The goal of this project is to use deep learning to predict the IHC expression of a tissue sample based on its H&E staining. In addition, we want to enable computational pathology workflows on the JIP by providing a standardized infrastructure for pathology data, as digital pathology is currently not well established due to the proprietary file formats of different vendors.
Contact: Maximilian Fischer
---------------
Multitask segmentation using partially annotated datasets
There is a large landscape of public and private 3D medical datasets, yet there is a scarcity of comprehensive multi-target datasets encompassing not only organs but also various pathologies. These datasets are typically only partially annotated, i.e. only a subset of the target structures is delineated, because the manual annotation of 3D medical images is both time-consuming and expensive. In numerous applications, particularly in the field of radiotherapy, comprehensive segmentations are essential, covering not just organs but also pathological regions. This project seeks to leverage the potential of multiple partially annotated datasets to develop a multitask segmentation network capable of addressing both organ and pathology segmentation.
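One common ingredient for training on such data, shown here purely as a sketch and not as the project's final design, is a loss that is evaluated only for the target structures that were actually annotated in each training case:

    import torch
    import torch.nn.functional as F

    def partial_annotation_loss(logits, targets, annotated):
        """logits: (B, C, ...) raw predictions; targets: float tensor of the same shape;
        annotated: (B, C) bool mask marking which classes carry labels in each case."""
        per_voxel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        per_class = per_voxel.flatten(2).mean(-1)              # average over voxels -> (B, C)
        mask = annotated.float()
        # Unannotated classes contribute nothing to the loss
        return (per_class * mask).sum() / mask.sum().clamp(min=1.0)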
Contact: Constantin Ulrich
---------------
Robust and Generalizable Medical Image Detection Algorithms for Vessel Occlusion Detection in Stroke-Suspected Patients
Medical imaging is prone to domain shifts (changes in acquisition protocols, diverse scanners, variable populations, etc.), which often lead to poor generalizability and impede the deployment of AI models in real-world systems. The Division of Medical Image Computing at the DKFZ has developed an algorithm for vessel occlusion detection from contrast-enhanced computed tomography (CT) angiographies (https://www.nature.com/articles/s41467-023-40564-8), evaluated across several clinics in Germany. The algorithm performs well in general, but undesirable performance drops can occur on out-of-distribution data. We investigate techniques that improve model robustness to such shifts (domain generalization, incorporation of prior information from the brain vasculature, blood circulation dynamics, etc.). Data from additional clinics will become available in the near future, so a 3D medical image object detection algorithm that is robust to potential domain shifts is crucial.
This project is also part of the European Laboratory for Learning and Intelligent Systems (ELLIS), an effort to bring together top AI research labs into a joint environment (ELLIS, ELLIS Life Heidelberg, Andrés Martínez Mora's ELLIS profile). Part of the project is conducted together with researchers from the University of Amsterdam and the Amsterdam University Medical Centers.
Contact: Andrés Martínez
---------------
nnU-Net
Image datasets are enormously diverse: image dimensionality (2D, 3D), modalities/input channels (RGB image, CT, MRI, microscopy, ...), image sizes, voxel sizes, class ratio, target structure properties and more change substantially between datasets. Traditionally, given a new problem, a tailored solution needs to be manually designed and optimized - a process that is prone to errors, not scalable and where success is overwhelmingly determined by the skill of the experimenter. Even for experts, this process is anything but simple: there are not only many design choices and data properties that need to be considered, but they are also tightly interconnected, rendering reliable manual pipeline optimization all but impossible! nnU-Net is a semantic segmentation method that automatically adapts to a given dataset. It will analyze the provided training cases and automatically configure a matching U-Net-based segmentation pipeline. No expertise required on your end! You can simply train the models and use them for your application.
Contact: Fabian Isensee
Link: https://github.com/MIC-DKFZ/nnUNet
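A typical run, with a placeholder dataset ID and paths, invokes the command-line entry points documented in the repository above (shown here from Python for convenience; it assumes nnU-Net is installed and its environment variables and dataset format are set up as described in the documentation):

    import subprocess

    dataset_id = "001"  # placeholder dataset ID, prepared in the nnU-Net dataset format
    subprocess.run(["nnUNetv2_plan_and_preprocess", "-d", dataset_id,
                    "--verify_dataset_integrity"], check=True)   # analyze data, configure the pipeline
    subprocess.run(["nnUNetv2_train", dataset_id, "3d_fullres", "0"],
                   check=True)                                   # train fold 0 of the 3d_fullres configuration
    subprocess.run(["nnUNetv2_predict", "-i", "imagesTs", "-o", "predictions",
                    "-d", dataset_id, "-c", "3d_fullres", "-f", "0"],
                   check=True)                                   # predict with the trained fold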
---------------
The Radiomics Processing Toolkit (RPTK): A Framework for Optimized Feature Computation
RPTK is a comprehensive, standardized pipeline for radiomics data processing. This toolkit facilitates the end-to-end workflow for radiomics analysis, from image transformation and segmentation to feature extraction, stability analysis, and model application. By consolidating multiple radiomics processing stages into a single framework, RPTK streamlines the generation of high-quality, reproducible radiomics features that are ready for downstream analyses and model development.
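RPTK's own interface is not shown here; as an illustration of the kind of feature extraction step such a pipeline standardizes, the following sketch uses the open-source pyradiomics package on an image/segmentation pair (file names are placeholders):

    from radiomics import featureextractor

    # Default extractor settings; a real pipeline would fix and document these for reproducibility
    extractor = featureextractor.RadiomicsFeatureExtractor()
    features = extractor.execute("patient_ct.nii.gz", "tumor_mask.nii.gz")
    for name, value in features.items():
        print(name, value)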
Contact: Jonas Bohn
---------------
Irregular and Sparse Medical Image Time Series
This project aims to study and model the trajectories of patient diseases using longitudinal and spatio-temporal medical imaging data. The initial focus is on developing benchmarks to evaluate existing methods and establish a solid foundation for comparison. Beyond benchmarking, the ultimate goal is to enable a deeper understanding of how diseases evolve over time and space, providing insights that can guide diagnosis, treatment planning, and monitoring of disease progression. This involves integrating and advancing methods for analyzing complex temporal and spatial patterns in medical imaging.
Contact: Nico Disch
---------------
Kaapana
Kaapana is an open-source toolkit for state-of-the-art platform provisioning in the field of medical data analysis. The applications comprise AI-based workflows and federated learning scenarios with a focus on radiological and radiotherapeutic imaging.
Obtaining large amounts of medical data necessary for developing and training modern machine learning methods is an extremely challenging effort that often fails in a multi-center setting, e.g. due to technical, organizational and legal hurdles. A federated approach where the data remains under the authority of the individual institutions and is only processed on-site is ideally suited to overcome these difficulties.
Kaapana provides a framework and a set of tools for sharing data processing algorithms, for standardized workflow design and execution as well as for performing distributed method development. This facilitates data analysis in a compliant way enabling researchers and clinicians to perform large-scale multi-center studies.
By adhering to established standards and by adopting widely used open technologies for private cloud development and containerized data processing, Kaapana integrates seamlessly with the existing clinical IT infrastructure, such as the Picture Archiving and Communication System (PACS), and ensures modularity and easy extensibility.
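As a generic illustration of the federated principle described above (not Kaapana's actual implementation), model updates trained locally at each site can be aggregated centrally without the imaging data ever leaving the sites:

    import numpy as np

    def federated_average(site_weights, site_sizes):
        """site_weights: one list of numpy parameter arrays per site; site_sizes: training cases per site."""
        total = float(sum(site_sizes))
        # Weighted average of each parameter array across sites (federated averaging)
        return [sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
                for i in range(len(site_weights[0]))]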
Contact: Ünal Akünal, Philipp Schader
Links: https://www.kaapana.ai/; https://github.com/kaapana/kaapana
---------------
RACOON – RAdiological COOperative Network
The RACOON initiative establishes a nationwide infrastructure for the structured acquisition, processing and analysis of radiological imaging data. By connecting all 38 university hospitals, which provide images together with structured diagnostic reports, it forms a solid foundation for radiological research in Germany. The established infrastructure and collected datasets support early detection systems and AI-based medical decision support and are thus an important step towards achieving pandemic preparedness.
The Division of Medical Image Computing provides its expertise in building federated machine learning infrastructures. It contributes the Kaapana software platform, which allows easy cohort definition as well as centralized and federated machine learning for image analysis. Furthermore, method sharing between the partners is streamlined by supporting the execution of containerized methods either on-site as part of the local RACOON-Nodes or in the central component as part of RACOON-Central.
Contact: Peter Neher
---------------
Medical Imaging Interaction Toolkit (MITK)
MITK is free and open-source software for the development of interactive medical image processing applications. It also provides a powerful, ready-to-use application, the MITK Workbench, which allows users to view, process, and segment medical images.
Contact: Stefan Dinkelacker, Ralf Floca
Links: https://www.mitk.org/; https://helmholtz.software/software/mitk
---------------
Helmholtz Metadata Collaboration (HMC) - Hub Health
The Helmholtz Metadata Collaboration Platform develops concepts and technologies for efficient and interdisciplinary metadata management spanning the Helmholtz research areas Energy, Earth and Environment, Health, Matter, Information, Aeronautics, Space and Transport. As HMC Hub Health, we support researchers and clinicians in structuring, standardizing, and expanding the collection of metadata to facilitate the re-use, interoperability, reproducibility, and transparency of their data.
Contact: Marco Nolden, Lukas Kulla
Link: https://www.helmholtz-metadaten.de
---------------
Joint Imaging Platform (JIP)
The Joint Imaging Platform (JIP) is a strategic initiative within the German Cancer Consortium (DKTK). The aim is to establish a technical infrastructure that enables modern and distributed imaging research within the consortium. The main focus is on the use of modern machine learning methods in medical image processing. It will strengthen collaborations between the participating clinical sites and support multicenter trials.
The project attempts to address the organizational challenges of data protection requirements by exchanging and distributing the processing methods rather than patient data.
Contact: Peter Neher
---------------
Helmholtz Imaging - Applied Computer Vision Lab
The Applied Computer Vision Lab, operating within the Helmholtz Imaging framework, is dedicated to catalyzing research in Helmholtz and beyond. The lab specializes in providing customized image analysis solutions and building tailored AI algorithms to address specific challenges. In this context, the lab actively engages in Helmholtz Imaging Collaborations, leveraging their expertise to address imaging-related challenges in partnership with other researchers in the Helmholtz Association. On a broader scale, they leverage their experience to develop and provide out-of-the-box solutions like nnU-Net which are applicable across domains and catalyze the development of new algorithms. Moreover, the lab is committed to enhancing algorithm evaluation by organizing competitions, assisting in the selection of appropriate metrics, and supporting evaluation schemes. Currently, the lab is collaboratively working to expand nnU-Net, aiming to cover a broader range of imaging domains and tasks, such as pixel-wise regression and instance segmentation. Their dedication extends to creating foundation models with the aim of revolutionizing the development of state-of-the-art AI methods across diverse imaging domains.
Contact: Fabian Isensee
Links: https://helmholtz-imaging.de/
---------------
Effective Privacy-Preserving Adaptation of Foundation Models for Medical Tasks
Foundation models in the vision domain are large machine learning models pre-trained on vast amounts of natural images to extract relevant features from their input data. In the PAFMIM project, we aim to adapt existing foundation models to sensitive medical images, such as CT and MRI data. The challenges include ensuring good performance of the models on medical images and guaranteeing privacy for the medical images. Our project addresses these challenges in a joint approach combining competencies from medical image computing and privacy-preserving machine learning.
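One possible ingredient, shown only as an assumption and not necessarily the project's approach, is differentially private training of the adapted model parts; the sketch below uses the Opacus library with purely illustrative data and hyperparameters.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Stand-ins for an adaptation head on top of frozen foundation-model features
    model = torch.nn.Linear(512, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    data = TensorDataset(torch.randn(64, 512), torch.randint(0, 2, (64,)))
    loader = DataLoader(data, batch_size=8)

    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model, optimizer=optimizer, data_loader=loader,
        noise_multiplier=1.0, max_grad_norm=1.0)  # per-sample gradient clipping + Gaussian noise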
Contact: Santhosh Parampottupadam
Links: https://pafmim.github.io/
---------------
Secure Decentralized Medical Image Analysis
The aim of this project is to enhance security in decentralized medical image analysis by developing strategies that leverage data from various sources while complying with strict privacy laws. It focuses on evaluating existing practices and developing new, secure algorithms for medical image analysis, including practical testing in clinical environments to enhance the techniques' effectiveness and security. The ultimate goal is to enable safe and improved medical image analysis for widespread use in national healthcare studies. Within the project, the RACOON initiative plays a crucial role, providing the framework and infrastructure for implementing and testing these advanced security strategies in medical imaging across various clinical settings nationwide.
Contact: Benjamin Hamm
---------------
CCE-DART
The CCE-DART (CCE Building Data Rich Clinical Trials) project, funded by the European Union, focuses on improving the efficiency and effectiveness of clinical trials in oncology. It is carried out by Cancer Core Europe (CCE), a consortium of seven comprehensive cancer centers within Europe.
At the German Cancer Research Center (DKFZ), a multidisciplinary team from five different departments is actively working towards this goal. One of the contributors is the Division of Medical Image Computing, which specializes in building federated image analysis infrastructure. We are building a data sharing and analysis platform based on the Kaapana framework. This platform will enable researchers to access relevant imaging data and perform federated image analysis more efficiently.
Contact: Philipp Schader
---------------
M²OLIE (“Mannheim Molecular Intervention Environment“)
M²OLIE is one of nine Research Campuses in Germany that have been funded by the Federal Ministry of Education and Research since 2012 as part of the “Research Campus – Public-Private Partnership for Innovation” Initiative. In the sub-project SIM²BA (Standardization & Interoperability of MultiModal Image Analysis Methods) we investigate methods to connect data and machine learning methods contributed by different partners to make them available for evaluation in the Closed Loop process of the research campus.
Contact: Maximilian Fischer
---------------
Quality-controlled analysis of large population-based imaging datasets
In this project, we explore approaches for conducting quality-controlled image analysis in large health studies such as the German National Cohort or the UK Biobank. We specifically focus on efficient and reliable techniques for performing quality control on machine-generated image segmentations in the absence of ground truth, as a prerequisite for ensuring the correctness of image-based measurements. To establish time- and annotation-efficient solutions that are broadly applicable to various organs of interest, we investigate the use of uncertainty quantification methods as an efficient means of estimating error. We closely examine goal-specific performance measures to assess how well error estimators are suited for accomplishing important tasks of segmentation quality control. Beyond this, we focus on the development of deep learning-based techniques for detecting and repairing notorious MR imaging artifacts that otherwise require tedious manual effort to handle and that are known to corrupt measurements derived from affected imaging data.
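A minimal example of such an error estimator, assuming softmax outputs of a segmentation model and shown only as an illustration, is the mean predictive entropy over the predicted foreground:

    import numpy as np

    def mean_foreground_entropy(softmax_probs):
        """softmax_probs: (C, ...) array of per-voxel class probabilities; class 0 is background."""
        eps = 1e-8
        entropy = -(softmax_probs * np.log(softmax_probs + eps)).sum(axis=0)
        foreground = softmax_probs.argmax(axis=0) > 0
        # Higher values indicate less confident (potentially erroneous) segmentations
        return float(entropy[foreground].mean()) if foreground.any() else 0.0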
Contact: Tobias Norajitra
---------------
Enhancing Robustness of Medical Image Segmentation in Federated Environments
Medical images are crucial for diagnostics, but frequently distributed across various centers, precluding direct data sharing. Federated analyses and learning, in which algorithms are distributed instead, are a possible solution to this problem. In this project, our objective is to assess the robustness of segmentation algorithms by validating them on a large-scale federation. Furthermore, we aim to enhance the trustworthiness of these algorithms by developing methods capable of detecting potential inaccuracies in segmentation results. This proactive approach ensures the identification of instances where segmentation may be erroneous, contributing to the overall reliability of medical image analysis in distributed environments.
Contact: Maximilian Zenk
---------------
Addressing multi-modal image misalignments for enhanced computer-aided diagnosis
Diagnosis of prostate cancer is one of the most challenging tasks in oncology, requiring multi-parametric MRI. Deep learning techniques have already been applied successfully to such medical datasets to support various analytical tasks. However, there is not yet a common standard for handling the misalignments that naturally occur between the different image modalities. The goal of this project is to find strategies for misalignment handling that are optimized for clinically applicable tasks, such as object detection and semantic segmentation, for enhanced diagnostic performance.
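One standard preprocessing option, shown here only as an example and not as the strategy under investigation, is rigid inter-sequence registration, e.g. with SimpleITK (file names are placeholders):

    import SimpleITK as sitk

    fixed = sitk.ReadImage("t2w.nii.gz", sitk.sitkFloat32)    # reference sequence
    moving = sitk.ReadImage("dwi.nii.gz", sitk.sitkFloat32)   # sequence to be aligned

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)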
Contact: Balint Kovacs
---------------
AI-Assisted Breast Cancer Screening with Diffusion Weighted MRI
Breast cancer is the most common cancer among women worldwide, in both developed and developing countries. Diffusion-weighted imaging (DWI) has potential in breast cancer screening, as it is a fast, safe, and accurate acquisition technique. This project aims to create robust deep learning models for breast lesion detection and classification using DWI and to produce a software platform that can assist clinical decision-making by providing AI-based diagnostic suggestions.
Contact: Dimitrios Bounias
---------------
Automatic image-based spine screw planning
CT-navigated spinal instrumentation requires intraoperative screw trajectory planning in CT volumes. In current clinical routine this is often performed manually, which is error-prone and time-consuming. This project focuses on the development of deep learning-based methods for automatic image-based spine screw planning. Leveraging a large intraoperative planning dataset, the screw planning task is interpreted as a segmentation task, and screw dimensions, location, and orientation are automatically predicted from the image context.
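As a hypothetical illustration of the final step (not the project's actual pipeline), screw location, orientation and length can be read off a predicted binary screw mask via its centroid and principal axis:

    import numpy as np

    def screw_pose_from_mask(mask, spacing=(1.0, 1.0, 1.0)):
        """mask: binary 3D array of one predicted screw; spacing: voxel size in mm."""
        coords = np.argwhere(mask > 0) * np.asarray(spacing)   # voxel indices -> physical coordinates
        centroid = coords.mean(axis=0)
        # Principal axis of the voxel cloud approximates the screw trajectory
        _, _, vt = np.linalg.svd(coords - centroid, full_matrices=False)
        direction = vt[0]
        length = np.ptp((coords - centroid) @ direction)       # extent along that axis
        return centroid, direction, length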
Contact: Alexandra Ertl
---------------
Automatic Image Analysis in Patients with Multiple Myeloma
Multiple Myeloma (MM) is a malignancy of bone marrow plasma cells, so-called myeloma cells, which disrupt the production of new blood cells and cause bone breakdown. In recent years, modern imaging technologies such as computed tomography (CT) and magnetic resonance imaging (MRI) have gained a lot of attention in the diagnosis and staging of MM, and the standardized, comprehensive evaluation of whole-body imaging is of great interest. In this project, we investigate fully automatic image analysis methods for the diagnosis and image-based staging of myeloma patients. This encompasses tasks such as bone marrow segmentation, lesion detection and subsequent analysis using radiomics and deep learning methodologies.
Contact: Jessica Kächele
---------------
Digital Cancer Prevention
To develop a research-supporting risk prediction platform for the National Cancer Prevention Center, we are currently assembling an interdisciplinary digital cancer prevention team. The focus of the working group is the development of a specific, evidence-based portal for the individual calculation of personal cancer risk and for lifestyle recommendations. In doing so, existing prediction models will be validated, curated and merged according to a standardized procedure.
Through this portal, interested citizens should be able to easily evaluate their individual cancer risk, receiving tailored information based on their personal profile. The calculation incorporates key factors such as demographic data, lifestyle information, family history, and past test results. Simultaneously, user data is instrumental in refining and optimizing our prediction models, ensuring a consistently high level of performance.
Our long-term vision is to establish a research-capable platform that facilitates sustainable data collection and provides access to research data in the field of modern prevention research.
Contact: Odile Elias
---------------
On-site detection of aortic dissections in emergency CT scans
With recent advances in deep learning, there is a promising foundation for achieving the fast detection of aortic dissections (AD) in clinical CT data acquired under emergency conditions. Such computerized detection can help drastically decrease the reaction times for urgent cases, which are regularly delayed due to the absence of clear symptoms. This project is focused on the development of a fast and accurate approach for AD detection in emergency CT data. Our goal is to alert clinicians in the event of an acute and life-threatening AD, and more generally to provide an initial assessment of the specific AD type (after Stanford classification), and of any coronary or carotid artery involvement. To this end, we perform in-depth analyses on segmentation and detection approaches for solving this task, with a specific focus on the generalizability of the employed techniques across heterogeneous multi-centric data.
Contact: Tobias Norajitra
---------------
LiverCRC
Detecting and segmenting colorectal cancer (CRC) and adenomas remains a significant challenge in medical image processing due to the complexity of imaging the colon. To address this, our project shifts the focus to the liver, which is easier to segment and serves as a critical site of interaction with the colon. We use deep learning approaches to stratify healthy individuals, adenoma patients, and CRC patients. Radiomics features extracted with RPTK are integrated with clinical parameters in a comprehensive analysis to differentiate between these three groups. This approach aims to provide a novel, reliable diagnostic solution while bypassing the challenges of direct colon segmentation.
Contact: Jonas Bohn, Darya Trofimova
---------------
Non-invasive characterization of the intratumoral heterogeneity in sarcoma patients (Heroes-AYA)
This project focuses on the non-invasive characterization of intratumoral heterogeneity in sarcoma patients, with a particular emphasis on young and adolescent cases. Utilizing radiomics features extracted and selected through RPTK, we analyze data derived from PET-MRI scans to uncover critical insights into tumor composition. By quantifying heterogeneity, our approach aims to improve understanding of tumor dynamics, enabling personalized treatment strategies and enhancing prognostic accuracy in this vulnerable patient group.
Contact: Jonas Bohn
Team
Management & Administration
-
Prof. Dr. Klaus Maier-Hein
Head of Division
-
Dr. Nina Sophia Decker
Science Manager
-
Dr. Daniel Walther
Science Manager
-
Michaela Gelz
Administration and Technical Support
-
Nina Kraft
Administration and Technical Support
-
Stefanie Strzysch
Board Members
-
Dr. Ralf Omar Floca
-
Dr. Fabian Isensee
-
Dr. Peter Neher
-
Dr. Marco Nolden
Scientists & Postdocs
-
Ünal Akünal
-
Rajesh Baidya
-
Dr. Stefan Dinkelacker
-
Stefan Dvoretskii
-
Odile Elias
-
Lorenz Feineis
-
Hanno Gao
-
Partha Ghosh
-
Karol Gotkowski
-
Hamideh Haghiri
-
Ole Johannsen
-
Ali Emre Kavur
-
Lars Krämer
-
Lucas Kulla
-
Dr. Tobias Norajitra
-
Ashis Ravindran
-
Stephen Schaumann
-
Elisa Stegmeier
-
Sebastian Ziegler
-
Dr. David Zimmerer
PhD Students
-
Jonas Bohn
-
Dimitrios Bounias
-
Markus Bujotzek
-
Stefan Denner
-
Nico Disch
-
Katharina Eckstein
-
Alexandra Ertl
-
Maximilian Fischer
-
Benjamin Hamm
-
Yannick Kirchhoff
-
Balint Kovacs
-
Moritz Langenberg
-
Andres Martinez Mora
-
Santhosh Parampottupadam
-
Robin Peretzke
-
Maximilian Rokuss
-
Saikat Roy
-
Philipp Schader
-
Raphael Stock
-
Constantin Ulrich
-
Tassilo Wald
-
Shuhan Xiao
-
Maximilian Zenk
Master Students
-
Marlin Hanstein
-
Florian Max Hauptmann
Intern
-
Jonathan Suprijadi
Selected Publications
Kickingereder P, Isensee F, Tursunova I, Petersen J, Neuberger U, Bonekamp D, Brugnara G, Schell M, Kessler T, Foltyn M, Harting I, Sahm F, Prager M, Nowosielski M, Wick A, Nolden M, Radbruch A, Debus J, Schlemmer HP, Heiland S, Platten M, von Deimling A, van den Bent MJ, Gorlia T, Wick W, Bendszus M, Maier-Hein KH: Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study. The Lancet Oncology, 2019.
Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 2021.
Wasserthal J, Neher P, Maier-Hein KH: TractSeg - Fast and accurate white matter tract segmentation. NeuroImage, 2018.
Maier-Hein KH, Neher PF, Houde JC, Cote MA, Garyfallidis E, Zhong J, Chamberland M, Yeh FC, Lin YC, Ji Q, Reddick WE, Glass JO, Chen DQ, Feng Y, Gao C, Wu Y, Ma J, Renjie H, Li Q, Westin CF, Deslauriers-Gauthier S, Gonzalez JOO, Paquette M, St-Jean S, Girard G, Rheault F, Sidhu J, Tax CMW, Guo F, Mesri HY, David S, Froeling M, Heemskerk AM, Leemans A, Bore A, Pinsard B, Bedetti C, Desrosiers M, Brambati S, Doyon J, Sarica A, Vasta R, Cerasa A, Quattrone A, Yeatman J, Khan AR, Hodges W, Alexander S, Romascano D, Barakovic M, Auria A, Esteban O, Lemkaddem A, Thiran JP, Cetingul HE, Odry BL, Mailhe B, Nadar MS, Pizzagalli F, Prasad G, Villalon-Reina JE, Galvis J, Thompson PM, Requejo FS, Laguna PL, Lacerda LM, Barrett R, Dell`Acqua F, Catani M, Petit L, Caruyer E, Daducci A, Dyrby TB, Holland-Letz T, Hilgetag CC, Stieltjes B, Descoteaux M: The challenge of mapping the human connectome based on diffusion tractography. Nature Communications, 2017.
Get in touch with us
Prof. Dr. Klaus Maier-Hein
Head of Division
Postal address: