A main challenge of biomedical research in the postgenomic era is to understand the molecular mechanisms of life in health and disease. Recent advances in molecular probing and imaging technologies are having an enormous impact on this research by enabling visualization of (intra)cellular processes with very high sensitivity and specificity. They also facilitate the discovery of new biomarkers for early diagnosis and enhanced preclinical validation of novel treatments in small-animal models as a first step towards clinical implementation.

Current studies into dynamic phenomena at the cellular and molecular levels are generating vast amounts of multiparameter spatiotemporal image data, containing much more relevant information than can be analyzed by human observers. Hence there is a rapidly growing need for automated methods for quantitative analysis of such data, not only to cope with the rising rate at which images are acquired, but also to reach a higher level of accuracy, objectivity, and reproducibility than traditional data analysis methods.

The goal of our research is to develop advanced image analysis methods to enable reliable quantification and characterization of cellular and molecular dynamic processes. To this end we explore solutions for a wide range of problems, including image restoration, enhancement, super-resolution, detection, segmentation, classification, registration, and tracking. In addition to creating new methods we are strong proponents of evaluating methods thoroughly and making them publicly available in the form of user-friendly software tools.

Past and current published research includes:

Grand Challenges for Objective Benchmarking of Bioimage Analysis Methods

Bioimage Analysis Challenges

Every year many papers are published that describe new methods for solving specific bioimage analysis problems, but few papers provide a fair and direct comparison with the state of the art. Developers often compare their new methods to their own reimplementation or tuning of alternative methods. Evaluation and benchmarking of methods can be done more objectively by organizing so-called grand challenges, in which developers can put their methods to the test using standardized data and performance metrics.

We have been and are actively involved in the organization of challenges in bioimage analysis, including a comparison of methods for particle detection and tracking in time-lapse fluorescence microscopy images, a benchmark of methods for cell segmentation and tracking in time-lapse images from various types of microscopy, and an evaluation of methods for large-scale 3D neuron reconstruction from optical microscopy images. Such community efforts reduce potential sources of bias and greatly help the field by revealing the strengths and weaknesses of current methods and establishing a "litmus test" for future developers aiming to improve on the state of the art.
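To make the idea of standardized performance metrics concrete, the snippet below computes the Jaccard index (intersection over union), a measure commonly used to score segmentation results in such challenges. It is a generic sketch, not code from any particular challenge; the function name and flattened-mask representation are illustrative.

```python
def jaccard(seg_a, seg_b):
    """Jaccard index (intersection over union) of two binary masks,
    a standard way to score a submitted segmentation against a
    reference annotation in benchmarking challenges."""
    inter = sum(1 for a, b in zip(seg_a, seg_b) if a and b)
    union = sum(1 for a, b in zip(seg_a, seg_b) if a or b)
    return inter / union if union else 1.0

# flattened binary masks: predicted versus reference segmentation
score = jaccard([1, 1, 0, 0], [1, 0, 1, 0])
```

A score of 1 indicates perfect agreement, 0 no overlap at all; challenges typically average such scores over many annotated images.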

Particle Detection and Tracking in Time-Lapse Fluorescence Microscopy

Particle Detection and Tracking

Quantitative analysis of dynamic processes in living cells typically requires the detection and tracking of hundreds to thousands of particles in time-lapse fluorescence microscopy images. This is a challenging problem, since light exposure is often economized during image acquisition to avoid photodamage to the cells and to minimize photobleaching of the fluorescent labels, as a result of which the signal-to-noise ratio in the images is usually very low.

We have surveyed methods and software tools for cell and particle tracking and have evaluated solutions for particle detection and for multiframe particle association. In addition we have developed new Bayesian estimation methods for particle tracking that have been used for studying a wide range of intracellular dynamic phenomena. Examples include the regulation of microtubule behavior, the interaction of microtubule-targeting agents, the docking and fusion of exocytotic carriers, genome maintenance, and focal adhesion dynamics. Some of our methods have also been evaluated in the international particle tracking challenge, in which they showed favorable performance compared to the state of the art.
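As a minimal illustration of the Bayesian filtering principle underlying such trackers (not the published methods themselves, which handle many interacting particles and richer motion models), the sketch below tracks a single particle coordinate with a constant-velocity Kalman filter; all names and noise parameters are illustrative.

```python
def kalman_track(measurements, q=0.01, r=0.25):
    """Track one particle coordinate with a constant-velocity Kalman
    filter: predict with the motion model, then correct with each noisy
    position measurement (q = process noise, r = measurement noise)."""
    x, v = measurements[0], 0.0              # state: position, velocity
    p_xx, p_xv, p_vv = 1.0, 0.0, 1.0         # state covariance entries
    estimates = [x]
    for z in measurements[1:]:
        # predict one frame ahead (dt = 1)
        x, v = x + v, v
        p_xx, p_xv, p_vv = p_xx + 2*p_xv + p_vv + q, p_xv + p_vv, p_vv + q
        # update with the measured position z
        s = p_xx + r                         # innovation covariance
        k_x, k_v = p_xx / s, p_xv / s        # Kalman gains
        x, v = x + k_x * (z - x), v + k_v * (z - x)
        p_xx, p_xv, p_vv = p_xx - k_x*p_xx, p_xv - k_x*p_xv, p_vv - k_v*p_xv
        estimates.append(x)
    return estimates

# noise-free linear motion: the estimates should lock onto the trajectory
track = kalman_track([float(t) for t in range(20)])
```

The prediction step encodes the prior belief about particle motion, and the update step weighs it against the (noisy) detection, which is exactly what makes such trackers robust at low signal-to-noise ratios.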

Neuron Reconstruction and Quantification in Fluorescence Microscopy

Neuron Reconstruction and Quantification

Digital reconstruction of the morphology of neurons is an important step toward understanding the functionality of neuronal networks and how they may be affected by neurodegenerative diseases, and toward quantitatively assessing the effects of drugs and therapies. This requires converting the (often large and sparse) microscopic images of neurons into faithful representations of the essential topological and geometrical features of the axonal and dendritic arborizations, including point coordinates, local diameters, and connectivity between points.
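Such reconstructions are commonly exchanged in the SWC file format, where each line stores exactly these features: a point identifier, type, coordinates, radius, and parent point. A minimal sketch of reading such a reconstruction and computing its total cable length, assuming the standard seven-column SWC layout (function names are illustrative):

```python
import math

def parse_swc(text):
    """Parse SWC-format reconstruction lines into a dictionary mapping
    node id -> (type, x, y, z, radius, parent id). Lines starting with
    '#' are comments; the root node has parent id -1."""
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#'):
            i, t, x, y, z, r, p = line.split()
            nodes[int(i)] = (int(t), float(x), float(y), float(z),
                             float(r), int(p))
    return nodes

def total_length(nodes):
    """Total cable length: sum of Euclidean distances from every node
    to its parent node."""
    return sum(math.dist((x, y, z), nodes[p][1:4])
               for t, x, y, z, r, p in nodes.values() if p in nodes)

# a toy three-point neurite: two segments of length 5 each
toy = parse_swc("""# id type x y z radius parent
1 1 0 0 0 1.0 -1
2 3 3 4 0 0.5 1
3 3 3 4 5 0.5 2
""")
```

From the same data structure, other common morphometrics (number of branch points, path distances, local diameters) follow directly.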

We have reviewed methods and software tools for neuron reconstruction and analysis. In addition we have developed a new method for detection and characterization of junctions and terminations in fluorescence microscopy images of neurons to aid the reconstruction process. We are also developing new fully automated neuron reconstruction methods using probabilistic tracing algorithms and are actively involved in an international community effort to benchmark these and many other methods for neuron analysis.

Cell Segmentation and Tracking in Time-Lapse Fluorescence Microscopy

Cell Segmentation and Tracking

Being the fundamental building blocks of life, cells are the key actors in many biological processes. Cell proliferation, differentiation, and migration are essential for the conception, development, and maintenance of any living organism. These dynamic processes also play a crucial role in the onset and progression of many diseases. Their study often involves imaging and analysis of the (morpho)dynamic behavior of single cells or cells in tissue under normal and perturbed conditions. This requires segmentation and tracking of large numbers of cells in time-lapse microscopy images.

We have reviewed the state of the art in cell segmentation as well as in cell tracking. In addition we have developed a model-evolution based method for cell segmentation and tracking in time-lapse fluorescence microscopy images. Adapted versions of the method have been used for cell motion correction and subsequent cell phase identification based on intracellular foci pattern analysis. The method has also been used for cell lineage reconstruction during early development of Caenorhabditis elegans in order to systematically study the effects of inhibiting chromatin regulators.
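The tracking part of such pipelines ultimately comes down to associating segmented cells across frames. A deliberately simplified sketch of that association step is given below (greedy nearest-neighbor linking of centroids; the model-evolution method itself instead propagates cell contours over time, and all names and the distance threshold here are illustrative):

```python
import math

def link_cells(frame_a, frame_b, max_dist=20.0):
    """Greedily link each cell centroid in frame_a to the nearest
    unclaimed centroid in frame_b, considering candidate pairs in order
    of increasing distance and discarding links beyond max_dist."""
    pairs = sorted((math.dist(ca, cb), i, j)
                   for i, ca in enumerate(frame_a)
                   for j, cb in enumerate(frame_b))
    links, used = {}, set()
    for d, i, j in pairs:
        if d > max_dist:
            break
        if i not in links and j not in used:
            links[i] = j
            used.add(j)
    return links

# two cells moving slightly between consecutive frames
links = link_cells([(0.0, 0.0), (10.0, 0.0)], [(11.0, 1.0), (1.0, 0.0)])
```

Cells that remain unlinked in the second frame would be treated as candidate division or entry events in a full lineage-reconstruction pipeline.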

Structural and Functional Cardiovascular Analysis in Magnetic Resonance Imaging

MR Image Analysis

Magnetic resonance imaging (MRI) is a noninvasive technique that is widely used in clinical practice to image the structure and function of organs in health and disease through specialized radiofrequency pulse sequences. A variety of sequences can be used to generate images of the heart and the blood vessels (angiography) using flow effects or other contrast (inherent or administered). In addition, a technique known as tagged MRI offers great potential for quantitative analysis of a variety of functional parameters of heart dynamics.

We have developed a Bayesian estimation based method for cardiac motion tracking and analysis in tagged MR image sequences that naturally combines information about the heart dynamics, the imaging process, and tag appearance, and performs favorably compared to state-of-the-art alternative methods used in the field. In addition we have developed methods for lumen segmentation and bifurcation detection in MR images of the carotid arteries based on sparse representation classification approaches that leverage the redundancy of dictionary based representations of the image information.
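The idea behind sparse representation classification can be sketched as follows: represent an unseen feature vector in terms of dictionary atoms drawn from each class and assign the class that reconstructs it with the smallest residual. The snippet below is a heavily simplified (1-sparse) illustration under toy dictionaries, not the published method, which solves a joint sparse coding problem over all atoms:

```python
def src_classify(y, class_atoms):
    """Simplified sparse-representation classification: approximate the
    feature vector y by a scaled single atom from each class dictionary
    and return the class achieving the smallest reconstruction residual."""
    def residual(atom):
        c = (sum(a * b for a, b in zip(atom, y))
             / sum(a * a for a in atom))      # least-squares coefficient
        return sum((b - c * a) ** 2 for a, b in zip(atom, y))
    return min(class_atoms,
               key=lambda cls: min(residual(a) for a in class_atoms[cls]))

# toy dictionaries of intensity-profile atoms for two tissue classes
atoms = {"lumen": [[1, 1, 1, 0], [1, 1, 0, 0]],
         "wall": [[0, 0, 1, 1], [0, 1, 1, 1]]}
label = src_classify([2.0, 2.0, 2.0, 0.0], atoms)
```

The redundancy of the dictionaries is what makes the approach robust: a vector need only be well approximated by some small combination of class-specific atoms.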

Super-Resolution Reconstruction and Analysis in Magnetic Resonance Imaging

Super-Resolution Reconstruction

Any physical imaging system naturally has its limitations in terms of maximum achievable resolution, sensitivity, contrast, and signal-to-noise level. Especially in molecular imaging, where the phenomena of interest occur at levels beyond the reach of (pre)clinical scanners, this poses a problem. However, multiple scans, taken at different time points, from different viewpoints, or using different imaging parameters, often contain additional information that may be exploited by computational algorithms to build a single image with improved properties compared to any of the individual scans.

We have investigated to what extent super-resolution reconstruction (SRR) methods can improve the trade-off between resolution, signal-to-noise ratio, and acquisition time in magnetic resonance imaging (MRI). As a proof of concept we have shown successful application of SRR for visualization and analysis of activity in the mouse brain with specific MRI protocols. We have also developed an efficient computational framework capable of generating super-resolved images of interactively indicated regions in whole-body MRI mouse data by integrating state-of-the-art image processing techniques from the areas of articulated atlas-based segmentation, planar reformation, and SRR.
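The core idea of SRR can be illustrated in one dimension: several shifted, downsampled scans of the same object jointly constrain a higher-resolution signal, which can be recovered by iterative back-projection. The toy sketch below assumes a simple shift-and-average acquisition model and is not our MRI-specific framework; all names and parameters are illustrative.

```python
def simulate_lr(hr, shift):
    """Forward model: shift the high-resolution signal, then downsample
    by a factor of two by averaging adjacent sample pairs."""
    return [(hr[2*i + shift] + hr[2*i + shift + 1]) / 2
            for i in range((len(hr) - shift) // 2)]

def srr_ibp(lr_obs, n_hr, n_iter=500, step=0.5):
    """Iterative back-projection: simulate each low-resolution scan from
    the current high-resolution estimate and feed the residuals back to
    the samples that produced them."""
    hr = [0.0] * n_hr
    for _ in range(n_iter):
        for shift, lr in lr_obs.items():
            for i, (obs, sim) in enumerate(zip(lr, simulate_lr(hr, shift))):
                err = step * (obs - sim)
                hr[2*i + shift] += err
                hr[2*i + shift + 1] += err
    return hr

# two 2x-downsampled scans of the same signal, offset by one sample
truth = [0.0, 0.0, 1.0, 3.0, 3.0, 1.0, 0.0, 0.0]
scans = {0: simulate_lr(truth, 0), 1: simulate_lr(truth, 1)}
recon = srr_ibp(scans, len(truth))
```

Neither scan alone determines the high-resolution signal, but together the offset observations pin it down, which is exactly the trade-off between resolution, noise, and acquisition time that SRR exploits.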

Atherosclerotic Carotid Plaque Quantification in Computed Tomography Angiography

Atherosclerotic Plaque Quantification

Computed tomography angiography (CTA) is an accurate imaging modality for assessing the presence of carotid atherosclerotic plaque and the severity of stenosis. Both phenomena are known to be risk factors for the occurrence of transient ischemic attacks and strokes. However, atherosclerotic carotid plaque and its components (calcifications, fibrous tissue, and lipid core) could be better predictors of such acute events than the widely used degree of stenosis, and may be more useful in the selection of patients who could benefit from therapeutic intervention.

We have developed a software tool to facilitate the measurement of atherosclerotic carotid plaque component volumes in multidetector CTA images. As a proof of concept we have shown that the tool enables characterization and quantification of plaque burden, calcifications, and fibrous tissue in good correlation with histology. Using the tool we have also assessed the association between intracranial internal carotid artery calcifications and cardiovascular risk factors in patients with ischemic cerebrovascular disease.
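Concretely, component volumes in CTA are typically obtained by classifying voxels within the delineated plaque according to Hounsfield-unit (HU) ranges and multiplying the counts by the voxel volume. The sketch below illustrates this; the threshold values are placeholders, not the calibrated cut-offs of the actual tool, which were established by correlation with histology.

```python
def plaque_volumes(hu_values, voxel_volume_mm3, lipid_max=60, calc_min=130):
    """Classify delineated plaque voxels by Hounsfield-unit thresholds
    (illustrative values, not calibrated cut-offs) and return the
    volume of each component in cubic millimeters."""
    counts = {"lipid": 0, "fibrous": 0, "calcification": 0}
    for hu in hu_values:
        if hu >= calc_min:
            counts["calcification"] += 1
        elif hu < lipid_max:
            counts["lipid"] += 1
        else:
            counts["fibrous"] += 1
    return {name: n * voxel_volume_mm3 for name, n in counts.items()}

# five plaque voxels of 0.5 mm^3 each
volumes = plaque_volumes([30, 90, 200, 45, 150], 0.5)
```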

Automated Vessel Lumen Tracing and Segmentation in Cardiovascular Imaging

Vessel Centerline Extraction

Cardiovascular diseases and associated complications are among the major causes of death in the western world. Clinical procedures for diagnosing and treating cardiovascular patients call for accurate vessel analysis, most notably in stenosis grading, preoperative planning, and disease progression monitoring. A first step toward complete vessel analysis is often the detection of elongated image structures and the localization of their centerlines.

We have developed a new method for vessel lumen centerline tracing in cardiovascular image data from digital subtraction angiography (DSA), computed tomography angiography (CTA), and magnetic resonance angiography (MRA). The method is based on multiscale image processing using both first-order (edge) and second-order (ridge) filters and combining the extracted information in a cost function that is globally minimized in searching for the optimal vessel centerline. We have also extended the method to perform vessel lumen segmentation in carotid CTA data by using the lumen centerline estimate to initialize a level-set algorithm for finding the lumen boundaries based on both image information and smoothness constraints.
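The global minimization over all candidate centerlines can be carried out with a shortest-path algorithm on the cost image: the centerline is the path between two endpoints with minimal cumulative cost. A sketch using Dijkstra's algorithm on a toy 2D cost image follows (4-connected grid; the filter-derived cost function itself is omitted, and all names are illustrative):

```python
import heapq

def minimal_cost_path(cost, start, end):
    """Dijkstra's algorithm on a 2D cost image: find the path from
    start to end that minimizes the sum of visited pixel costs on a
    4-connected grid, i.e. the globally optimal low-cost curve."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float('inf')):
            continue                              # stale heap entry
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end                       # backtrack to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy cost image with a low-cost channel along the middle row
cost_img = [[9, 9, 9, 9],
            [1, 1, 1, 1],
            [9, 9, 9, 9]]
center = minimal_cost_path(cost_img, (1, 0), (1, 3))
```

Because the minimum is global, the extracted centerline cannot get trapped by local gaps or noise in the vesselness response, unlike purely local tracking schemes.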

Semiautomatic Neurite Tracing and Analysis in Fluorescence Microscopy Images

Interactive Neurite Tracing

The investigation of the molecular mechanisms involved in neurite outgrowth and differentiation in health and disease requires accurate and reproducible segmentation and quantification of neuronal processes. Despite the advent of advanced confocal microscope systems capable of imaging neurons in 3D at high resolution, many experiments in neurobiology are performed using 2D fluorescence microscopy, as they are based on in-vitro cell cultures of neurons, which extend mostly in two dimensions.

To facilitate the quantification task we have developed a semiautomatic method for neurite tracing which, similar to the vessel lumen tracing method described above, uses advanced image filtering and global cost function minimization algorithms to detect elongated image structures and determine their centerlines. We have also made a user-friendly software implementation of the method that is widely used in neurobiology. More recently we have written a review of methods and software tools for neuron analysis.

Convolution-Based Interpolation Methods for Medical Image Transformation

Medical Image Interpolation

Interpolation of digital data is required whenever the data needs to be resampled for further processing or analysis. In this age of ever-increasing digitization in the storage, processing, analysis, and communication of information, the need for accurate interpolation occurs in a wide variety of applications. Examples in the field of medical imaging include reconstruction of images from multiple projections, registration or alignment of images, and ray tracing of volumetric image data for visualization.

We have reviewed the developments in interpolation theory from the earliest times to the present age. In the process we have discovered a link between classical osculatory interpolation and modern convolution-based interpolation, and based on this we have discussed examples of cubic interpolation schemes not previously studied in signal and image processing. We have also proposed a new class of piecewise polynomial interpolation kernels of any order that can be derived in the same fashion as the traditional cubic interpolation kernel. Finally we have thoroughly evaluated the performance of many convolution-based interpolation methods for medical image transformation and found that spline interpolation generally yields the best performance in terms of the trade-off between computational cost and accuracy.
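The traditional cubic convolution kernel mentioned above can be written down compactly. The sketch below implements Keys' piecewise cubic kernel (with the free parameter a = -1/2, the third-order accurate choice) and uses it to resample a 1D signal; the function names and the nearest-edge border handling are illustrative.

```python
import math

def cubic_kernel(x, a=-0.5):
    """Piecewise cubic convolution kernel; a = -0.5 is Keys' choice,
    which makes the interpolation third-order accurate."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * (x**3 - 5 * x**2 + 8 * x - 4)
    return 0.0

def interpolate(samples, x):
    """Evaluate a uniformly sampled 1D signal at fractional position x
    by convolution with the cubic kernel over its four-sample support
    (indices are clamped at the signal borders)."""
    i0 = math.floor(x)
    value = 0.0
    for i in range(i0 - 1, i0 + 3):
        j = min(max(i, 0), len(samples) - 1)   # clamp at the borders
        value += samples[j] * cubic_kernel(x - i)
    return value

ramp = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

The kernel equals one at the origin and zero at all other integers, so the original samples are reproduced exactly, and between samples low-degree polynomial signals (such as the ramp above) are recovered without error.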

Enhanced Vessel Visualization and Quantification in 3D Rotational Angiography

Image Enhancement in 3DRA

Three-dimensional rotational angiography (3DRA) allows isotropic high-resolution 3D imaging of blood vessels for accurate visualization and quantification of vascular anomalies. Following a single injection of contrast material, a C-arm imaging system acquires a sequence of about 100 low-dose 2D images during a 180-degree rotation of the X-ray source-detector combination, after which a 3D image is computed from these projections. Due to the relatively high noise levels and artifactual background variations caused by surrounding tissue, image enhancement is highly desirable.

Using anthropomorphic vascular phantoms we have evaluated the effect of various linear and nonlinear image smoothing methods on the visualization and quantification of carotid stenoses and intracranial aneurysms in 3DRA. We have found that edge-enhancing anisotropic diffusion filtering is most suitable, in the sense that it does not increase the user-dependency of visualizations and quantifications as much as linear filtering methods, and it reduces noise near edges better than isotropic nonlinear diffusion, but at the cost of increased computation time and memory usage.
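The principle behind such nonlinear diffusion filters is to make the amount of smoothing depend on the local gradient: strong edges conduct little and are preserved, while low-contrast noise is diffused away. The sketch below shows the 1D Perona-Malik form of this idea (the edge-enhancing anisotropic variant additionally steers the smoothing along edges in 3D); names and parameter values are illustrative.

```python
def perona_malik_1d(signal, n_iter=20, k=10.0, dt=0.2):
    """Explicit 1D Perona-Malik diffusion with conductance
    g = 1 / (1 + (gradient / k)^2): gradients well below the contrast
    parameter k are smoothed, gradients well above it are preserved."""
    u = list(signal)
    for _ in range(n_iter):
        new = u[:]
        for i in range(1, len(u) - 1):
            ge, gw = u[i+1] - u[i], u[i-1] - u[i]   # one-sided gradients
            ce = 1.0 / (1.0 + (ge / k) ** 2)        # conductances
            cw = 1.0 / (1.0 + (gw / k) ** 2)
            new[i] = u[i] + dt * (ce * ge + cw * gw)
        u = new
    return u

# low-amplitude noise next to a high-contrast edge
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 100.0, 101.0, 100.0, 101.0, 100.0]
smoothed = perona_malik_1d(noisy)
```

After diffusion the small oscillations are flattened while the large step is essentially intact, which is why such filters reduce noise without blurring vessel boundaries.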

Patient Motion Registration and Correction in Digital Subtraction Angiography

Image Registration in DSA

Digital subtraction angiography (DSA) is a widely used imaging modality for visualizing blood vessels in the human body for diagnosis of stenoses and aneurysms as well as for visual catheter guidance during interventional procedures. Due to patient motion during image acquisition, the images taken at different time points are often misaligned, causing artifacts in the subtraction images that may hamper image interpretation. Thus image processing methods that can correct motion artifacts are very much needed.

We have reviewed retrospective motion correction methods for DSA summarizing two decades of research on the subject. In addition we have developed a fully automated image registration method to reduce motion artifacts that is computationally fast enough to be acceptable for clinical application. It uses an edge-based selection of control points, whose optimal displacements are computed by means of entropy-based template matching, from which the final warping of the images is computed in real time by graphics hardware. The method has been clinically validated on cerebral DSA images by comparison with manual motion correction by expert radiologists.
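The entropy criterion exploits the fact that a correctly aligned subtraction window contains few distinct gray values (a peaked difference histogram, hence low entropy), whereas misalignment spreads the histogram out. A simplified 1D sketch of selecting one control point's displacement this way follows (the clinical method operates on 2D windows; all names, window sizes, and the toy image pattern are illustrative):

```python
import math
from collections import Counter

def histogram_entropy(values, bin_width=1.0):
    """Shannon entropy (in bits) of the histogram of a list of values."""
    counts = Counter(int(v // bin_width) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_shift(mask, contrast, center, half_window=5, max_shift=3):
    """Entropy-based template matching around one control point: try all
    integer shifts of the contrast image and return the one whose
    subtraction window has minimal histogram entropy."""
    best_s, best_h = 0, float('inf')
    for s in range(-max_shift, max_shift + 1):
        diff = [contrast[i + s] - mask[i]
                for i in range(center - half_window, center + half_window + 1)]
        h = histogram_entropy(diff)
        if h < best_h:
            best_s, best_h = s, h
    return best_s

# the contrast image is the mask pattern displaced by two pixels
mask_img = [i % 7 for i in range(30)]
contrast_img = [mask_img[i - 2] for i in range(30)]
shift = best_shift(mask_img, contrast_img, center=15)
```

In the full method this search is repeated for every edge-based control point, and the resulting displacement field drives the final image warping.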

Copyright © 1996 - 2017 Erik Meijering