About

Introduction
Workshop
Framework
Categories
Data format
Submission format

Introduction

Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography (CTA) data is relevant in clinical practice. Although numerous methods have been presented for this purpose, until now no standardized evaluation methodology has been published to reliably evaluate and compare the performance of existing or newly developed coronary artery centerline extraction algorithms.

This evaluation framework provides a large-scale standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms.

Well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, a database containing thirty-two cardiac CTA datasets with a corresponding reference standard is described and made available, and several methods are provided for extracting statistics from the evaluation results. Currently, 24 coronary artery centerline extraction methods have been evaluated with the framework.

Still open for submission

This study started with the MICCAI 2008 workshop "3D Segmentation in the Clinic: A Grand Challenge II" at the 11th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September 2008. After the workshop we have kept, and will keep, the website open for submissions. The thirty-two cardiac CTA datasets, and the corresponding reference standard centerlines for the training data, are available for download for anyone who wishes to validate their algorithm. Extracted centerlines can be submitted and the obtained results can be used in a publication.

Evaluation framework

Details about the evaluation framework (data, reference standard, measures, scores and ranking) can be found in M. Schaap, C. Metz, et al., Standardized Evaluation Methodology and Reference Database for Evaluating Coronary Artery Centerline Extraction Algorithms, Medical Image Analysis, 2009.

Challenge categories

We distinguish three categories of coronary artery centerline extraction algorithms: automatic extraction methods, methods with minimal user interaction, and interactive extraction methods.

Category 1: automatic extraction

Automatic extraction methods find the centerlines of coronary arteries without user interaction. In order to evaluate the performance of automatic coronary artery centerline extraction, two points per vessel are provided to identify the coronary artery of interest:

  • Point A: a point inside the distal part of the vessel; this point unambiguously defines the vessel to be tracked;
  • Point B: a point approximately 3 cm (measured along the centerline) distal of the start point of the centerline.

Point A should be used for selecting the appropriate centerline. If the automatic extraction result does not contain centerlines near point A, point B can be used to select the appropriate centerline. Points A and B are only meant for selecting the right centerline; it is not allowed to use them as input to the extraction algorithm.
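The selection step described above can be sketched as follows. This is a minimal illustration, not part of the framework itself; it assumes the automatically extracted centerlines are available as arrays of world-coordinate points:

```python
import numpy as np

def select_centerline(centerlines, point, max_dist=None):
    """Pick the candidate centerline whose closest point lies nearest
    to the given selection point (e.g. point A).

    centerlines : list of (N_i, 3) arrays of world coordinates
    point       : (3,) array-like, the selection point
    max_dist    : optional cutoff; return None if no centerline passes
                  within this distance (so point B can be tried next)
    """
    best_idx, best_dist = None, np.inf
    for i, line in enumerate(centerlines):
        # Distance from the selection point to the closest point on this line.
        d = np.linalg.norm(np.asarray(line) - np.asarray(point), axis=1).min()
        if d < best_dist:
            best_idx, best_dist = i, d
    if max_dist is not None and best_dist > max_dist:
        return None
    return best_idx
```

Note that the points are used only after extraction, to pick one centerline from the result; they are never passed to the extraction algorithm itself.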

Category 2: extraction with minimal user interaction

Extraction methods with minimal user interaction are allowed to use one point per vessel as input for the algorithm. This can be either one of the following points:

  • Point A or B, as defined above;
  • Point S: the start point of the centerline;
  • Point E: the end point of the centerline;
  • Point U: any manually defined point.

Points A, B, S and E are provided with the data. Furthermore, in case the method obtains a vessel tree from the initial point, point A or B may be used after the centerline determination to select the appropriate centerline.

Category 3: interactive extraction

All methods that require more user interaction than one point per vessel as input belong to category 3. Such methods may use, for example, both points S and E from category 2, a series of manually clicked positions, or one point and a user-defined threshold.

Data format

Directory structure

The training and testing data are stored in archives with one directory per dataset. The training datasets are numbered 00 to 07 and are stored in the directories dataset00 to dataset07; the testing datasets are stored in the directories dataset08 to dataset31. Each directory datasetXX contains an image file, named imageXX.mhd and imageXX.raw, and four vessel directories, named vessel0, vessel1, vessel2, and vessel3. These directories contain the reference standard and points A, B, S, and E for each vessel.

Image data format

All image data is stored in the Meta format: an ASCII-readable header plus a separate raw image data file. This format is ITK compatible; full documentation is available at http://www.itk.org/Wiki/MetaIO. An application that can read the data is ITK-SNAP (http://www.itksnap.org/). If you want to write your own code to read the data, note that the header file lists the dimensions of each image. In the raw file the voxel values are stored consecutively, with the index running first over x, then y, then z. The pixel type is unsigned short. A gray value (GV) of 0 corresponds to -1024 Hounsfield units (HU) and a gray value of 1024 corresponds to 0 HU, i.e. HU(x) = GV(x) - 1024.
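A minimal sketch of reading a raw file directly, assuming the dimensions (the DimSize line of the .mhd header) are already known; the function names are illustrative, not part of any framework API:

```python
import numpy as np

def gv_to_hu(gv):
    # Convert stored gray values (unsigned short) to Hounsfield units:
    # HU(x) = GV(x) - 1024, so GV 0 -> -1024 HU and GV 1024 -> 0 HU.
    return np.asarray(gv, dtype=np.int32) - 1024

def load_raw(path, dims):
    # dims = (nx, ny, nz) as listed in the .mhd header.
    # Values are stored with x running fastest, then y, then z, so the
    # flat file maps onto a C-ordered array of shape (nz, ny, nx).
    nx, ny, nz = dims
    data = np.fromfile(path, dtype=np.uint16).reshape(nz, ny, nx)
    return gv_to_hu(data)
```

In practice a MetaIO-aware library (e.g. ITK) can parse the header for you; the sketch above only shows the voxel ordering and the gray-value-to-HU conversion stated in the text.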

Reference standard files

The files named reference.txt contain the reference standard path for each vessel. Each line holds one path point: the x-, y-, and z-coordinate in world coordinates, the radius at that point (ri), and, in the case of the averaged reference standard, the inter-observer variability at that position (ioi). Points are listed from the most proximal to the most distal point of the vessel. The voxel coordinate of a point can be calculated by dividing its world coordinate by the voxel size of the image; the voxel size can be found in the ElementSpacing line of the .mhd file. The coordinates (0, 0, 0) correspond to the border of the first voxel, not to its center. The center of the first voxel therefore has the coordinates (ElementSpacing[0]/2, ElementSpacing[1]/2, ElementSpacing[2]/2). A typical reference.txt file looks like this:

x0 y0 z0 r0 io0
x1 y1 z1 r1 io1
x2 y2 z2 r2 io2
x... y... z... r... io...
xn yn zn rn ion
where n is the number of points on the path.
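The file layout and coordinate conversion above can be sketched as follows; a minimal illustration with hypothetical helper names, assuming the file contents are available as a string:

```python
import numpy as np

def parse_reference(text):
    # Each line: x y z radius [inter_observer]; the last column is only
    # present for the averaged reference standard.
    pts = np.array([[float(v) for v in line.split()]
                    for line in text.strip().splitlines()])
    io = pts[:, 4] if pts.shape[1] > 4 else None
    return pts[:, :3], pts[:, 3], io

def world_to_voxel(points, spacing):
    # (0, 0, 0) is the border of the first voxel, so the continuous voxel
    # coordinate is simply world / spacing; the center of the first voxel
    # maps to (0.5, 0.5, 0.5).
    return np.asarray(points) / np.asarray(spacing)
```

The spacing argument corresponds to the ElementSpacing values from the .mhd header.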

Point files

The files pointA.txt, pointB.txt, pointS.txt, and pointE.txt contain points A, B, S, and E, respectively, for each vessel. Each file contains three values, corresponding to the x-, y-, and z-coordinate of the point.

Submission format

A participant should create an archive (.rar, .tar.gz, .tar, and .zip files are supported) that mirrors the directory structure of the training and testing data. It should contain a directory for each dataset, named dataset00 to dataset07 when submitting results on the training data, and dataset00 to dataset31 when submitting results on the testing data. Note that testing submissions should include all datasets (including the training data). Each directory should contain four subdirectories, named vessel0 to vessel3, each with a file called result.txt. This file should contain the extracted centerline, one point per line, ordered from proximal to distal. Each point should be described by three values corresponding to its x-, y-, and z-coordinate. These points should be in world coordinates, as in the input point and reference files.

An example testing submission archive:

\dataset00\vessel0\result.txt
\dataset00\vessel1\result.txt
\dataset00\vessel2\result.txt
\dataset00\vessel3\result.txt
\dataset01\vessel0\result.txt
\dataset01\vessel1\result.txt
\dataset01\vessel2\result.txt
\dataset01\vessel3\result.txt
...
\dataset31\vessel0\result.txt
\dataset31\vessel1\result.txt
\dataset31\vessel2\result.txt
\dataset31\vessel3\result.txt