Date of Degree
PhD (Doctor of Philosophy)
Smith, Brian J.
First Committee Member
Oleson, Jacob J.
Second Committee Member
Cavanaugh, Joseph E.
Third Committee Member
Beichel, Reinhard R.
Fourth Committee Member
Cowles, Kathryn M.
With the advancement of technology, medical imaging has become a fast-growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone from a 2-D X-ray image. More complicated questions involving 3-D scans, such as computed tomography (CT), can be much more difficult to answer. For example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary, requires careful delineation of the different structures in the image, e.g., distinguishing tumor from normal tissue; this process is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time consuming, and manual segmentation results can differ drastically between, and even within, radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, along with the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Widespread use of segmentation algorithms is limited, however, by the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth, which allows both manual and semiautomated segmentations to be evaluated against that consensus. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy and between- and within-reader variability.
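To illustrate the fusion idea described above, the following is a minimal sketch of per-voxel majority voting, a simple baseline for combining multiple binary segmentations into a consensus mask. Note this toy example is not STAPLE (which estimates a consensus via an EM algorithm that weights each reader by estimated sensitivity and specificity); the function name and data are hypothetical.

```python
import numpy as np

def majority_vote_fusion(segmentations):
    """Fuse a list of equally shaped binary masks (0/1 arrays) into a
    consensus mask: a voxel is labeled 1 if more than half of the
    readers labeled it 1. Hypothetical helper for illustration only."""
    stack = np.stack(segmentations, axis=0)   # readers x voxels
    return (stack.mean(axis=0) > 0.5).astype(np.uint8)

# Three hypothetical readers' masks on a tiny 1-D "image"
r1 = np.array([1, 1, 0, 0])
r2 = np.array([1, 0, 0, 0])
r3 = np.array([1, 1, 1, 0])
consensus = majority_vote_fusion([r1, r2, r3])  # -> [1, 1, 0, 0]
```

Majority voting treats all readers as equally reliable; fusion models such as STAPLE, and the hierarchical Bayesian model proposed in this dissertation, relax that assumption by modeling reader-specific performance.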
The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset in order to ensure that they mimic reality. Second, a hierarchical Bayesian fusion model is proposed to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE; it is also applied to real datasets, and the consensus ground truth estimates are compared across fusion models. Third, a hierarchical Bayesian performance model is proposed to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and the performance estimates are summarized.
Fusion, Ground Truth, Imaging, Segmentation, Validation, Variability
xiii, 214 pages
Includes bibliographical references (pages 189-214).
Copyright © 2017 Andrew Emile Ghattas
Ghattas, Andrew Emile. "Medical imaging segmentation assessment via Bayesian approaches to fusion, accuracy and variability estimation with application to head and neck cancer." PhD (Doctor of Philosophy) thesis, University of Iowa, 2017.