Date of Degree
PhD (Doctor of Philosophy)
Electrical and Computer Engineering
Abràmoff, Michael D
Garvin, Mona K
First Committee Member
Abràmoff, Michael D
Second Committee Member
Garvin, Mona K
Third Committee Member
Reinhardt, Joseph M
Fourth Committee Member
Scheetz, Todd E
Fifth Committee Member
Analysis of the retinal vasculature plays an important role in the diagnosis of ophthalmological diseases, as well as of general disorders that manifest on the retina. The fundus photograph is a 2-D color imaging modality of the retina, widely used in modern ophthalmology clinics due to its relatively low cost and its non-invasive access to the retina. However, due to the complexity of the retinal vasculature presented in the image and the large variation in image quality, no automated method is able to reconstruct the retinal vasculature (i.e., construct arteriovenous trees) satisfactorily, which prevents its analysis on large-scale clinical datasets.
In this thesis, we present a systematic and complete study of the automatic construction of the retinal vasculature in fundus photographs and apply it to a clinical dataset. First, a preliminary study is conducted to detect and classify important landmarks of the retinal vasculature using a machine learning method. The evaluation of this method reveals the difficulty of identifying each landmark as an independent target. A novel, more global method is then proposed to construct retinal arteriovenous trees (A/V trees). The strategy of the proposed method is to build an over-connected vessel network, separate it into vascular trees, and then classify those trees into A/V trees. In particular, by taking advantage of specific properties of the retinal vasculature, global and local information is combined to recognize landmarks of the vasculature. Instead of recognizing each landmark independently, as other methods do, this method considers the relationships between landmarks in a more global manner and thus recognizes them simultaneously and globally. With a special graph design, each landmark is associated with multiple possible configurations and their costs, and a near-optimal solution is selected by jointly minimizing the costs of the landmarks and a global cost over the whole vascular network. Once the landmarks are recognized, the A/V trees are easily inferred with a pixel classification method. In this way, local noise in the images and local errors introduced during pre-processing are corrected to some degree, and small vessels that are difficult to classify locally can also be recognized. The proposed method is compared with an existing method, and the evaluation demonstrates its superiority.
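The idea of recognizing landmarks jointly rather than independently can be illustrated with a small, hypothetical sketch. The landmark names, candidate labels, costs, and penalty value below are invented for illustration and do not come from the thesis; the actual method uses a richer graph design, but the principle (minimize local configuration costs plus a global consistency term) is the same:

```python
from itertools import product

# Hypothetical toy data: each vascular landmark has candidate configurations
# (here, simple artery/vein labels) with local costs; a global penalty is
# added when landmarks joined by a vessel segment choose inconsistent labels.
landmarks = {
    "L1": [("artery", 0.2), ("vein", 0.9)],
    "L2": [("artery", 0.6), ("vein", 0.3)],
    "L3": [("artery", 0.4), ("vein", 0.5)],
}
adjacent = [("L1", "L3"), ("L2", "L3")]  # pairs connected by a segment

def total_cost(assignment):
    # Local term: sum of the chosen configuration costs.
    cost = sum(c for _, c in assignment.values())
    # Global term: penalize label changes along a connecting segment.
    cost += sum(0.5 for a, b in adjacent
                if assignment[a][0] != assignment[b][0])
    return cost

names = list(landmarks)
best = min(
    (dict(zip(names, combo)) for combo in product(*landmarks.values())),
    key=total_cost,
)
print({k: v[0] for k, v in best.items()})
```

In this toy example, landmark L2 locally prefers "vein" (cost 0.3 vs. 0.6), but the global consistency penalty makes the all-"artery" assignment cheaper overall, mirroring how a global formulation can override noisy local evidence.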
To demonstrate its potential applicability, we apply the proposed method to data from a cohort study of HIV-infected patients under treatment. A new metric for analyzing retinal vessel width is developed based on the A/V trees built with the proposed method, and it is compared with a conventional metric. Statistical analysis reveals the advantages of the new metric, indicating the benefit of the proposed method and its potential application to large datasets.
The retinal vasculature is an important target in ophthalmology clinics because it provides insight into many ophthalmological diseases, as well as general disorders that manifest on the retina. The fundus photograph is a 2-D color imaging modality of the retina, widely used in modern ophthalmology clinics due to its relatively low cost and its non-invasive access to the retina. Analysis of the retinal vasculature is time-consuming and demands the effort of human experts; given limited human resources, the rapidly growing volume of clinical data collected from patients demands that the analysis be automated. However, due to the complexity of the vasculature presented in the image and the variation in image quality, no automated method is able to reconstruct arteriovenous trees satisfactorily, even though this is the most important step of the automated analysis.
In this work, we conduct a systematic and complete study of the automatic construction of retinal arteriovenous trees (A/V trees) in fundus photographs. This requires the classification of vessels into arteries and veins, as well as the detection of connections between vessels. First, a conventional method is applied to automatically detect and classify an important type of landmark, the bifurcation. The result reveals the difficulty of constructing A/V trees by identifying each bifurcation independently. A novel, more global method is then proposed. The strategy of the proposed method is to build an over-connected vessel network, separate it into vascular trees, and then classify those trees into A/V trees. In particular, by taking advantage of properties of the retinal vasculature, global and local information about the retinal vessels is combined to recognize landmarks of the vasculature. Instead of recognizing each landmark independently, as conventional methods do, we consider the relationships between landmarks and recognize them simultaneously and globally. After the landmarks are recognized, the types of the trees can be easily inferred and the A/V trees constructed. The proposed method is compared with an existing method, and the evaluation shows its merits.
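The "over-connected network separated into trees" step can be sketched, under strong simplifying assumptions, as a minimum spanning forest: keep the cheapest candidate connections that do not create a cycle. The edge names and weights below are invented for illustration, and the thesis's actual separation uses a more elaborate cost design; this is only a rough sketch of the idea:

```python
# Hypothetical over-connected vessel network: nodes are vessel segments,
# edges are candidate connections with a cost (lower = more plausible).
edges = [
    ("a", "b", 0.1), ("b", "c", 0.2), ("a", "c", 0.8),  # cycle a-b-c
    ("d", "e", 0.3),                                    # separate component
]

# Union-find over the nodes, used to detect cycles.
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

# Kruskal's algorithm: scan edges by increasing cost, keep acyclic ones.
forest = []
for u, v, w in sorted(edges, key=lambda e: e[2]):
    ru, rv = find(u), find(v)
    if ru != rv:            # adding this edge keeps the network acyclic
        parent[ru] = rv
        forest.append((u, v, w))

print(forest)  # the surviving edges form separate vascular trees
```

Here the implausible connection ("a", "c", 0.8) is discarded because the cheaper edges already connect those segments, leaving two trees (a-b-c and d-e) that could then be classified as artery or vein.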
In addition, to demonstrate its applicability, we use the method to analyze changes in vessel width in HIV-infected patients under highly active antiretroviral therapy (HAART). With A/V trees constructed using the proposed method, vessel widths are analyzed using a novel method and compared with widths calculated using a conventional method. The comparison reveals the advantages of the new method, indicating the potential applicability of the proposed A/V tree construction method.
Fundus Image, graph, optimization, retinal vasculature
xix, 170 pages
Includes bibliographical references (pages 155-170).
Copyright 2016 Qiao Hu
Hu, Qiao. "Automatic construction of arterial and venous vascular trees in fundus images." PhD (Doctor of Philosophy) thesis, University of Iowa, 2016.