Diabetic Macular Edema Grading Based on Deep Neural Networks

Peer Reviewed

Yes

DOI

10.17077/omia.1055

Conference Location

Athens, Greece

Publication Date

October 21, 2016

Abstract

Diabetic Macular Edema (DME) is a major cause of vision loss in diabetes. Its early detection and treatment are therefore vital in the management of diabetic retinopathy. In this paper, we propose a new feature-learning approach for grading the severity of DME using color retinal fundus images. An automated DME diagnosis system based on the proposed feature-learning approach is developed to support early diagnosis of the disease and thus avert (or delay) its progression. It uses convolutional neural networks (CNNs) to identify and extract DME features automatically, without any user intervention. The developed prototype was trained and assessed on the publicly available MESSIDOR dataset of 1200 images. The preliminary results show an accuracy of 88.8%, a sensitivity of 74.7%, and a specificity of 96.5%. These results compare favorably to state-of-the-art findings, with the added benefit of an automatic feature-learning approach rather than a time-consuming handcrafted one.
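The abstract does not specify the network architecture or evaluation code, so the following is only a minimal sketch of the general idea: a small CNN learns features from fundus images end to end (no handcrafted descriptors), and accuracy, sensitivity, and specificity are computed from its predictions. The model name, the 128x128 input size, the binary DME/no-DME labeling, and the evaluate helper are illustrative assumptions, not details from the paper.

# Minimal sketch of CNN-based DME grading and metric computation.
# Not the authors' architecture; assumes binary labels (0 = no DME, 1 = DME)
# and RGB fundus images resized to 128x128.
import torch
import torch.nn as nn

class DMEGradingCNN(nn.Module):
    """Small convolutional feature learner followed by a linear classifier."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 128x128 input becomes 16x16 after three 2x2 poolings.
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # features learned from data, no handcrafting
        x = torch.flatten(x, 1)
        return self.classifier(x)

def evaluate(logits: torch.Tensor, labels: torch.Tensor):
    """Accuracy, sensitivity (recall on the DME class), and specificity."""
    preds = logits.argmax(dim=1)
    tp = ((preds == 1) & (labels == 1)).sum().item()
    tn = ((preds == 0) & (labels == 0)).sum().item()
    fp = ((preds == 1) & (labels == 0)).sum().item()
    fn = ((preds == 0) & (labels == 1)).sum().item()
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return accuracy, sensitivity, specificity

# Example usage on a dummy batch of ten images (stand-in for MESSIDOR data).
model = DMEGradingCNN()
images = torch.randn(10, 3, 128, 128)
labels = torch.randint(0, 2, (10,))
acc, sens, spec = evaluate(model(images), labels)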

Rights

Copyright © 2016 the authors

Included in

Ophthalmology Commons
