
Can AI Detect Diabetic Retinopathy More Accurately?

Jan 26, 2019
 
Editor: Joy Pape, MSN, FNP-C, CDE, WOCN, CFCN, FAADE

Author: Annahita Forghan, Pharm.D. Candidate 2019, LECOM College of Pharmacy

Automated image analysis is being studied to help detect diabetic retinopathy on a larger scale.

Lately, Convolutional Neural Networks (CNNs) have been used to automatically detect exudates in the eye’s retina, helping identify diabetic retinopathy (DR) and, in turn, diagnose diabetes. Their accuracy, however, has been limited. If a person has DR and it is not caught in time, it can lead to vision impairment and blindness, and the risk of these outcomes can be cut roughly in half when DR is identified early. Fundus imaging is used to find exudates, but manual detection takes a long time. If this process could be done automatically and accurately, more of the population could be screened, including patients examined at a distance, since the technology could be used remotely.1

CNN models have limitations, such as requiring large volumes of training data. In this study, the CNN, an automatic deep learning (DL) technique, was compared with other classifiers, a pre-trained Residual Network (ResNet-50) and Discriminative Restricted Boltzmann Machines (DRBM), to see how well each could detect diabetic retinopathy.1

The CNN performed both feature extraction and classification, with neurons in its final layers corresponding to the target classes. The pre-trained ResNet-50 did not need a large amount of data for training. In this study, the ResNet-50’s softmax layer was replaced by three different supervised classifiers: Support Vector Machine (SVM), Optimum-Path Forest (OPF), and k-Nearest Neighbors (KNN). The Discriminative Restricted Boltzmann Machines are unsupervised models structured as bipartite graphs, with connections running only between their two layers, which reduces error across the data; an additional layer was added that encodes each input sample’s label using one-hot encoding.1
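To make the ResNet-50 + SVM idea concrete, here is a minimal sketch in Python of extracting deep features from a pre-trained ResNet-50 (with its softmax layer removed) and training an SVM on them. It assumes PyTorch/torchvision and scikit-learn and uses placeholder names such as patch_tensor and labels; it is an illustration of the general technique, not the authors’ actual pipeline.

```python
# Sketch only: pre-trained ResNet-50 as a feature extractor, with an SVM
# replacing the softmax layer. Placeholder data; not the study's actual code.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Load ResNet-50 pre-trained on ImageNet and drop its final fully-connected
# (softmax) layer so it outputs 2048-dimensional feature vectors.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()
resnet.eval()

def extract_features(patches):
    """patches: (N, 3, H, W) RGB tensor, resized and normalized for ResNet-50."""
    with torch.no_grad():
        return resnet(patches).numpy()

# Hypothetical usage: `patch_tensor` holds Exudate / Non-exudate patches and
# `labels` their 0/1 classes (in the study, drawn from DIARETDB1 and e-Ophtha).
# features = extract_features(patch_tensor)
# svm = SVC(kernel="rbf").fit(features, labels)
```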

In this study, two databases were used: DIARETDB1 and e-Ophtha. Patches of size 25×25 pixels, with red, green, and blue (RGB) color channels, were extracted and labeled as Exudate or Non-exudate; the non-exudate patches contained vessels, background tissue, and optic nerve heads. A “10-fold cross-validation with ten runs technique” was used to evaluate each method. The performance of the different methods was compared on overall accuracy (ACC), sensitivity (SN), and specificity (SP): “For DIARETDB1 database, Resnet-50 + SVM achieved the best sensitivity and accuracy of 0.99 and 98.2%, respectively. Resnet-50 + OPF obtained the highest specificity (0.99) compared to Resnet-50 + SVM, Resnet-50 + KNN and CNN model with the specificities of 0.96, 0.95 and 0.91, respectively. The Resnet-50 + SVM model also performed the best for e-Ophtha database.” Results were similar across the two databases.1
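As a rough illustration of how such an evaluation works, the sketch below runs stratified 10-fold cross-validation on patch features and computes accuracy, sensitivity, and specificity from a confusion matrix. The classifier settings and arrays are assumptions for illustration, not the study’s exact protocol.

```python
# Sketch: 10-fold cross-validation with accuracy (ACC), sensitivity (SN),
# and specificity (SP) computed from a confusion matrix. Illustrative only.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def evaluate(features, labels, n_splits=10):
    """features: patch feature array; labels: 1 = Exudate, 0 = Non-exudate."""
    accs, sens, specs = [], [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True)
    for train_idx, test_idx in skf.split(features, labels):
        clf = SVC(kernel="rbf").fit(features[train_idx], labels[train_idx])
        pred = clf.predict(features[test_idx])
        tn, fp, fn, tp = confusion_matrix(labels[test_idx], pred, labels=[0, 1]).ravel()
        accs.append((tp + tn) / (tp + tn + fp + fn))  # overall accuracy (ACC)
        sens.append(tp / (tp + fn))                   # sensitivity (SN)
        specs.append(tn / (tn + fp))                  # specificity (SP)
    return np.mean(accs), np.mean(sens), np.mean(specs)
```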

It was also shown that “ResNet-50 with Support Vector Machines outperformed other networks….” The best results came from ResNet-50 + SVM, with significantly better sensitivity and specificity values (“0.99 and 0.96 respectively”), supporting the use of ResNet-50 for evaluating fundus images to detect DR exudates in a larger population.1

The weaker performance of the other methods may have been influenced by factors such as data preparation and the choice of color channels used for visual representation. Further studies may examine these limitations.1

Practice Pearls:

  • Pre-trained deep-learning methods may be able to detect exudates in DR automatically and more accurately. This study set out to find more accurate techniques for screening larger populations in order to prevent complications such as visual impairment.
  • ResNet-50 has the advantage of not requiring a large training dataset. ResNet-50 + SVM outperformed the other methods, with significantly better sensitivity and specificity.
  • ResNet-50 + SVM can detect exudates in retinal images automatically, with faster and more accurate estimations. This technique could therefore contribute to broader and more timely diagnosis of DR and prevention of its complications.

References:

Aliahmad B, Carvalho T, Khojasteh P, Kumar DK, Papa JP, Passos Júnior LA, Rezende E. Exudate detection in fundus images using deeply-learnable features. Computers in Biology and Medicine. 2018. https://www.ncbi.nlm.nih.gov/pubmed/30439600. Accessed January 14, 2019.
