
Study examines use of deep machine learning for detection of diabetic retinopathy

November 29, 2016

In an evaluation of retinal photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy, according to a study published online by JAMA.

Among individuals with diabetes, the prevalence of diabetic retinopathy is approximately 29 percent in the United States. Most guidelines recommend annual screening for those with no retinopathy or mild diabetic retinopathy and repeat examination in 6 months for moderate diabetic retinopathy. Retinal photography with manual interpretation is a widely accepted screening tool for diabetic retinopathy.

Automated grading of diabetic retinopathy has potential benefits such as increasing efficiency and coverage of screening programs; reducing barriers to access; and improving patient outcomes by providing early detection and treatment. To maximize the clinical utility of automated grading, an algorithm to detect referable diabetic retinopathy is needed. Machine learning (a discipline within computer science that focuses on teaching machines to detect patterns in data) has been leveraged for a variety of classification tasks, including automated classification of diabetic retinopathy. However, much of the work has focused on "feature engineering," which involves computing explicit features specified by experts, resulting in algorithms designed to detect specific lesions or to predict the presence of any level of diabetic retinopathy. Deep learning is a machine learning technique that avoids such engineering and allows an algorithm to program itself by learning the most predictive features directly from the images, given a large data set of labeled examples, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.
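The distinction above, between hand-coding rules and letting an algorithm learn predictive features from labeled examples, can be illustrated with a toy sketch. This is not the study's model (the authors trained a deep convolutional network on over 128,000 retinal images); here a minimal classifier, with invented two-number "images," learns by gradient descent which feature actually predicts the label, with no rule specified by hand:

```python
import math
import random

# Toy illustration (not the study's model): a classifier that learns its own
# decision rule from labeled examples via gradient descent, instead of
# applying expert-specified rules. Each hypothetical "image" is reduced to
# two numeric features; labels mark referable (1) vs. non-referable (0).
random.seed(0)

def make_example():
    label = random.randint(0, 1)
    # Positive examples have systematically higher values of feature 0.
    f0 = random.gauss(1.5 if label else 0.0, 0.5)
    f1 = random.gauss(0.0, 0.5)  # uninformative noise feature
    return (f0, f1), label

data = [make_example() for _ in range(400)]

w = [0.0, 0.0]  # learned weights, both start at zero
b = 0.0
lr = 0.1
for _ in range(200):                      # gradient-descent epochs
    for (f0, f1), y in data:
        z = w[0] * f0 + w[1] * f1 + b
        p = 1.0 / (1.0 + math.exp(-z))    # sigmoid probability
        err = p - y
        w[0] -= lr * err * f0
        w[1] -= lr * err * f1
        b -= lr * err

# The model discovers on its own that feature 0 is predictive:
correct = sum(
    (1.0 / (1.0 + math.exp(-(w[0] * f0 + w[1] * f1 + b))) > 0.5) == bool(y)
    for (f0, f1), y in data
)
accuracy = correct / len(data)
print(round(accuracy, 2))
```

No one told the model which feature mattered; the weight on the informative feature grows large while the weight on the noise feature stays near zero. Deep networks extend the same principle to millions of parameters learned directly from pixels.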

In this study, Lily Peng, M.D., Ph.D., of Google Inc., Mountain View, Calif., and colleagues applied deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus (the interior lining of the eyeball, including the retina, optic disc, and the macula) photographs. A specific type of network optimized for image classification was trained using a data set of 128,175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 U.S. licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated using 2 separate data sets (EyePACS-1, Messidor-2), both graded by at least 7 U.S. board-certified ophthalmologists.

The EyePACS-1 data set consisted of 9,963 images from 4,997 patients (prevalence of referable diabetic retinopathy [RDR; defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both], 8 percent of fully gradable images); the Messidor-2 data set had 1,748 images from 874 patients (prevalence of RDR, 15 percent of fully gradable images). Use of the algorithm achieved high sensitivities (97.5 percent [EyePACS-1] and 96 percent [Messidor-2]) and specificities (93 percent and 94 percent, respectively) for detecting referable diabetic retinopathy.
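The two metrics reported above have simple definitions: sensitivity is the fraction of truly referable cases the algorithm flags, and specificity is the fraction of non-referable cases it correctly clears. A small sketch with invented counts (chosen only to reproduce the EyePACS-1 percentages; these are not the study's actual tallies) shows the arithmetic:

```python
# Hypothetical confusion-matrix counts, for illustration only.
tp, fn = 975, 25    # referable cases: correctly flagged vs. missed
tn, fp = 930, 70    # non-referable cases: correctly cleared vs. over-called

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate

print(f"sensitivity = {sensitivity:.1%}")
print(f"specificity = {specificity:.1%}")
```

The trade-off between the two is set by the algorithm's operating threshold: a lower threshold catches more disease (higher sensitivity) at the cost of more false referrals (lower specificity).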

"These results demonstrate that deep neural networks can be trained, using large data sets and without having to specify lesion-based features, to identify diabetic retinopathy or diabetic macular edema in retinal fundus images with high sensitivity and high specificity. This automated system for the detection of diabetic retinopathy offers several advantages, including consistency of interpretation (because a machine will make the same prediction on a specific image every time), high sensitivity and specificity, and near instantaneous reporting of results," the authors write.

"Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment."

-end-

(doi:10.1001/jama.2016.17216; the study is available pre-embargo at the For the Media website)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

The JAMA Network Journals

