
Study examines use of deep machine learning for detection of diabetic retinopathy

November 29, 2016

In an evaluation of retinal photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy, according to a study published online by JAMA.

Among individuals with diabetes, the prevalence of diabetic retinopathy is approximately 29 percent in the United States. Most guidelines recommend annual screening for those with no retinopathy or mild diabetic retinopathy and repeat examination in 6 months for moderate diabetic retinopathy. Retinal photography with manual interpretation is a widely accepted screening tool for diabetic retinopathy.

Automated grading of diabetic retinopathy has potential benefits such as increasing the efficiency and coverage of screening programs, reducing barriers to access, and improving patient outcomes through earlier detection and treatment. To maximize the clinical utility of automated grading, an algorithm to detect referable diabetic retinopathy is needed. Machine learning (a discipline within computer science that focuses on teaching machines to detect patterns in data) has been leveraged for a variety of classification tasks, including automated classification of diabetic retinopathy. However, much of the work has focused on "feature engineering," which involves computing explicit features specified by experts, resulting in algorithms designed to detect specific lesions or predict the presence of any level of diabetic retinopathy. Deep learning is a machine learning technique that avoids such engineering and allows an algorithm to program itself by learning the most predictive features directly from the images, given a large data set of labeled examples, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.

In this study, Lily Peng, M.D., Ph.D., of Google Inc., Mountain View, Calif., and colleagues applied deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus (the interior lining of the eyeball, including the retina, optic disc, and the macula) photographs. A specific type of network optimized for image classification was trained using a data set of 128,175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 U.S. licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated using 2 separate data sets (EyePACS-1, Messidor-2), both graded by at least 7 U.S. board-certified ophthalmologists.
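The multiple grades per image described above must be collapsed into a single training label. As a minimal, hypothetical sketch (the study's actual aggregation rule may differ), one common approach takes the median of the graders' integer severity scores and marks an image referable when that consensus grade is moderate (2) or worse:

```python
from statistics import median

# Hypothetical severity scale: 0 = none, 1 = mild, 2 = moderate,
# 3 = severe, 4 = proliferative diabetic retinopathy.
REFERABLE_THRESHOLD = 2  # "moderate and worse" counts as referable

def consensus_grade(grades):
    """Collapse several graders' opinions into one integer grade."""
    return int(median(grades))

def is_referable(grades):
    """Referable DR: consensus grade of moderate (2) or worse."""
    return consensus_grade(grades) >= REFERABLE_THRESHOLD

# Each image was graded 3 to 7 times, per the study's protocol;
# the image IDs and grade values here are illustrative only.
grades_per_image = {
    "img_001": [0, 0, 1],        # at most mild -> not referable
    "img_002": [2, 3, 2, 2, 3],  # moderate/severe -> referable
}
labels = {name: is_referable(g) for name, g in grades_per_image.items()}
```

The resulting binary labels are what a classification network would be trained against.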

The EyePACS-1 data set consisted of 9,963 images from 4,997 patients (prevalence of referable diabetic retinopathy [RDR; defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both], 8 percent of fully gradable images); the Messidor-2 data set had 1,748 images from 874 patients (prevalence of RDR, 15 percent of fully gradable images). Use of the algorithm achieved high sensitivities (97.5 percent [EyePACS-1] and 96 percent [Messidor-2]) and specificities (93 percent and 94 percent, respectively) for detecting referable diabetic retinopathy.
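Sensitivity and specificity are simple ratios over the confusion matrix: sensitivity = TP / (TP + FN), the fraction of referable cases the algorithm flags, and specificity = TN / (TN + FP), the fraction of non-referable cases it correctly passes. The counts below are toy numbers chosen to reproduce the EyePACS-1 operating point reported above, not the study's actual confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 200 referable images with 195 flagged,
# and 800 non-referable images with 744 correctly passed.
sens, spec = sensitivity_specificity(tp=195, fn=5, tn=744, fp=56)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# prints: sensitivity 97.5%, specificity 93.0%
```

In screening, the operating threshold trades these two off: a threshold tuned for high sensitivity (few missed referable cases) accepts a somewhat lower specificity, as in the numbers reported for both validation sets.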

"These results demonstrate that deep neural networks can be trained, using large data sets and without having to specify lesion-based features, to identify diabetic retinopathy or diabetic macular edema in retinal fundus images with high sensitivity and high specificity. This automated system for the detection of diabetic retinopathy offers several advantages, including consistency of interpretation (because a machine will make the same prediction on a specific image every time), high sensitivity and specificity, and near instantaneous reporting of results," the authors write.

"Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment."
-end-
(doi:10.1001/jama.2016.17216; the study is available pre-embargo at the For the Media website)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

The JAMA Network Journals

Related Learning Articles:

Learning with light: New system allows optical 'deep learning'
A team of researchers at MIT and elsewhere has come up with a new approach to complex computations, using light instead of electricity.
Mount Sinai study reveals how learning in the present shapes future learning
The prefrontal cortex shapes memory formation by modulating hippocampal encoding.
Better learning through zinc?
Zinc is a vital micronutrient involved in many cellular processes: For example, in learning and memory processes, it plays a role that is not yet understood.
Deep learning and stock trading
A study undertaken by researchers at the School of Business and Economics at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has shown that computer programs using algorithms based on artificial intelligence are able to make profitable investment decisions.
Learning makes animals intelligent
The fact that animals can use tools, have self-control and certain expectations of life can be explained with the help of a new learning model for animal behavior.
Learning Morse code without trying
Researchers at the Georgia Institute of Technology have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear.
The adolescent brain is adapted to learning
Teenagers are often portrayed as seeking immediate gratification, but new work suggests that their sensitivity to reward could be part of an evolutionary adaptation to learn from their environment.
The brain watched during language learning
Researchers from Nijmegen, the Netherlands, have for the first time captured images of the brain during the initial hours and days of learning a new language.
Learning in the absence of external feedback
Rewards act as external factors that influence and reinforce learning processes.
New learning procedure for neural networks
Neural networks learn to link temporally dispersed stimuli.

Related Learning Reading:

Make It Stick: The Science of Successful Learning
by Peter C. Brown (Author), Henry L. Roediger III (Author), Mark A. McDaniel (Author)

To most of us, learning something "the hard way" implies wasted time and effort. Good teaching, we believe, should be creatively tailored to the different learning styles of students and should use strategies that make learning easier. Make It Stick turns fashionable ideas like these on their head. Drawing on recent discoveries in cognitive psychology and other disciplines, the authors offer concrete techniques for becoming more productive learners.

Memory plays a central role in our ability to carry out complex cognitive tasks, such as applying knowledge to problems never...


The Art of Learning: An Inner Journey to Optimal Performance
by Josh Waitzkin (Author)

In his riveting new book, The Art of Learning, Waitzkin tells his remarkable story of personal achievement and shares the principles of learning and performance that have propelled him to the top—twice.

Josh Waitzkin knows what it means to be at the top of his game. A public figure since winning his first National Chess Championship at the age of nine, Waitzkin was catapulted into a media whirlwind as a teenager when his father’s book Searching for Bobby Fischer was made into a major motion picture. After dominating the scholastic chess world for ten years, Waitzkin...


How Learning Works: Seven Research-Based Principles for Smart Teaching
by Susan A. Ambrose (Author), Michael W. Bridges (Author), Michele DiPietro (Author), Marsha C. Lovett (Author), Marie K. Norman (Author), Richard E. Mayer (Foreword)

Praise for How Learning Works

"How Learning Works is the perfect title for this excellent book. Drawing upon new research in psychology, education, and cognitive science, the authors have demystified a complex topic into clear explanations of seven powerful learning principles. Full of great ideas and practical suggestions, all based on solid research evidence, this book is essential reading for instructors at all levels who wish to improve their students' learning."
Barbara Gross Davis, assistant vice chancellor for educational development, University...


Deep Learning (Adaptive Computation and Machine Learning series)
by Ian Goodfellow (Author), Yoshua Bengio (Author), Aaron Courville (Author)

An introduction to a broad range of topics in deep learning, covering mathematical and conceptual background, deep learning techniques used in industry, and research perspectives.

"Written by three experts in the field, Deep Learning is the only comprehensive book on the subject."
-- Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX

Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from...


An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics)
by Gareth James (Author), Daniela Witten (Author), Trevor Hastie (Author), Robert Tibshirani (Author)

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics...


Visible Learning for Teachers: Maximizing Impact on Learning
by John Hattie (Author)

In November 2008, John Hattie’s ground-breaking book Visible Learning synthesised the results of more than fifteen years’ research involving millions of students and represented the biggest ever collection of evidence-based research into what actually works in schools to improve learning.

Visible Learning for Teachers takes the next step and brings those ground-breaking concepts to a completely new audience. Written for students, pre-service and in-service teachers, it explains how to apply the principles of Visible Learning to any classroom...


Visible Learning for Literacy, Grades K-12: Implementing the Practices That Work Best to Accelerate Student Learning (Corwin Literacy)
by Douglas Fisher (Author), Nancy Frey (Author), John Hattie (Author)

Ensure students demonstrate more than a year’s worth of learning during a school year

Renowned literacy experts Douglas Fisher and Nancy Frey work with John Hattie to apply his 15 years of research, identifying instructional routines that have the biggest impact on student learning, to literacy practices. These practices are "visible" because their purpose is clear, they are implemented at the right moment in a student’s learning, and their effect is tangible. 

Through dozens of classroom scenarios, learn how to use the right approach at the right time...


The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics)
by Trevor Hastie (Author), Robert Tibshirani (Author), Jerome Friedman (Author)

This book describes the important ideas in a variety of fields such as medicine, biology, finance, and marketing in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of colour graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and...


Bernie Kosar: Learning to Scramble
by Bernie Kosar (Author), Craig Stout (Author)

Any football fan knows that scrambling is the way a quarterback can gain more time and more opportunity. But anyone familiar with Bernie Kosar's football career knows he didn't beat you with his physical prowess. Possessing slow feet, an awkward throwing motion and unorthodox mechanics, the kid from tough, blue-collar Youngstown, Ohio, did not look like a prototype NFL quarterback. How he beat you (and he beat a lot of people over the course of his college and NFL career) was with a savant-like level of football intelligence and an indomitable will to win. Chronicling his rise from the hard-up...


Visible Learning for Mathematics, Grades K-12: What Works Best to Optimize Student Learning (Corwin Mathematics Series)
by John Hattie (Author), Douglas Fisher (Author), Nancy Frey (Author), Linda M. Gojak (Author), Sara Delano Moore (Author), William L. Mellman (Author)

Rich tasks, collaborative work, number talks, problem-based learning, direct instruction…with so many possible approaches, how do we know which ones work the best? In Visible Learning for Mathematics, six acclaimed educators assert it’s not about which one―it’s about when―and show you how to design high-impact instruction so all students demonstrate more than a year’s worth of mathematics learning for a year spent in school.

That’s a high bar, but with the amazing K-12 framework here, you choose the right approach at the right time, depending upon where...
