Artificial intelligence becomes life-long learner with new framework

May 20, 2019

RESEARCH TRIANGLE PARK, N.C. (May 20, 2019) - A U.S. Army project has developed a new framework for deep neural networks that allows artificial intelligence systems to better learn new tasks while forgetting less of what they have learned from previous tasks.

The North Carolina State University researchers, funded by the Army, have also demonstrated that using the framework to learn a new task can make the AI better at performing previous tasks, a phenomenon called backward transfer.

"The Army needs to be prepared to fight anywhere in the world so its intelligent systems also need to be prepared," said Dr. Mary Anne Fields, program manager for Intelligent Systems at Army Research Office, an element of U.S. Army Combat Capabilities Development Command's Army Research Lab. "We expect the Army's intelligent systems to continually acquire new skills as they conduct missions on battlefields around the world without forgetting skills that have already been trained. For instance, while conducting an urban operation, a wheeled robot may learn new navigation parameters for dense urban cities, but it still needs to operate efficiently in a previously encountered environment like a forest."

The research team proposed a new framework, called Learn to Grow, for continual learning, which decouples network structure learning and model parameter learning. In experimental testing, it outperformed previous approaches to continual learning.

"Deep neural network AI systems are designed for learning narrow tasks," said Xilai Li, a co-lead author of the paper and a Ph.D. candidate at NC State. "As a result, one of several things can happen when learning new tasks: systems can forget old tasks when learning new ones, which is called catastrophic forgetting; systems can forget some of the things they knew about old tasks, while not learning to do new ones as well; or systems can fix old tasks in place while adding new tasks - which limits improvement and quickly leads to an AI system that is too large to operate efficiently. Continual learning, also called lifelong learning or learning-to-learn, is trying to address the issue."

To understand the Learn to Grow framework, think of deep neural networks as a pipe filled with multiple layers. Raw data goes into the top of the pipe, and task outputs come out the bottom. Every "layer" in the pipe is a computation that manipulates the data in order to help the network accomplish its task, such as identifying objects in a digital image. There are multiple ways of arranging the layers in the pipe, which correspond to different "architectures" of the network.
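The "pipe of layers" picture can be sketched as a minimal feed-forward pass. This is an illustration only; the layer sizes, the ReLU activation, and the use of NumPy are assumptions for the sketch, not details from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A "pipe" of three layers: raw data enters the top, a task output
# comes out the bottom. Each layer is a computation (here, an affine
# map followed by a ReLU) that transforms the data on its way down.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((8, 16)), np.zeros(16)),   # layer 1
    (rng.standard_normal((16, 16)), np.zeros(16)),  # layer 2
    (rng.standard_normal((16, 4)), np.zeros(4)),    # layer 3 -> 4 task outputs
]

def forward(x, layers):
    for W, b in layers:
        x = relu(x @ W + b)
    return x

x = rng.standard_normal((1, 8))     # one raw input sample
print(forward(x, layers).shape)     # (1, 4)
```

Rearranging which layers appear, and in what order, yields the different "architectures" the passage describes.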

When asking a deep neural network to learn a new task, the Learn to Grow framework begins by conducting an explicit neural architecture search. What this means is that as the network comes to each layer in its system, it can decide to do one of four things: skip the layer; use the layer in the same way that previous tasks used it; attach a lightweight adapter to the layer, which modifies it slightly; or create an entirely new layer.
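The per-layer decision above defines the search space. As a rough sketch (the actual framework explores this space with a differentiable search method rather than the brute-force enumeration shown here, which is an assumption made for illustration):

```python
from itertools import product

# The four per-layer choices described above. For a network with L
# shared layers, a candidate topology assigns one choice to each layer.
CHOICES = ("skip", "reuse", "adapter", "new")

def candidate_topologies(num_layers):
    """Enumerate every assignment of a choice to each layer."""
    return list(product(CHOICES, repeat=num_layers))

topologies = candidate_topologies(3)
print(len(topologies))  # 4 ** 3 = 64 candidate topologies
print(topologies[0])    # ('skip', 'skip', 'skip')
```

Even for this toy three-layer network the space has 64 candidates, which is why the framework treats the search as an optimization rather than trying each topology in turn.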

This architecture optimization effectively lays out the best topology, or series of layers, needed to accomplish the new task. Once this is complete, the network uses the new topology to train itself on how to accomplish the task - just like any other deep learning AI system.
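The decoupling of structure from parameters can be sketched as follows. Under this (simplified) reading, only the parameters introduced for the new task are trained, while layers reused as-is keep the weights learned on earlier tasks; the function and variable names are hypothetical:

```python
def trainable_parameters(topology, shared, task_specific):
    """Return the parameters updated when training the new task.

    Layers marked "reuse" stay frozen with their old-task weights,
    "skip" contributes nothing, and "adapter" / "new" introduce fresh
    parameters trained for this task only.
    """
    params = []
    for i, choice in enumerate(topology):
        if choice in ("adapter", "new"):
            params.append(task_specific[i])
    return params

shared = ["W1_old", "W2_old", "W3_old"]   # weights from earlier tasks
task_specific = ["A1", "W2_new", "A3"]    # fresh per-task parameters
topology = ("adapter", "reuse", "new")    # one searched topology
print(trainable_parameters(topology, shared, task_specific))
# ['A1', 'A3']
```

Because the reused weights are left untouched, training the new task cannot overwrite what the network already knows, which is how the approach limits catastrophic forgetting.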

"We've run experiments using several datasets, and what we've found is that the more similar a new task is to previous tasks, the more overlap there is in terms of the existing layers that are kept to perform the new task," Li said. "What is more interesting is that, with the optimized - or "learned" - topology, a network trained to perform new tasks forgets very little of what it needed to perform the older tasks, even if the older tasks were not similar."

The researchers also ran experiments comparing the Learn to Grow framework's ability to learn new tasks to several other continual learning methods, and found that the Learn to Grow framework had better accuracy when completing new tasks.

To test how much each network may have forgotten when learning the new task, the researchers then tested each system's accuracy at performing the older tasks - and the Learn to Grow framework again outperformed the other networks.

"In some cases, the Learn to Grow framework actually got better at performing the old tasks," said Caiming Xiong, the research director of Salesforce Research and a co-author of the work. "This is called backward transfer, and occurs when you find that learning a new task makes you better at an old task. We see this in people all the time; not so much with AI."

"This Army investment extends the current state of the art machine learning techniques that will guide our Army Research Laboratory researchers as they develop robotic applications, such as intelligent maneuver and learning to recognize novel objects," Fields said. "This research brings AI a step closer to providing our warfighters with effective unmanned systems that can be deployed in the field."
-end-
The paper, "Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting," will be presented at the 36th International Conference on Machine Learning, being held June 9-15 in Long Beach, California. Co-lead authors of the paper are Tianfu Wu, Ph.D., an assistant professor of electrical and computer engineering at NC State, Xilai Li, a doctoral student at NC State, and Yingbo Zhou of Salesforce Research. The paper was co-authored by Richard Socher and Caiming Xiong of Salesforce Research.

The work was also supported by the National Science Foundation. Part of the work was done while Li was a summer intern at Salesforce AI Research.

The CCDC Army Research Laboratory (ARL) is an element of the U.S. Army Combat Capabilities Development Command. As the Army's corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the command's core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more effective to win our Nation's wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

U.S. Army Research Laboratory
