
Georgia Tech researchers demonstrate how the brain can handle so much data

December 15, 2015

Humans learn to very quickly identify complex objects and variations of them. We generally recognize an "A" no matter what the font, texture or background, for example, or the face of a coworker even if she puts on a hat or changes her hairstyle. We also can identify an object when just a portion is visible, such as the corner of a bed or the hinge of a door. But how? Are there simple techniques that humans use across diverse tasks? And can such techniques be computationally replicated to improve computer vision, machine learning or robotic performance?

Researchers at Georgia Tech discovered that humans can categorize data using less than 1 percent of the original information, and validated an algorithm to explain human learning -- a method that also can be used for machine learning, data analysis and computer vision.

"How do we make sense of so much data around us, of so many different types, so quickly and robustly?" said Santosh Vempala, Distinguished Professor of Computer Science at the Georgia Institute of Technology and one of four researchers on the project. "At a fundamental level, how do humans begin to do that? It's a computational problem."

Researchers Rosa Arriaga, Maya Cakmak, David Rutter, and Vempala at Georgia Tech's College of Computing studied human performance in "random projection" tests to understand how well humans learn an object. They presented test subjects with original, abstract images and then asked whether they could correctly identify that same image when randomly shown just a small portion of it.

"We hypothesized that random projection could be one way humans learn," Arriaga, a senior research scientist and developmental psychologist, explains. "The short story is, the prediction was right. Just 0.15 percent of the total data is enough for humans."
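The kind of compression at play can be illustrated with a toy random projection. In the sketch below, the data, the noise level, and the projection matrix are illustrative assumptions, not the study's stimuli or protocol: it simply shows that mapping the 22,500 values of a 150 x 150 image down to roughly 0.15 percent of them (about 34 numbers) tends to keep distinct items distinguishable.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 150 * 150          # dimensionality of a 150 x 150 pixel image
k = 34                 # roughly 0.15 percent of the original 22,500 values

# Two synthetic "images" from different categories, plus a noisy
# variant of the first (illustrative data, not the study's stimuli).
a = rng.random(d)
a_noisy = a + 0.05 * rng.standard_normal(d)
b = rng.random(d)

# A random projection: multiply by a k x d matrix of Gaussian entries.
P = rng.standard_normal((k, d)) / np.sqrt(k)

pa, pa_noisy, pb = P @ a, P @ a_noisy, P @ b

# Relative distances tend to survive the compression: the noisy copy
# of `a` stays much closer to `a` than the unrelated image `b` does.
print(np.linalg.norm(pa - pa_noisy) < np.linalg.norm(pa - pb))  # → True
```

The distance-preservation property this relies on is the Johnson-Lindenstrauss phenomenon, which underlies random projection as a dimensionality-reduction tool.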

Next, researchers tested a computational algorithm to allow machines (very simple neural networks) to complete the same tests. Machines performed as well as humans, which provides a new understanding of how humans learn. "We found evidence that, in fact, the human and the neural network behave very similarly," Arriaga said.

The researchers wanted to come up with a mathematical definition of what typical and atypical stimuli look like and, from that, predict which data would be hardest for the human and the machine to learn. Humans and machines performed equally well, demonstrating that one can indeed predict which data will be hardest to learn over time.

Results were recently published in the journal Neural Computation (MIT Press). It is believed to be the first study of "random projection," the core component of the researchers' theory, with human subjects.

To test their theory, researchers created three families of abstract images at 150 x 150 pixels, then very small "random sketches" of those images. Test subjects were shown the whole image for 10 seconds, then randomly shown 16 sketches of each. Using abstract images ensured that neither humans nor machines had any prior knowledge of what the objects were.
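One way to picture such a sketch is the snippet below, which assumes a sketch simply reveals a tiny random subset of pixels and masks the rest; the paper's exact construction may differ, and the image here is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

image = rng.integers(0, 2, size=(150, 150))   # a stand-in abstract image

# One "random sketch": reveal only about 0.15 percent of the pixels
# and mask the rest (an illustrative reading of the paper's sketches,
# not the authors' exact construction).
n_pixels = image.size
n_keep = max(1, round(0.0015 * n_pixels))     # 0.15% of 22,500 -> 34 pixels

keep = rng.choice(n_pixels, size=n_keep, replace=False)
sketch = np.full(n_pixels, -1)                # -1 marks hidden pixels
sketch[keep] = image.reshape(-1)[keep]
sketch = sketch.reshape(150, 150)

print(n_keep, np.count_nonzero(sketch != -1))  # → 34 34
```

Showing a subject 16 independent sketches like this would amount to 16 different random glimpses of the same underlying image.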

"We were surprised by how close the performance was between extremely simple neural networks and humans," Vempala said. "The design of neural networks was inspired by how we think humans learn, but it's a weak inspiration. To find that it matches human performance is quite a surprise."

"This fascinating paper introduces a localized random projection that compresses images while still making it possible for humans and machines to distinguish broad categories," said Sanjoy Dasgupta, professor of computer science and engineering at the University of California San Diego and an expert on machine learning and random projection. "It is a creative combination of insights from geometry, neural computation, and machine learning."

Although researchers cannot definitively claim that the human brain actually engages in random projection, the results support the notion that random projection is a plausible explanation, the authors conclude. In addition, it suggests a very useful technique for machine learning: large data is a formidable challenge today, and random projection is one way to make data manageable without losing essential content, at least for basic tasks such as categorization and decision making.
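As a sketch of that use for categorization, the example below projects high-dimensional points into a much smaller space before classifying them. The dimensions, the synthetic clusters, and the nearest-centroid rule are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated clusters in a high-dimensional space
# (synthetic stand-ins for "large data").
d, n = 10_000, 200
center_a, center_b = np.zeros(d), np.full(d, 1.0)
X = np.vstack([center_a + rng.standard_normal((n, d)),
               center_b + rng.standard_normal((n, d))])
y = np.array([0] * n + [1] * n)

# Randomly project everything down to 50 dimensions before classifying.
k = 50
P = rng.standard_normal((k, d)) / np.sqrt(k)
Z = X @ P.T

# Nearest-centroid categorization in the compressed space.
centroids = np.vstack([Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)])
pred = np.argmin(
    np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2), axis=1)

accuracy = (pred == y).mean()
print(accuracy)  # typically at or very near 1.0
```

The classifier never sees the original 10,000-dimensional points, yet the category structure survives the 200-fold compression.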

The algorithmic theory of learning based on random projection has already been cited more than 300 times and has become a commonly used technique in machine learning for handling large data of diverse types.

The complete research paper, "Visual Categorization with Random Projection," appears in the October issue of Neural Computation.

This work is partially funded by the National Science Foundation (CCF-0915903 and CCF-1217793). Any conclusions expressed are those of the principal investigator and may not necessarily represent the official views of the funding organizations.

Georgia Institute of Technology
