
SwRI engineers develop novel techniques to trick object detection systems

April 04, 2019

SAN ANTONIO -- April 4, 2019 -- New adversarial techniques developed by engineers at Southwest Research Institute can make objects "invisible" to image detection systems that use deep-learning algorithms. These techniques can also trick systems into thinking they see a different object, or into misjudging where an object is located. The research aims to mitigate the risk of compromise in automated image processing systems.

"Deep-learning neural networks are highly effective at many tasks," says Research Engineer Abe Garza of the SwRI Intelligent Systems Division. "However, deep learning was adopted so quickly that the security implications of these algorithms weren't fully considered."

Deep-learning algorithms excel at using shapes and color to recognize the differences between humans and animals or cars and trucks, for example. These systems reliably detect objects under an array of conditions and, as such, are used in myriad applications and industries, often for safety-critical uses. The automotive industry uses deep-learning object detection systems on roadways for lane-assist, lane-departure and collision-avoidance technologies. These vehicles rely on cameras to detect potentially hazardous objects around them. While the image processing systems are vital for protecting lives and property, the algorithms can be deceived by parties intent on causing harm.

Security researchers working in "adversarial learning" are finding and documenting vulnerabilities in deep- and other machine-learning algorithms. Using SwRI internal research funds, Garza and Senior Research Engineer David Chambers developed what look like futuristic, Bohemian-style patterns. When worn by a person or mounted on a vehicle, the patterns trick object detection cameras into thinking the objects aren't there, that they're something else or that they're in another location. Malicious parties could place these patterns near roadways, potentially creating chaos for vehicles equipped with object detectors.

"These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability," said Garza. "We call these patterns 'perception invariant' adversarial examples because they don't need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify the object as long as they sense some part of the pattern."
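The press release does not disclose how SwRI's patterns are generated, but the general idea behind adversarial examples can be illustrated with a standard textbook technique. The sketch below, which is purely illustrative and not SwRI's method, applies a fast-gradient-sign-style perturbation to a toy linear "detector": shifting every pixel a small step against the gradient of the score is enough to flip the classification, even though the change to each pixel is tiny.

```python
import numpy as np

# Toy linear "detector": a positive score means the patch is labeled "bus".
# The weight vector w stands in for a trained deep model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

# A patch the detector labels "bus" with a modest positive margin.
x = w / np.linalg.norm(w) * 0.2

def score(patch):
    return float(w @ patch)

# FGSM-style perturbation: step each pixel by epsilon against the gradient
# of the score. For a linear model, the gradient w.r.t. the input is w.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(score(x) > 0)      # original patch: classified "bus"
print(score(x_adv) > 0)  # perturbed patch: no longer classified "bus"
```

Real attacks on deep detectors use the same principle, but compute the gradient through the full network and often constrain the perturbation to a localized, printable patch so it works in the physical world.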

While they might look like unique and colorful displays of art to the human eye, these patterns are designed so that object-detection camera systems interpret them in a very specific way. A pattern disguised as an advertisement on the back of a stopped bus could make a collision-avoidance system think it sees a harmless shopping bag instead of the bus. If the vehicle's camera fails to detect the true object, it could continue moving forward and hit the bus, causing a potentially serious collision.

"The first step to resolving these exploits is to test the deep-learning algorithms," said Garza. The team has created a framework capable of repeatedly testing these attacks against a variety of deep-learning detection programs, which will be extremely useful for testing solutions.
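The release does not describe SwRI's testing framework, but the core loop of such a harness is simple to sketch: overlay each candidate pattern on a scene, run the detector, and record whether the true label survives. Everything below is hypothetical (the `detector` stub, the threshold, the pattern names); it only illustrates the shape of repeated attack-vs-detector testing.

```python
import numpy as np

def detector(image):
    """Stub stand-in for a deep-learning detector: labels an image 'bus'
    when its mean intensity crosses a threshold, else 'background'."""
    return "bus" if image.mean() > 0.5 else "background"

def evaluate_patterns(detector, scene, patterns, true_label):
    """Apply each candidate adversarial pattern to the scene and record
    whether the detector still reports the true label."""
    report = {}
    for name, pattern in patterns.items():
        attacked = np.clip(scene + pattern, 0.0, 1.0)  # overlay, keep pixels valid
        report[name] = detector(attacked) == true_label
    return report

scene = np.full((8, 8), 0.8)                 # a scene the stub labels "bus"
patterns = {
    "benign_noise": np.zeros((8, 8)),        # no perturbation; label should survive
    "dimming_patch": np.full((8, 8), -0.5),  # drives the mean below the threshold
}
report = evaluate_patterns(detector, scene, patterns, "bus")
# report -> {"benign_noise": True, "dimming_patch": False}
```

A production harness would swap the stub for real detection models and iterate over many scenes, pattern placements, and viewing angles, which is what makes such a framework useful for evaluating candidate defenses.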

SwRI researchers continue to evaluate how much, or how little, of the pattern is needed to misclassify or mislocate an object. Working with clients, the team will use this research to test object detection systems and ultimately improve the security of deep-learning algorithms.
-end-
To see how object detection cameras view the patterns, watch our video on YouTube at https://youtu.be/ylbVMMR4Eqg.

For more information on adversarial techniques for deep learning and machine learning, visit https://www.swri.org/perception-technologies-dynamic-environments.

Southwest Research Institute
