
SwRI engineers develop novel techniques to trick object detection systems

April 4, 2019

SAN ANTONIO -- April 4, 2019 -- New adversarial techniques developed by engineers at Southwest Research Institute can make objects "invisible" to image detection systems that use deep-learning algorithms. These techniques can also trick systems into seeing another object or into misjudging an object's location. The research aims to mitigate the risk of compromise in automated image processing systems.

"Deep-learning neural networks are highly effective at many tasks," says Research Engineer Abe Garza of the SwRI Intelligent Systems Division. "However, deep learning was adopted so quickly that the security implications of these algorithms weren't fully considered."

Deep-learning algorithms excel at using shapes and color to recognize the differences between humans and animals or cars and trucks, for example. These systems reliably detect objects under an array of conditions and, as such, are used in myriad applications and industries, often for safety-critical uses. The automotive industry uses deep-learning object detection systems on roadways for lane-assist, lane-departure and collision-avoidance technologies. These vehicles rely on cameras to detect potentially hazardous objects around them. While the image processing systems are vital for protecting lives and property, the algorithms can be deceived by parties intent on causing harm.

Security researchers working in "adversarial learning" are finding and documenting vulnerabilities in deep- and other machine-learning algorithms. Using SwRI internal research funds, Garza and Senior Research Engineer David Chambers developed what look like futuristic, Bohemian-style patterns. When worn by a person or mounted on a vehicle, the patterns trick object detection cameras into thinking the objects aren't there, that they're something else or that they're in another location. Malicious parties could place these patterns near roadways, potentially creating chaos for vehicles equipped with object detectors.

"These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability," said Garza. "We call these patterns 'perception invariant' adversarial examples because they don't need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify the object as long as they sense some part of the pattern."

While they might look like unique and colorful displays of art to the human eye, these patterns are designed in such a way that object-detection camera systems see them very specifically. A pattern disguised as an advertisement on the back of a stopped bus could make a collision-avoidance system think it sees a harmless shopping bag instead of the bus. If the vehicle's camera fails to detect the true object, it could continue moving forward and hit the bus, causing a potentially serious collision.

"The first step to resolving these exploits is to test the deep-learning algorithms," said Garza. The team has created a framework capable of repeatedly testing these attacks against a variety of deep-learning detection programs, which will be extremely useful for testing solutions.

SwRI researchers continue to evaluate how much, or how little, of a pattern is needed to cause an object to be misclassified or mislocated. Working with clients, the team will use this research to test object detection systems and ultimately improve the security of deep-learning algorithms.
-end-

To see how object detection cameras view the patterns, watch our video on YouTube at https://youtu.be/ylbVMMR4Eqg.

For more information on adversarial techniques for deep learning and machine learning, visit https://www.swri.org/perception-technologies-dynamic-environments.

Southwest Research Institute
