
Researchers get humans to think like computers

March 22, 2019

Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences and even school buses. People aren't supposed to be able to see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can.

The findings suggest modern computers may not be as different from humans as we think, and demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications.

"Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences. "Our project does the opposite -- we're asking whether people can think like computers."

What's easy for humans is often hard for computers. Artificial intelligence systems have long been better than people at doing math or remembering large quantities of information, but for decades humans have had the edge at recognizing everyday objects such as dogs, cats, tables or chairs. Recently, however, "neural networks" that mimic the brain have approached the human ability to identify objects, leading to technological advances that support self-driving cars and facial recognition programs and help physicians spot abnormalities in radiological scans.

But even with these technological advances, there's a critical blind spot: It's possible to purposely make images that neural networks cannot correctly see. And these images, called "adversarial" or "fooling" images, are a big problem: Not only could they be exploited by hackers and cause security risks, but they also suggest that humans and machines are actually seeing images very differently.

In some cases, all it takes for a computer to call an apple a car is reconfiguring a pixel or two. In other cases, machines see armadillos and bagels in what looks like meaningless television static.
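The pixel-level trick described above is often carried out by nudging each pixel in the direction that most increases the classifier's error. Below is a minimal sketch of that idea (a fast-gradient-sign-style perturbation), using PyTorch and a placeholder classifier; the model, image and epsilon value are illustrative assumptions, not the networks or images used in the study.

```python
# Sketch of a "fooling image" perturbation, assuming PyTorch and a toy classifier.
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained image classifier with 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # a 32x32 RGB image
true_label = torch.tensor([3])                         # its correct class

# Compute how the loss changes with respect to each pixel.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

With a real trained network, a perturbation this small can flip the predicted label while leaving the image looking unchanged to a person.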

"These machines seem to be misidentifying objects in ways humans never would," Firestone says. "But surprisingly, nobody has really tested this. How do we know people can't see what the computers did?"

To test this, Firestone and lead author Zhenglong Zhou, a Johns Hopkins senior majoring in cognitive science, essentially asked people to "think like a machine." Machines have only a relatively small vocabulary for naming images. So Firestone and Zhou showed people dozens of fooling images that had already tricked computers, and gave people the same kinds of labeling options that the machine had. In particular, they asked people which of two options the computer decided the object was -- one being the computer's real conclusion and the other a random answer. (Was the blob pictured a bagel or a pinwheel?) A sketch of such a trial appears below. It turns out people strongly agreed with the conclusions of the computers.
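As a rough illustration of the two-alternative trial structure described above, here is a small sketch; the label vocabulary and wording are hypothetical, not the study's actual stimuli or interface.

```python
# Sketch of one forced-choice trial: the machine's answer vs. a random foil.
import random

random.seed(1)

labels = ["bagel", "pinwheel", "pretzel", "armadillo", "school bus"]  # toy vocabulary
machine_label = "bagel"  # what the network called the fooling image

# Offer the machine's answer alongside one randomly drawn foil label,
# shuffling which option appears first.
foil = random.choice([label for label in labels if label != machine_label])
options = random.sample([machine_label, foil], k=2)

print(f"Which label did the computer give this image? {options[0]} or {options[1]}?")
```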

People chose the same answer as computers 75 percent of the time. Perhaps even more remarkably, 98 percent of people tended to answer like the computers did.

Next, the researchers upped the ante by giving people a choice between the computer's favorite answer and its next-best guess. (Was the blob pictured a bagel or a pretzel?) People again validated the computer's choices, with 91 percent of those tested agreeing with the machine's first choice.

Even when the researchers had people guess among 48 choices for what the object was, and even when the pictures resembled television static, an overwhelming proportion of subjects chose the machine's answer at rates well above random chance. A total of 1,800 subjects were tested across the various experiments.
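For intuition on what "well above random chance" means in the 48-choice condition, here is a minimal sketch of an above-chance agreement check on simulated responses; the trial counts, agreement rate and binomial test are illustrative assumptions, not the study's actual analysis.

```python
# Sketch: is a subject's agreement with the machine above the 1-in-48 chance rate?
import random
from math import comb

random.seed(0)

n_choices = 48           # labels offered on each trial (as in the 48-way condition)
n_trials = 100           # hypothetical number of fooling images shown to one person
chance_rate = 1 / n_choices

# Simulated subject who picks the machine's label on 20% of trials.
responses = [random.random() < 0.20 for _ in range(n_trials)]
k = sum(responses)
agreement = k / n_trials

# One-sided binomial test against the chance rate.
p_value = sum(comb(n_trials, i) * chance_rate**i * (1 - chance_rate)**(n_trials - i)
              for i in range(k, n_trials + 1))

print(f"agreement with the machine: {agreement:.0%} (chance = {chance_rate:.1%})")
print(f"one-sided binomial p-value: {p_value:.3g}")
```

Even a modest agreement rate is far beyond what guessing among 48 labels would produce, which is the sense in which subjects' choices tracked the machines'.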

"We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines," Firestone says. "This is still a problem for artificial intelligence, but it's not like the computer is saying something completely unlike what a human would say."
-end-


Johns Hopkins University

