Computer bots are more like humans than you might think, having fights lasting years

February 23, 2017

Researchers say 'benevolent bots', software robots designed to improve articles on Wikipedia, sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other, non-editing bots can mine data, identify data or identify copyright infringements.

The team analysed how much the bots disrupted Wikipedia, observing how they interacted across 13 different language editions over ten years (2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and that this led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect: they appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence to build autonomous vehicles and cyber security systems or to manage social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures.

The research paper, by the University of Oxford and the Alan Turing Institute in the UK, explains that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor. Although bots are automatons without the capacity for emotion, their interactions with one another are unpredictable and distinctive. The paper finds that the German edition of Wikipedia had the fewest conflicts between bots, with each bot undoing another's edits 24 times, on average, over ten years. This shows relative efficiency, says the research paper, when compared with bots on the Portuguese Wikipedia edition, which each undid another bot's edits 185 times, on average, over the same period. Bots on English Wikipedia undid another bot's work 105 times, on average, over ten years, three times the rate of human reverts, says the paper.

The findings show that even simple autonomous algorithms can produce complex interactions with unintended consequences: 'sterile fights' that may continue for years, or in some cases reach deadlock. The paper says that while bots constitute a tiny proportion (0.1%) of Wikipedia editors, they stand behind a significant proportion of all edits. And although such conflicts represent a small share of the bots' overall editorial activity, the findings are significant in highlighting their unpredictability and complexity. Smaller language editions, such as the Polish Wikipedia, have far more content created by bots than large language editions such as English Wikipedia.

Lead author Dr Milena Tsvetkova, from the Oxford Internet Institute, said: 'We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors. This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots.'

The paper was co-authored by the principal investigator of the EC-Horizon2020-funded project, HUMANE, Professor Taha Yasseri, also from the Oxford Internet Institute. He added: 'The findings show that even the same technology leads to different outcomes depending on the cultural environment. An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy. Similarly, the local online infrastructure that bots inhabit will have some bearing on how they behave and their performance. Bots are designed by humans from different countries so when they encounter one another, this can lead to online clashes. We see differences in the technology used in the different Wikipedia language editions and the different cultures of the communities of Wikipedia editors involved create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence.'

Professor Luciano Floridi, also an author of the paper, remarked: 'We tend to forget that coordination even among collaborative agents is often achieved only through frameworks of rules that facilitate the wanted outcomes. This infrastructural ethics or infra-ethics needs to be designed as much and as carefully as the agents that inhabit it.'

The research finds that the number of reverts is smaller for bots than for humans, but bots' behaviour is more varied, and conflicts involving bots last longer and are triggered later. For humans, the time between successive reverts typically clusters around two minutes, 24 hours or one year, says the paper. The first bot-to-bot revert happened a month later, on average, but further reverts often continued for years. The paper suggests that humans can react more quickly because they use automatic tools that report live changes, whereas bots systematically crawl through articles and can be restricted in the number of edits they are allowed to make. That bots' conflicts are so long-lasting also suggests that humans are failing to spot the problems early enough, says the paper.
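To make the timing comparison concrete, here is a minimal illustrative sketch (Python, with purely hypothetical timestamps rather than data from the study) of how the gaps between successive reverts in a single conflict might be computed:

```python
# Minimal sketch of the timing analysis described above: given timestamped
# reverts within one bot-bot conflict, compute the gaps between them.
# All timestamps here are hypothetical.
from datetime import datetime

revert_times = [
    datetime(2008, 3, 1, 12, 0),    # first revert in the conflict
    datetime(2008, 4, 2, 9, 30),    # roughly a month later
    datetime(2009, 1, 15, 17, 45),  # conflicts can drag on for years
]

gaps_in_days = [
    (later - earlier).total_seconds() / 86400  # 86400 seconds per day
    for earlier, later in zip(revert_times, revert_times[1:])
]
print([f"{gap:.1f} days" for gap in gaps_in_days])
# Human inter-revert gaps cluster around minutes, hours or a year; the
# month-to-year gaps here reflect the bot pattern reported in the paper.
```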
-end-
For more information, contact the University of Oxford News Office on +44 (0) 1865 280534 or email: news.office@admin.ox.ac.uk

Notes for Editors

*The paper, 'Even Good Bots Fight: The Case of Wikipedia', is by Milena Tsvetkova, Ruth Garcia-Gavilanes, Luciano Floridi, and Taha Yasseri.

*It will be published in PLOS ONE on Thursday, February 23, 2017 at 2 pm (Eastern Time). Once live, the article can be found at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774

*The team identified bots' editorial activity mainly through Wikipedia's own flagging system, which requires human editors to create separate accounts for bots. Bot account names have to show that the author is a bot. The researchers also trawled Wikipedia's API (Application Programming Interface) pages, checking the User page for each suspected bot account to link bots to their human owners. They modelled interactions between pairs of bots involved in successive reverts as trajectories that trace who reverts more over time. Then they used simple machine learning techniques to identify different types of trajectories and analyse how commonly they occur for bots versus humans, and across the different language editions of Wikipedia. (A minimal code sketch of the bot-identification step follows these notes.)

*Bots vary from web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content editing in online collaboration communities.
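As a rough illustration of the bot-identification step described in the notes above, the following minimal Python sketch (not the authors' code; the endpoint and parameters are the standard public MediaWiki API, while the function name is ours) lists the flagged bot accounts on a given language edition:

```python
# Minimal sketch: list accounts flagged in the 'bot' user group on one
# Wikipedia language edition, using the public MediaWiki API.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"  # change the subdomain for other language editions

def list_bot_accounts(limit=500):
    """Return account names in the 'bot' group; Wikipedia requires bot
    operators to register these separately from their own accounts."""
    params = {
        "action": "query",
        "list": "allusers",
        "augroup": "bot",
        "aulimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return [user["name"] for user in response.json()["query"]["allusers"]]

if __name__ == "__main__":
    bots = list_bot_accounts()
    print(f"{len(bots)} flagged bot accounts, e.g. {bots[:5]}")
```

Linking bots to their human owners and classifying revert trajectories, as the note describes, would build on output like this.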

University of Oxford
