All that glitters is not gold: Misuse of AI by big tech can harm developing countries

August 27, 2020

Artificial Intelligence (AI) has generated considerable interest over the past few decades, owing to its promising applications across a wide range of fields. But it has also sparked an ongoing debate about whether its risks outweigh its benefits. The biggest concern with AI is a lack of governance, which gives large technology companies (popularly called "Big Tech") largely unchecked access to private data. Several recent scandals have confirmed these threats, most notably the Cambridge Analytica scandal of 2018, in which the personal data of millions of Facebook users was harvested without their consent and exploited by a data-mining and political consulting firm. Moreover, although AI should be developed in a socially responsible way, governments often do not impose strict legislation on AI development, which can make it detrimental, rather than beneficial, to society.

In a new study published in Sustainable Development, Dr Jon Truby of Qatar University discusses how unregulated AI threatens the Sustainable Development Goals (SDGs), a set of goals adopted by the United Nations (UN) for the sustainable development of all countries. Dr Truby points out that this threat is especially prevalent in developing nations, which often relax AI regulations to attract investment from Big Tech. Dr Truby explains, "In this study, I propose the need for proactive regulatory measures in AI development, which would help to ensure that AI operates to benefit sustainable development."

In his study, Dr Truby discusses three examples to show how unregulated AI can be detrimental to the SDGs. To begin with, he focuses on SDG 16, a goal developed to tackle corruption, organized crime, and terrorism. Because AI is commonly used in national security databases, he explains, it can be misused by criminals to launder money or organize crime, a risk that is especially acute in developing countries, where input data may be easily accessible because of weak protective measures. To prevent this, Dr Truby suggests a risk assessment at each stage of AI development. Moreover, the AI software should be designed to become inaccessible whenever there is a threat of it being hacked; such a restriction minimizes the risk of hackers gaining access to the software.
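Such a "lock down on threat" safeguard can be illustrated with a brief sketch. The following Python snippet is not from Dr Truby's study; the class name, the failure threshold, and the lockout period are illustrative assumptions showing one way an AI service might refuse requests once repeated failed access attempts suggest a hacking attempt.

```python
# Minimal sketch of a self-locking AI service (illustrative only, not from the study).
# All names and thresholds here are assumptions made for the example.

import time

class GuardedAIService:
    def __init__(self, max_failures=3, lockout_seconds=300):
        self.max_failures = max_failures        # failed attempts tolerated before lockdown
        self.lockout_seconds = lockout_seconds  # how long the service stays inaccessible
        self.failures = 0
        self.locked_until = 0.0

    def authenticate(self, credential, expected="s3cret"):
        """Record failed attempts; lock the service once the threshold is reached."""
        if credential == expected:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked_until = time.time() + self.lockout_seconds
        return False

    def query(self, request):
        """Refuse to serve requests while the service is locked down."""
        if time.time() < self.locked_until:
            raise PermissionError("Service locked: possible intrusion attempt.")
        return f"result for {request!r}"  # placeholder for the real model call
```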

Then, Dr Truby takes the example of SDG 8, a goal that seeks, among other things, to increase public access to financial services. AI is regularly used in financial institutions to make banking simpler and more efficient. But as it learns from historical data, an AI system can inadvertently develop biases, for example by reducing financial opportunities for certain minorities. Dr Truby explains that avoiding such biases requires transparency in AI-driven processes: human review and intervention at each step can ensure that discrimination does not go unnoticed. Moreover, software developers need to be trained to recognize the harmful implications of bias, so that it can be addressed more effectively.
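One concrete form such human review could take is a simple bias audit of a model's decisions before deployment. The sketch below is not from the study; the data, group labels, and the 0.8 threshold (the common "four-fifths rule" used in disparate-impact testing) are assumptions chosen for illustration.

```python
# Minimal sketch of a pre-deployment bias audit (illustrative assumptions throughout).

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

if __name__ == "__main__":
    # Hypothetical model outputs for review, not real data.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", False), ("B", False), ("B", True)]
    print({g: round(r, 2) for g, r in approval_rates(sample).items()})  # {'A': 0.67, 'B': 0.33}
    print(disparate_impact_flags(sample))  # groups needing human review
```

In practice, a reviewer would run a check of this kind on real model outputs and investigate any flagged group before the system is allowed to influence lending decisions.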

Finally, Dr Truby explains how AI can threaten SDG 10, a goal that focuses on reducing inequality and ensuring equal opportunity. Big firms can use AI to generate employment opportunities in developing countries, but this may come at the expense of smaller businesses and local companies. However, if designed with sustainable development in mind, AI can create better job opportunities and increase productivity by automating labor-intensive tasks.

Inarguably, AI is a powerful technology, but it needs to be used carefully and responsibly. Although Dr Truby is optimistic about the future of AI, he believes that developers and legislators must exercise caution through effective governance. He concludes, "The risks of AI to society and the possible detriments to sustainable development can be severe if not managed correctly. On the flip side, regulating AI can be immensely beneficial to development, leading to people being more productive and more satisfied with their employment and opportunities."
-end-
About Dr Jon Truby

Dr Jon Truby is an Associate Professor of Law and the Director of the Centre for Law and Development at the College of Law, Qatar University. His research interests include policy issues related to Artificial Intelligence, financial technology, and the UN Sustainable Development Goals. In 2018, he spoke at a UNGA panel on the sustainability concerns raised by digital currencies, given the large environmental footprint of Bitcoin and other blockchain technologies. His current research focuses on the risks of AI to sustainable development.

Qatar University, College of Law
