
World leaders still need to wake up to AI risks, say leading experts ahead of AI Safety Summit

05.20.24 | University of Oxford



Leading AI scientists are calling on world leaders to take stronger action on AI risks, warning that progress has been insufficient since the first AI Safety Summit at Bletchley Park six months ago.

At that summit, world leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May) approaches, twenty-five of the world's leading AI scientists say not enough is being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats posed by AI technologies.

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in the face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems, outperforming human abilities across many critical domains, will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and have made some attempts at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.

Current research into AI safety is seriously lacking: only an estimated 1-3% of AI publications concern safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The 25 co-authors, all leading academic experts in AI and its governance, include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman. They hail from the US, China, the EU, the UK, and other AI powers, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

Establish fast-acting, expert institutions for AI oversight, and provide these with far greater funding than they are due to receive under almost any current policy plan.

Mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.

Require AI companies to prioritise safety, and to demonstrate that their systems cannot cause harm.

Implement mitigation standards commensurate with the risk levels posed by AI systems.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and of the biosphere, and in the marginalization or extinction of humanity.

Stuart Russell OBE, Professor of Computer Science at the University of California, Berkeley, and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations, that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notes for editors:

The paper ‘Managing extreme AI risks amid rapid progress’ will be published in Science at 19:00 BST / 14:00 ET on Monday 20 May 2024. DOI: 10.1126/science.adn0117. Link: http://www.science.org/doi/10.1126/science.adn0117

To view a copy of the paper ahead of publication, contact scipak@aaas.org or see the Science press package at https://www.eurekalert.org/press/scipak/.


The following sections cover interview availability and additional quotes from the authors.

Interviews

For coordination purposes, you can contact study co-authors Jan Brauner and Sören Mindermann. We are also available for interviews, but you may prefer to interview our more senior co-authors (see below).

The following senior co-authors have agreed to be available for interviews:

Stuart Russell: Professor in AI at UC Berkeley, author of the standard textbook on AI

Gillian Hadfield: CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto

Jeff Clune: CIFAR AI Chair, Professor in AI at the University of British Columbia, one of the leading researchers in reinforcement learning

Tegan Maharaj: Assistant Professor in AI at the University of Toronto


Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

Yuval Noah Harari, Professor of History at the Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, and world-leading public intellectual:

Jeff Clune, Professor in AI at the University of British Columbia and one of the leading researchers in reinforcement learning:

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

“AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

About the University of Oxford

Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the eighth year running, and number 3 in the QS World Rankings 2024. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer.

Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work helps improve the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research, alongside our personalised approach to teaching, spark imaginative and inventive insights and solutions.

Through its research commercialisation arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 300 new companies since 1988. Over a third of these companies have been created in the past five years. The university is a catalyst for prosperity in Oxfordshire and the United Kingdom, contributing £15.7 billion to the UK economy in 2018/19, and supports more than 28,000 full-time jobs.


Contact Information

Caroline Wood
University of Oxford
caroline.wood@admin.ox.ac.uk

Jan Brauner
University of Oxford
jan.m.brauner@gmail.com
