
Can AI be too good to use?

12.12.23 | University of California - Davis

Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is “trustworthy”: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper published Dec. 7 in Frontiers in Artificial Intelligence poses a different question: What if an AI is just too good?

Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, on the attitudes of the food industry toward adopting AI. A notable issue was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.

For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.

“The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted,” Alexander said.

Alexander and her co-authors, Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary “on-ramp” that would allow companies to begin using AI while exploring the benefits, risks and ways to mitigate them. This would also give courts, legislators and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political and regulatory decisions.

“We need ways for businesses to opt in and try out AI technology,” Alexander said. Subsidies, for example for digitizing existing records, might be especially helpful for small companies.

“We’re really hoping to generate more research and discussion on what could be a significant issue,” Alexander said. “It’s going to take all of us to figure it out.”

The work was supported in part by a grant from the USDA National Institute of Food and Agriculture. The AI Institute for Next Generation Food Systems is funded by a grant from USDA-NIFA and is one of 25 AI institutes established by the National Science Foundation in partnership with other agencies.

Article Information

Journal: Frontiers in Artificial Intelligence
DOI: 10.3389/frai.2023.1298604
Method of Research: Survey
Subject of Research: People
Article Title: Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system
Article Publication Date: 7-Dec-2023
COI Statement: All persons involved in the production of this manuscript are informed and familiar with the provided results and this publication. Financial interests: CA receives a salary as a postdoctoral researcher from AIFS. RI and AS receive partial support from AIFS for participation in its Food Safety, Data Privacy, and Socioeconomics and Ethics research. RI receives partial research support from CIDA. Non-financial interests: RI and AS serve on the Executive Committee for AIFS; RI is CIDA's co-director.

Contact Information

Andrew Fell
University of California - Davis
ahfell@ucdavis.edu

How to Cite This Article

APA:
University of California - Davis. (2023, December 12). Can AI be too good to use? Brightsurf News. https://www.brightsurf.com/news/LMJZRN4L/can-ai-be-too-good-to-use.html
MLA:
"Can AI be too good to use?" Brightsurf News, 12 Dec. 2023, https://www.brightsurf.com/news/LMJZRN4L/can-ai-be-too-good-to-use.html.