
New technique helps AI tell when humans are lying

03.18.24 | North Carolina State University


Researchers have developed a new training tool to help artificial intelligence (AI) programs better account for the fact that humans don’t always tell the truth when providing personal information. The new tool was developed for use in contexts when humans have an economic incentive to lie, such as applying for a mortgage or trying to lower their insurance premiums.

“AI programs are used in a wide variety of business contexts, such as helping to determine how large of a mortgage an individual can afford, or what an individual’s insurance premiums should be,” says Mehmet Caner, co-author of a paper on the work. “These AI programs generally use mathematical algorithms driven solely by statistics to do their forecasting. But the problem is that this approach creates incentives for people to lie, so that they can get a mortgage, lower their insurance premiums, and so on.

“We wanted to see if there was some way to adjust AI algorithms in order to account for these economic incentives to lie,” says Caner, who is the Thurman-Raytheon Distinguished Professor of Economics in North Carolina State University’s Poole College of Management.

To address this challenge, the researchers developed a new set of training parameters that can be used to inform how the AI teaches itself to make predictions. Specifically, the new training parameters focus on recognizing and accounting for a human user’s economic incentives. In other words, the AI trains itself to recognize circumstances in which a human user might lie to improve their outcomes.

In proof-of-concept simulations, the modified AI was better able to detect inaccurate information from users.
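The intuition behind the approach can be sketched with a toy example. The paper studies the incentive compatibility of lasso-type estimators, and in a lasso, a stronger penalty shrinks the coefficient on a manipulable feature, which directly shrinks the payoff from misreporting it. The sketch below is a hypothetical illustration using an off-the-shelf lasso, not the authors' actual estimator, data, or training parameters; the feature roles, coefficients, and penalty values are all assumed for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical setup (not the paper's model): a lender predicts a premium
# from applicant features, and feature 0 (say, a reported-income proxy) can
# be misreported by the applicant.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.standard_normal((n, p))
beta = np.array([2.0, 1.0, 0.5, 0.0, 0.0])  # assumed true coefficients
y = X @ beta + 0.5 * rng.standard_normal(n)

def lying_payoff(alpha, delta=1.0, j=0):
    """Drop in the predicted premium from inflating feature j by delta,
    under a lasso fit with penalty strength alpha."""
    model = Lasso(alpha=alpha).fit(X, y)
    return abs(model.coef_[j]) * delta

weak = lying_payoff(alpha=0.01)   # lightly regularized fit
strong = lying_payoff(alpha=1.0)  # heavily regularized fit
print(f"payoff to lying: weak penalty {weak:.3f}, strong penalty {strong:.3f}")
```

Under the heavier penalty, the coefficient on the manipulable feature shrinks, so the same lie buys a smaller change in the prediction, weakening the incentive to misreport in the first place.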

“This effectively reduces a user’s incentive to lie when submitting information,” Caner says. “However, small lies can still go undetected. We need to do some additional work to better understand where the threshold is between a ‘small lie’ and a ‘big lie.’”

The researchers are making the new AI training parameters publicly available, so that AI developers can experiment with them.

“This work shows we can improve AI programs to reduce economic incentives for humans to lie,” Caner says. “At some point, if we make the AI clever enough, we may be able to eliminate those incentives altogether.”

The paper, “Should Humans Lie to Machines? The Incentive Compatibility of Lasso and GLM Structured Sparsity Estimators,” is published in the Journal of Business & Economic Statistics. The paper was co-authored by Kfir Eliaz of Tel-Aviv University and the University of Utah.

Article Information

Journal: Journal of Business & Economic Statistics
DOI: 10.1080/07350015.2024.2316102
Method of research: Computational simulation/modeling
Article publication date: 12-Mar-2024

Contact Information

Matt Shipman
North Carolina State University
matt_shipman@ncsu.edu


How to Cite This Article

APA:
North Carolina State University. (2024, March 18). New technique helps AI tell when humans are lying. Brightsurf News. https://www.brightsurf.com/news/1WR379WL/new-technique-helps-ai-tell-when-humans-are-lying.html
MLA:
"New technique helps AI tell when humans are lying." Brightsurf News, 18 Mar. 2024, https://www.brightsurf.com/news/1WR379WL/new-technique-helps-ai-tell-when-humans-are-lying.html.