
New AI framework aims to remove bias in key areas such as health, education, and recruitment

02.18.25 | Universidad de Navarra



Researchers from the Data Science and Artificial Intelligence Institute (DATAI) of the University of Navarra (Spain) have published an innovative methodology that improves the fairness and reliability of artificial intelligence models used in critical decision-making. These decisions significantly impact people's lives or the operations of organizations, as occurs in areas such as health, education, justice, or human resources.

The team, formed by researchers Alberto García Galindo, Marcos López De Castro and Rubén Armañanzas Arnedillo, has developed a new theoretical framework that optimizes the parameters of reliable machine learning models. These models are AI algorithms that transparently make predictions, ensuring certain confidence levels. In this contribution, the researchers propose a methodology able to reduce inequalities related to sensitive attributes such as race, gender, or socioeconomic status.

The study appears in Machine Learning, one of the leading scientific journals in artificial intelligence and machine learning. It combines advanced prediction techniques (conformal prediction) with algorithms inspired by natural evolution (evolutionary learning). The derived algorithms offer rigorous confidence levels and ensure equitable coverage among different social and demographic groups. Thus, this new AI framework provides the same reliability level regardless of individuals' characteristics, ensuring fair and unbiased results.
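To make the "equitable coverage" idea concrete, the following is a minimal sketch of split conformal prediction, the family of techniques the study builds on. The data, the point predictor, and the binary sensitive attribute are synthetic stand-ins invented for illustration (this is not the authors' code): a single pooled calibration quantile gives the promised marginal coverage overall, yet the noisier group ends up under-covered, which is exactly the kind of disparity the new framework targets.

```python
# Sketch: split conformal prediction with a per-group coverage check.
# All data and model choices below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data with a binary sensitive attribute;
# group 1 has noisier outcomes than group 0.
n = 2000
group = rng.integers(0, 2, size=n)
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.0 + group, size=n)

# Split the data: first half calibrates the interval width,
# second half evaluates coverage.
cal = np.arange(0, n // 2)
test = np.arange(n // 2, n)

# A fixed point predictor (here simply the true mean function).
pred = 2.0 * x

# Split conformal step: the (1 - alpha) quantile of absolute
# residuals on the calibration set sets the interval half-width.
alpha = 0.1
scores = np.abs(y[cal] - pred[cal])
level = np.ceil((len(cal) + 1) * (1 - alpha)) / len(cal)
q = np.quantile(scores, level)

# Coverage on the test set: marginal vs. per-group.
covered = np.abs(y[test] - pred[test]) <= q
print(f"marginal coverage: {covered.mean():.3f}")
for g in (0, 1):
    mask = group[test] == g
    print(f"group {g} coverage: {covered[mask].mean():.3f}")
```

Running this, the marginal coverage sits near the promised 90%, but the low-noise group is over-covered while the high-noise group is under-covered: the guarantee holds on average, not per group.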

"The widespread use of artificial intelligence in sensitive fields has raised ethical concerns due to possible algorithmic discrimination," explains Armañanzas Arnedillo, principal investigator of DATAI at the University of Navarra. "Our approach enables businesses and public policymakers to choose models that balance efficiency and fairness according to their needs, or to respond to emerging regulations. This breakthrough is part of the University of Navarra's commitment to fostering a responsible AI culture and promoting ethical and transparent use of this technology."

Application in real scenarios

Researchers tested this method on four benchmark datasets with different characteristics from real-world domains related to economic income, criminal recidivism, hospital readmission, and school applications. The results showed that the new prediction algorithms significantly reduced inequalities without compromising the accuracy of the predictions. "In our analysis, we found, for example, striking biases in the prediction of school admissions, evidencing a significant lack of fairness based on family financial status," notes Alberto García Galindo, DATAI predoctoral researcher at the University of Navarra and first author of the paper. "In turn, these experiments demonstrated that, on many occasions, our methodology manages to reduce such biases without compromising the model's predictive ability. Specifically, with our model, we found solutions in which discrimination was practically eliminated while prediction accuracy was maintained." The methodology offers a 'Pareto front' of optimal algorithms, "which allows us to visualize the best available options according to priorities and to understand, for each case, how algorithmic fairness and accuracy are related."
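The 'Pareto front' the researchers describe can be sketched as follows. This is a hypothetical illustration, not the paper's method: each candidate model configuration is scored on two objectives, here assumed to be average prediction-set size (a proxy for predictive efficiency, smaller is better) and the coverage gap between sensitive groups (smaller is fairer), and the front keeps only the configurations that no other candidate beats on both objectives at once.

```python
# Sketch: extract the Pareto front from candidate model configurations
# scored on two minimized objectives. The scores are invented examples.

def pareto_front(candidates):
    """Return the candidates not dominated on (set_size, coverage_gap).

    A candidate is dominated if some other candidate is at least as
    good on both objectives and strictly better on one.
    """
    front = []
    for i, (s_i, g_i) in enumerate(candidates):
        dominated = any(
            (s_j <= s_i and g_j < g_i) or (s_j < s_i and g_j <= g_i)
            for j, (s_j, g_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((s_i, g_i))
    return sorted(front)

# (average set size, coverage gap) for six hypothetical configurations
candidates = [
    (1.2, 0.08), (1.3, 0.02), (1.5, 0.01),
    (1.4, 0.06), (1.25, 0.05), (1.6, 0.03),
]
print(pareto_front(candidates))
# → [(1.2, 0.08), (1.25, 0.05), (1.3, 0.02), (1.5, 0.01)]
```

The surviving points trace the trade-off curve: moving along the front, one accepts slightly wider prediction sets in exchange for a smaller fairness gap, which is the visualization the researchers describe offering to decision-makers.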

According to the researchers, this innovation has vast potential in sectors where AI must support reliable and ethical critical decision-making. García Galindo points out that their method "not only contributes to fairness but also enables a deeper understanding of how the configuration of models influences the results, which could guide future research in the regulation of AI algorithms." The researchers have made the code and data from the study publicly available to encourage further research applications and transparency in this emerging field.

Article Information

Journal: Machine Learning
DOI: 10.1007/s10994-024-06721-w
Method of Research: Meta-analysis
Subject of Research: Not applicable
Article Title: Fair prediction sets through multi-objective hyperparameter optimization
Article Publication Date: 17-Jan-2025

Contact Information

Miriam Salcedo
Universidad de Navarra
miriamsalcedo@unav.es

How to Cite This Article

APA:
Universidad de Navarra. (2025, February 18). New AI framework aims to remove bias in key areas such as health, education, and recruitment. Brightsurf News. https://www.brightsurf.com/news/8X5Q4PP1/new-ai-framework-aims-to-remove-bias-in-key-areas-such-as-health-education-and-recruitment.html
MLA:
"New AI framework aims to remove bias in key areas such as health, education, and recruitment." Brightsurf News, 18 Feb. 2025, https://www.brightsurf.com/news/8X5Q4PP1/new-ai-framework-aims-to-remove-bias-in-key-areas-such-as-health-education-and-recruitment.html.