New defense strategy for federated learning, capping accuracy loss at 0.47%

07.24.25 | Higher Education Press

Researchers at Beihang University, in collaboration with the Beijing Zhongguancun Laboratory, have developed a new defense strategy called Long-Short Historical Gradient Federated Learning (LSH-FL), which maintains accuracy losses from attacks below 1% on key benchmarks. This approach directly tackles the risk of malicious clients hijacking decentralized model training.

New Defense Shields Federated Learning from Poisoning Threats in Healthcare, Autonomous Vehicles & Finance

Federated learning enables devices—such as smartphones or medical sensors—to train models collaboratively without sharing raw data; however, it is vulnerable to “poisoning” attacks that send malicious updates to the server. Such attacks pose a threat to applications in healthcare diagnostics, autonomous vehicles, and finance. By making federated learning more robust, LSH-FL can help ensure safer, more trustworthy AI for both industry and consumers.
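To make the setting concrete, the sketch below shows plain federated averaging in Python and how a single malicious client's oversized update can drag the aggregated model away from what the honest clients learned. It is illustrative only and not code from the study; the function name fed_avg and the toy gradient values are assumptions.

```python
# Minimal illustrative sketch (not from the study): plain federated averaging,
# showing how one attacker's oversized update skews the aggregate.
import numpy as np

def fed_avg(client_updates):
    """Average client model updates into one global update."""
    return np.mean(np.stack(client_updates), axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # nine benign clients
poisoned = [np.full(4, 10.0)]                              # one malicious, oversized update
print("clean aggregate:   ", fed_avg(honest))
print("poisoned aggregate:", fed_avg(honest + poisoned))   # pulled far from the benign average
```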

New Model Caps MNIST Accuracy Drop at 0.47% and Keeps CIFAR-10 Loss Under 4% Even with 50% Attackers

The experiments produced the following clear outcomes:

- On MNIST, the accuracy drop under attack was capped at 0.47%.
- On CIFAR-10, the accuracy loss stayed under 4% even when 50% of the participating clients were attackers.

Novel Two-Pronged Approach Uses Randomized Tweaks and Gradient History to Sniff Out Malicious Updates

To develop LSH-FL, the researchers combined two complementary strategies that work together to defend against poisoning while preserving privacy. First, they introduced short-term perturbations by adding minor, randomized adjustments to each client’s latest model updates; this makes it difficult for attackers to blend malicious changes with legitimate contributions. Second, they implemented long-term detection by maintaining a lightweight history of past updates and identifying patterns that deviate from normal behavior, allowing the system to flag and discard suspicious inputs. All experiments were conducted in standard federated learning environments, without accessing raw data, and tested under realistic network conditions to ensure the approach remains practical and efficient.
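The sketch below illustrates how those two ideas could fit together in code. It is a hypothetical reading of the description above, not the authors' implementation: the function names (perturb, is_suspicious, robust_aggregate), the noise scale, and the deviation threshold are all illustrative assumptions.

```python
# Hypothetical sketch of the two complementary defenses described above
# (not the authors' code): short-term perturbation of the latest updates,
# plus long-term, history-based screening of suspicious clients.
import numpy as np

rng = np.random.default_rng(1)

def perturb(update, scale=0.01):
    """Short-term defense: add a small randomized adjustment to the latest update."""
    return update + rng.normal(0.0, scale, size=update.shape)

def is_suspicious(update, history, threshold=3.0):
    """Long-term defense: flag updates that deviate sharply from this client's history."""
    if len(history) < 3:                         # too little history to judge yet
        return False
    hist = np.stack(history)
    mean = hist.mean(axis=0)
    std = hist.std(axis=0) + 1e-8                # avoid division by zero
    z = np.abs((update - mean) / std)            # per-coordinate deviation score
    return z.mean() > threshold

def robust_aggregate(updates, histories):
    """Keep only non-suspicious updates, perturb them, and average the rest."""
    kept = []
    for client_id, update in updates.items():
        if not is_suspicious(update, histories[client_id]):
            kept.append(perturb(update))
            histories[client_id].append(update)  # extend this client's lightweight history
    return np.mean(np.stack(kept), axis=0) if kept else None
```

In this reading, the server never touches raw data: it works only with model updates and a compact per-client history, which matches the privacy constraint described above.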

“By combining short-term perturbations with long-term gradient history, we’ve found a practical way to keep federated learning both accurate and secure—even when half the participants turn malicious,” said Prof. Zhilong Mi.

Potential Solution Delivers <1% Accuracy Loss and Privacy-Preserving Security for Distributed AI

LSH-FL provides a practical, low-overhead approach to hardening federated learning against malicious participants without compromising accuracy or privacy. As industries increasingly rely on decentralized AI, this approach could become a key component in deploying safe and reliable distributed learning systems. The full research article was published in Frontiers of Computer Science in May 2025 ( https://doi.org/10.1007/s11704-025-40924-1 ).

Article Information

Journal: Frontiers of Computer Science

DOI: 10.1007/s11704-025-40924-1

Method of Research: Experimental study

Subject of Research: Not applicable

Article Title: Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on history gradients

Article Publication Date: 28-Apr-2025

Contact Information

Rong Xie
Higher Education Press
xierong@hep.com.cn

Source

Higher Education Press

How to Cite This Article

APA:
Higher Education Press. (2025, July 24). New defense strategy for federated learning, capping accuracy loss at 0.47%. Brightsurf News. https://www.brightsurf.com/news/1WRPWGDL/new-defense-strategy-for-federated-learning-capping-accuracy-loss-at-047.html
MLA:
"New defense strategy for federated learning, capping accuracy loss at 0.47%." Brightsurf News, Jul. 24 2025, https://www.brightsurf.com/news/1WRPWGDL/new-defense-strategy-for-federated-learning-capping-accuracy-loss-at-047.html.