
Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on historical gradients

01.23.26 | Higher Education Press


Federated Learning (FL) allows for privacy-preserving model training by enabling clients to upload model gradients without exposing their personal data. However, the decentralized nature of FL introduces vulnerabilities to various attacks, such as poisoning attacks, where adversaries manipulate data or model updates to degrade performance. While current defenses often focus on detecting anomalous updates, they struggle with long-term attack dynamics, compromised privacy, and the underutilization of historical gradient data.

To address these problems, a research team led by Cong Wang published new research on 15 December 2025 in Frontiers of Computer Science, a journal co-published by Higher Education Press and Springer Nature.

The team proposed a new approach, Long-Short Historical Gradient Federated Learning (LSH-FL), which uses historical gradients to identify malicious model updates and mitigate the effects of poisoning attacks. The new defense framework is composed of two main components:

Perturbation Based on Short-Term Historical Gradients (P-SHG): This component introduces random noise into short-term gradients to disrupt the ability of attackers to hide within recent updates.

Defense Based on Long-Term Historical Gradients (D-LHG): This part aggregates long-term gradient trends to identify malicious clients and mitigate dynamic attack strategies.
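The P-SHG idea above can be illustrated with a minimal sketch: clip each short-term gradient to an L2 bound and add Gaussian noise, a standard differential-privacy-style mechanism. The function name, `clip_norm`, and `sigma` are illustrative assumptions; the paper's exact perturbation scheme may differ.

```python
import numpy as np

def perturb_short_term_gradients(shg, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip each short-term historical gradient to an L2 bound, then add
    Gaussian noise so attackers cannot hide in recent, unprotected updates."""
    rng = np.random.default_rng() if rng is None else rng
    perturbed = []
    for g in shg:
        norm = np.linalg.norm(g)
        g_clipped = g * min(1.0, clip_norm / (norm + 1e-12))
        noise = rng.normal(0.0, sigma * clip_norm, size=g.shape)
        perturbed.append(g_clipped + noise)
    return perturbed
```

Clipping bounds each gradient's sensitivity, which is what lets calibrated Gaussian noise provide a differential-privacy-style guarantee.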

LSH-FL operates in a loop similar to classic FL methods, with four main steps: model synchronization, local model training, local model upload with perturbation, and model aggregation. Clients perform local training to generate short-term historical gradients (SHG), which are then perturbed with the P-SHG algorithm to satisfy differential privacy requirements. The central server applies the D-LHG algorithm to verify and aggregate the gradients, discarding abnormal client updates. This approach improves attack resilience while preserving privacy and model accuracy.
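The server-side verification step can be sketched as follows, assuming a simple concrete instantiation: each client's long-term history is kept as an exponential moving average, and an update is rejected when its cosine similarity to that history falls below a threshold. The EMA, the cosine test, and the parameter names (`threshold`, `beta`) are assumptions for illustration, not the paper's exact D-LHG algorithm.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def d_lhg_aggregate(updates, long_term, threshold=0.0, beta=0.9):
    """Check each client's update against its long-term historical gradient
    and average only the accepted updates; histories are updated as an EMA."""
    accepted = []
    for cid, g in updates.items():
        hist = long_term.get(cid)
        if hist is None or cosine(g, hist) >= threshold:
            accepted.append(g)
            long_term[cid] = g if hist is None else beta * hist + (1 - beta) * g
    return np.mean(accepted, axis=0) if accepted else None
```

Because the comparison is against each client's own long-term trend rather than the current round alone, a client that behaves honestly for many rounds and then flips to a dynamic attack produces a sharp drop in similarity and is filtered out.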

In future work, the team also anticipates further enhancements to this defense strategy, including more sophisticated gradient sampling techniques and the integration of additional privacy-preserving mechanisms.

Frontiers of Computer Science

10.1007/s11704-025-40924-1

Experimental study

Not applicable

Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on historical gradients

15-Nov-2025

Keywords

Article Information

Contact Information

Rong Xie
Higher Education Press
xierong@hep.com.cn

Source

How to Cite This Article

APA:
Higher Education Press. (2026, January 23). Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on historical gradients. Brightsurf News. https://www.brightsurf.com/news/LVDE9YNL/enhancing-poisoning-attack-mitigation-in-federated-learning-through-perturbation-defense-complementarity-on-historical-gradients.html
MLA:
"Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on historical gradients." Brightsurf News, Jan. 23 2026, https://www.brightsurf.com/news/LVDE9YNL/enhancing-poisoning-attack-mitigation-in-federated-learning-through-perturbation-defense-complementarity-on-historical-gradients.html.