
Security and Privacy in Federated Learning with Non-IID Data: Knowledge Deception, Concealment, and Multi-Perspective Defenses


Metadata Field: Value (Language)
dc.contributor.advisor: Shu, Tao
dc.contributor.author: Xu, Hairuo
dc.date.accessioned: 2025-11-25T15:28:09Z
dc.date.available: 2025-11-25T15:28:09Z
dc.date.issued: 2025-11-25
dc.identifier.uri: https://etd.auburn.edu/handle/10415/10063
dc.description.abstract: Federated learning (FL) enables collaborative model training across decentralized data silos without direct data sharing, providing significant privacy advantages. However, its distributed nature also makes it vulnerable to model poisoning attacks and privacy leakage. Existing defense and privacy mechanisms often rely on restrictive assumptions about attack models or data distributions, limiting their generalizability and effectiveness. A more comprehensive and adaptive approach is needed to ensure the robustness, trustworthiness, and privacy of FL systems. This dissertation addresses these challenges from three complementary perspectives: security, privacy, and adversarial robustness. The first work introduces MinVar, an optimization-based robust aggregation algorithm that dynamically adjusts client update weights to mitigate malicious contributions while maintaining model performance under non-IID data and colluding attackers. The second work develops an attack-model-agnostic defense framework that provides robust protection against diverse and unseen poisoning strategies without relying on predefined attack assumptions, thereby addressing the critical zero-day vulnerability in FL. The third work, Hide-and-Seek, presents a privacy-preserving data sharing framework that enables dataset providers to control the learnability and privacy of released data, offering a practical alternative to all-or-nothing sharing. Finally, the Knowledge Deception Attack (KDA) introduces a reinforcement learning–guided adversarial strategy that manipulates the misclassification distribution of a target class, revealing a new and fine-grained vulnerability in FL systems. Collectively, these studies advance the understanding of secure, private, and trustworthy federated learning and provide practical insights for designing resilient and adaptive distributed AI systems. (en_US)
dc.rights: EMBARGO_NOT_AUBURN (en_US)
dc.subject: Computer Science and Software Engineering (en_US)
dc.title: Security and Privacy in Federated Learning with Non-IID Data: Knowledge Deception, Concealment, and Multi-Perspective Defenses (en_US)
dc.type: PhD Dissertation (en_US)
dc.embargo.length: MONTHS_WITHHELD:12 (en_US)
dc.embargo.status: EMBARGOED (en_US)
dc.embargo.enddate: 2026-11-25 (en_US)
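
The abstract describes MinVar only at a high level, as an optimization-based aggregation rule that re-weights client updates to suppress malicious contributions. The sketch below is a minimal illustration of that general idea, not the dissertation's actual MinVar algorithm: it down-weights updates that lie far from the coordinate-wise median, a standard robust-aggregation heuristic. The function name robust_aggregate, the temperature parameter, and the median reference point are all illustrative assumptions.

    import numpy as np

    def robust_aggregate(updates, temperature=1.0):
        """Illustrative robust aggregation sketch (not the dissertation's MinVar).

        updates: (num_clients, num_params) array of client model updates.
        Returns (aggregated_update, per_client_weights).
        """
        updates = np.asarray(updates, dtype=float)
        # The coordinate-wise median is a robust reference point when
        # only a minority of clients collude.
        median = np.median(updates, axis=0)
        # Distance of each client's update from the reference.
        dists = np.linalg.norm(updates - median, axis=1)
        # Softmax over negative scaled distances: outlying updates receive
        # exponentially smaller weight; temperature controls the sharpness.
        scale = temperature * (dists.std() + 1e-12)
        scores = -dists / scale
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ updates, weights

    # Toy usage: eight benign clients near zero, two colluding outliers.
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(8, 5))
    poisoned = np.full((2, 5), 5.0)
    aggregate, weights = robust_aggregate(np.vstack([honest, poisoned]))
    print(np.round(weights, 3))  # the two poisoned clients are sharply down-weighted

Under non-IID data, benign updates themselves spread apart, which is exactly the regime where a fixed heuristic like this degrades and where the adaptive, optimization-based weighting described in the abstract is motivated.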
