Security and Privacy in Federated Learning with Non-IID Data: Knowledge Deception, Concealment, and Multi-Perspective Defenses
| Metadata Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | Shu, Tao | |
| dc.contributor.author | Xu, Hairuo | |
| dc.date.accessioned | 2025-11-25T15:28:09Z | |
| dc.date.available | 2025-11-25T15:28:09Z | |
| dc.date.issued | 2025-11-25 | |
| dc.identifier.uri | https://etd.auburn.edu/handle/10415/10063 | |
| dc.description.abstract | Federated learning (FL) enables collaborative model training across decentralized data silos without direct data sharing, providing significant privacy advantages. However, its distributed nature also makes it vulnerable to model poisoning attacks and privacy leakage. Existing defense and privacy mechanisms often rely on restrictive assumptions about attack models or data distributions, limiting their generalizability and effectiveness. A more comprehensive and adaptive approach is needed to ensure the robustness, trustworthiness, and privacy of FL systems. This dissertation addresses these challenges from three complementary perspectives: security, privacy, and adversarial robustness. The first work introduces MinVar, an optimization-based robust aggregation algorithm that dynamically adjusts client update weights to mitigate malicious contributions while maintaining model performance under non-IID data and colluding attackers. The second work develops an attack-model-agnostic defense framework that provides robust protection against diverse and unseen poisoning strategies without relying on predefined attack assumptions, thereby addressing the critical zero-day vulnerability problem in FL. The third work, Hide-and-Seek, presents a privacy-preserving data-sharing framework that enables dataset providers to control the learnability and privacy of released data, offering a practical alternative to all-or-nothing sharing. Finally, the Knowledge Deception Attack (KDA) introduces a reinforcement learning–guided adversarial strategy that manipulates the misclassification distribution of a target class, revealing a new and fine-grained vulnerability in FL systems. Collectively, these studies advance the understanding of secure, private, and trustworthy federated learning and provide practical insights for designing resilient and adaptive distributed AI systems. | en_US |
| dc.rights | EMBARGO_NOT_AUBURN | en_US |
| dc.subject | Computer Science and Software Engineering | en_US |
| dc.title | Security and Privacy in Federated Learning with Non-IID Data: Knowledge Deception, Concealment, and Multi-Perspective Defenses | en_US |
| dc.type | PhD Dissertation | en_US |
| dc.embargo.length | MONTHS_WITHHELD:12 | en_US |
| dc.embargo.status | EMBARGOED | en_US |
| dc.embargo.enddate | 2026-11-25 | en_US |
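
The abstract above describes MinVar only as an optimization-based robust aggregation that adjusts client update weights; the dissertation's actual objective and solver are not given in this record. As a minimal illustrative sketch of that general idea (not the author's algorithm), the snippet below picks aggregation weights on the probability simplex by projected gradient descent, shrinking the weighted spread of client updates around their weighted aggregate so that outlying (potentially poisoned) updates lose weight. The variance objective, the quadratic regularizer `tau`, and all names (`minvar_aggregate`, `project_to_simplex`, `lr`, `n_steps`) are assumptions made for this sketch.

```python
# Illustrative sketch only: a variance-minimizing robust aggregator in the
# spirit of (but not identical to) the MinVar idea described in the abstract.
import numpy as np


def project_to_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto the probability simplex
    {w : w_i >= 0, sum_i w_i = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)


def minvar_aggregate(updates: np.ndarray, n_steps: int = 300,
                     lr: float = 0.05, tau: float = 0.01):
    """Aggregate client updates (shape: n_clients x dim) with weights chosen
    to reduce sum_i w_i * ||u_i - agg(w)||^2, where agg(w) = sum_j w_j u_j.
    Returns the weighted aggregate and the final weights."""
    n = updates.shape[0]
    w = np.full(n, 1.0 / n)  # start from plain FedAvg weights
    for _ in range(n_steps):
        agg = w @ updates
        sq = np.sum((updates - agg) ** 2, axis=1)  # per-client deviation
        # The gradient of sum_i w_i ||u_i - agg(w)||^2 simplifies to sq,
        # since sum_i w_i (u_i - agg) = 0. The tau * ||w||^2 term is an
        # assumed regularizer keeping mass from collapsing onto one client.
        grad = sq + 2.0 * tau * w
        w = project_to_simplex(w - lr * grad)
    return w @ updates, w


if __name__ == "__main__":
    # Toy demo: nine benign updates plus one large-offset outlier.
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(9, 5))
    poisoned = honest.mean(axis=0) + 5.0
    updates = np.vstack([honest, poisoned])
    agg, w = minvar_aggregate(updates)
    print(np.round(w, 3))  # outlier's weight is driven toward zero
```

In this toy setting the projected-gradient loop reassigns nearly all weight to the nine consistent updates, which mirrors the abstract's claim that down-weighting anomalous client contributions can preserve model performance under attack; handling colluding attackers and non-IID heterogeneity, as the dissertation claims to do, would require more than this simple objective.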
