Federated Learning (FL) allows multiple clients to collaboratively train a global model without sharing their raw data. While this decentralized setup is appealing for privacy-sensitive domains like healthcare and finance, it is not inherently secure: model updates can leak private information, and malicious clients can corrupt training.
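To make the setting concrete, the following is a minimal sketch of one FL training round in the FedAvg style: each client takes a local optimization step on its own data, and the server averages the resulting models weighted by local dataset size. All names (local_update, fedavg_round) and the toy least-squares task are illustrative, not part of any specific framework.

```python
# Minimal, illustrative FedAvg-style round: clients train locally, the
# server averages models weighted by the number of local samples.
import numpy as np

def local_update(global_model: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Toy local step: one gradient step of least-squares on this client's data."""
    X, y = data[:, :-1], data[:, -1]
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def fedavg_round(global_model: np.ndarray, client_datasets: list) -> np.ndarray:
    """Server aggregates the clients' local models, weighted by dataset size."""
    updates = [local_update(global_model, d) for d in client_datasets]
    weights = np.array([len(d) for d in client_datasets], dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.01 * rng.normal(size=50)
        clients.append(np.column_stack([X, y]))
    model = np.zeros(2)
    for _ in range(200):
        model = fedavg_round(model, clients)
    print("learned weights:", model)  # approaches [2, -1] without sharing raw data
```

Note that the server only ever receives model parameters, never raw data; the leakage and poisoning risks discussed above stem entirely from these exchanged updates.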
To tackle these challenges, two main strategies are used: Secure Aggregation, which protects privacy by hiding individual updates from the server, and Robust Aggregation, which filters out malicious or anomalous updates. However, the two goals can conflict: secure aggregation hides exactly the per-client information that robustness checks need to inspect, while robust aggregation typically requires access to individual updates and can therefore erode privacy guarantees.
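The tension can be illustrated with a simplified sketch of both strategies: additive pairwise masking as a stand-in for secure aggregation, and the coordinate-wise median as a stand-in for robust aggregation. Real secure-aggregation protocols use key agreement, finite-field arithmetic, and dropout recovery; the masking scheme and all names below are only illustrative.

```python
# Contrast of the two strategies: pairwise additive masking (secure aggregation)
# versus coordinate-wise median (robust aggregation). Highly simplified.
import numpy as np

def mask_updates(updates: list, seed: int = 0) -> list:
    """For each pair (i, j) with i < j, a shared random mask is added by client i
    and subtracted by client j, so masks cancel in the sum but hide each update."""
    rng = np.random.default_rng(seed)
    masked = [u.copy() for u in updates]
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            pairwise_mask = rng.normal(size=updates[0].shape)
            masked[i] += pairwise_mask
            masked[j] -= pairwise_mask
    return masked

updates = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
masked = mask_updates(updates)

# The server sees only masked vectors, yet their sum equals the true sum.
print(np.allclose(sum(masked), sum(updates)))  # True

# Robust aggregation: the coordinate-wise median ignores a poisoned update,
# but it needs the individual, unmasked vectors -- the source of the conflict.
poisoned = updates + [np.array([100.0, -100.0])]
print(np.median(np.stack(poisoned), axis=0))   # stays near [1, 2]
print(np.mean(np.stack(poisoned), axis=0))     # pulled far off by the outlier
```

The last two lines show why the mean needs a robustness mechanism at all, while the masking example shows why such mechanisms are hard to apply once updates are hidden.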
Moreover, most research focuses on model-level attacks, neglecting protocol-level threats like message delays or dropped updates, which are common in real-world, asynchronous networks.
This thesis aims to explore the privacy–robustness trade-off in FL, identify feasible security models, and design practical protocols that are both secure and robust. The work will combine theoretical analysis with prototype implementations, leveraging Secure Multi-Party Computation, other cryptographic techniques, and differential privacy.