Please use this identifier to cite or link to this item:
https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4913
Title: | Balancing Privacy and Performance in Federated Learning with Adaptive Differential Privacy and Secure Multi Party Computation |
Authors: | Attygalle, T.D |
Issue Date: | 24-Jun-2025 |
Abstract: | Federated Learning (FL) enables multiple clients to collaboratively train deep models without sharing their raw data, yet model updates can still leak sensitive information through inference attacks. Differential Privacy (DP) offers formal privacy guarantees by perturbing updates with noise, but naively adding noise at each client often degrades model utility and rapidly consumes the privacy budget. The adaptive DP-FL scheme of Fu et al. (2022) mitigates this with per-client adaptive clipping and validation-driven noise decay, but it still relies on each client injecting noise locally, which increases variance and requires trust in the aggregation server. Secure Multi-Party Computation (SMPC) via Shamir's secret sharing keeps individual updates hidden during aggregation, but it has not previously been combined with adaptive DP in FL. In this research, we propose a novel hybrid FL framework that integrates adaptive DP-FL with SMPC. Each client adaptively clips its gradients and secret-shares them across multiple non-colluding aggregation servers. The servers securely aggregate the shares, reconstruct only the global sum, and inject a single, globally calibrated Gaussian noise term to satisfy (ε, δ)-DP under the Rényi DP accounting of Mironov (2017). This design reduces the effective sensitivity by a factor of 1/K (for K clients), lowers total noise variance by O(K²), and simplifies privacy bookkeeping. We implement both the baseline adaptive DP-FL and our hybrid scheme in PyTorch using Opacus, and evaluate them on MNIST and Fashion-MNIST under privacy budgets ε = 0.5 and ε = 0.3. The hybrid method achieves up to 96.45% and 94.32% test accuracy on MNIST versus 95.89% and 94.27% for the baseline, and 79.27% and 78.42% on Fashion-MNIST versus 79.00% and 76.21%, respectively. Accuracy and loss curves against ε demonstrate tighter privacy-utility trade-offs.
Our results show that combining adaptive DP with SMPC offers a practical path to stronger privacy guarantees and improved model performance, making FL more viable for sensitive domains such as healthcare and finance. |
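To make the aggregation pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch (not the thesis's implementation): clipped scalar client updates are Shamir secret-shared across three non-colluding servers, any threshold of servers reconstructs only the global sum, and a single Gaussian noise term is added to that sum. The field modulus, fixed-point scale, example updates, clip norm `C`, and noise multiplier `sigma` are all illustrative placeholders.

```python
# Sketch of SMPC aggregation with a single global DP noise term (assumptions noted above).
import random

PRIME = 2**61 - 1          # prime field modulus for Shamir shares
SCALE = 10**6              # fixed-point scale for encoding float updates

def share(secret, n=3, t=2):
    """Split an integer secret into n Shamir shares with threshold t."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 to recover the shared secret."""
    total = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        total = (total + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Each client clips its (here scalar) update and secret-shares it.
updates = [0.8, -0.3, 0.5]                    # hypothetical clipped updates, K = 3
enc = [int(u * SCALE) % PRIME for u in updates]
client_shares = [share(e) for e in enc]

# Each server locally sums the shares it holds (Shamir shares are
# additively homomorphic), never seeing any individual update.
server_sums = [(x, sum(cs[s][1] for cs in client_shares) % PRIME)
               for s, x in enumerate([1, 2, 3])]

# Any t = 2 servers reconstruct ONLY the global sum.
agg = reconstruct(server_sums[:2])
if agg > PRIME // 2:                          # map field element back to signed value
    agg -= PRIME
global_sum = agg / SCALE                      # 0.8 - 0.3 + 0.5 = 1.0

# A single, globally calibrated Gaussian noise term (placeholder parameters).
C, sigma = 1.0, 2.0                           # hypothetical clip norm and noise multiplier
noisy_sum = global_sum + random.gauss(0.0, sigma * C)
```

Because noise is added once to the reconstructed sum rather than by each of the K clients, the aggregate carries one noise term instead of K, which is the variance saving the abstract attributes to the hybrid design.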
URI: | https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4913 |
Appears in Collections: | 2025 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
20000172-Attygalle TD - Tharindu.pdf | | 824.17 kB | Adobe PDF
Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.