
Differentially Private FL

This repository collects related papers and corresponding code on DP-based FL.

Code

Tip: the code in this repository is my personal implementation. If you find anything inaccurate, please contact me; discussion is always welcome. The FL code in this repository is based on this repository. I hope you like and support it, and you are welcome to submit PRs to improve the repository.

Note that, to ensure each client is selected a fixed number of times (so the privacy budget can be computed each time a client is selected), this code uses round-robin client selection, i.e., clients are selected in sequential order; see the sketch below.
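A minimal sketch of such a schedule (names like `num_clients` and `clients_per_round` are illustrative, not the repository's actual identifiers):

```python
# Round-robin client selection: every client is picked the same number
# of times, so the per-client privacy accounting is identical.
def round_robin_selection(num_clients, clients_per_round, num_rounds):
    schedule = []
    for t in range(num_rounds):
        start = (t * clients_per_round) % num_clients
        selected = [(start + i) % num_clients for i in range(clients_per_round)]
        schedule.append(selected)
    return schedule

# Example: 10 clients, 2 per round -> client 0 reappears in round 5, 10, ...
print(round_robin_selection(num_clients=10, clients_per_round=2, num_rounds=5))
```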

Important note: the number of local update epochs in this code is always 1; please do not change it. Once the number of local iterations increases, the sensitivity in DP must be recalculated, its upper bound becomes very large, and the privacy budget consumed per round grows substantially. Please keep the setting local epoch = 1.
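To illustrate why the epoch count matters, the sketch below uses the sensitivity bound $2C/m$ (clipping bound $C$, local dataset size $m$) that is common in the DP-FL literature for a single local epoch; it is an assumption for illustration, not necessarily this repository's exact derivation:

```python
import math

def gaussian_sigma(clip, m, epsilon, delta):
    # Assumed L2 sensitivity bound for one local epoch: 2 * C / m.
    # With E > 1 local epochs this bound grows, which is why the code
    # fixes local epoch = 1.
    sensitivity = 2 * clip / m
    # Classic Gaussian mechanism calibration: sqrt(2 ln(1.25/delta)) * S / eps.
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

print(gaussian_sigma(clip=10, m=600, epsilon=10, delta=1e-5))
```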

Parameter List

Datasets: MNIST, CIFAR-10, FEMNIST, Fashion-MNIST, Shakespeare.

Models: CNN, MLP, and LSTM (for Shakespeare).

DP Mechanisms: Laplace, Gaussian (Simple Composition), Gaussian (Moments Accountant).

DP Parameters: $\epsilon$ and $\delta$.

DP Clip: in DP-based FL we usually clip the gradients during training, and the clipping bound is an important parameter for computing the sensitivity.
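For concreteness, a generic L2-norm clipping sketch in PyTorch (not the repository's exact code):

```python
import torch

def clip_by_l2_norm(grad: torch.Tensor, clip: float) -> torch.Tensor:
    # Scale the gradient so its L2 norm is at most `clip`; gradients
    # already inside the ball are left unchanged.
    factor = (clip / (grad.norm(2) + 1e-12)).clamp(max=1.0)
    return grad * factor

g = torch.randn(1000) * 5
print(clip_by_l2_norm(g, clip=10.0).norm(2))  # at most 10
```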

Example Results

Experiment code:

pip3 install -r requirements.txt
bash run.sh

Plotting code:

python3 draw.py

Gaussian (Simple Composition)

MNIST (result figure omitted)

Gaussian (Moments Accountant)

MNIST (result figure omitted)

Laplace

MNIST (result figure omitted)

No DP

python main.py --dataset mnist --model cnn --dp_mechanism no_dp

Gaussian Mechanism

Simple Composition

Based on Simple Composition in DP.

In other words, if a client's privacy budget is $\epsilon$ and the client is selected $T$ times, the budget for each noise addition is $\epsilon / T$.

python main.py --dataset mnist --model cnn --dp_mechanism Gaussian --dp_epsilon 10 --dp_delta 1e-5 --dp_clip 10
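A small sketch of this bookkeeping (splitting $\delta$ evenly as well is one valid choice under basic composition; the text above only specifies the $\epsilon$ split):

```python
def per_round_budget(epsilon: float, delta: float, T: int):
    # Simple composition: a client selected T times may spend
    # epsilon / T (and here, by assumption, delta / T) per selection.
    return epsilon / T, delta / T

print(per_round_budget(epsilon=10.0, delta=1e-5, T=100))  # (0.1, 1e-07)
```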

Moments Accountant

We use TensorFlow Privacy to calculate the noise scale for the Moments Accountant (MA) with the Gaussian mechanism.

python main.py --dataset mnist --model cnn --dp_mechanism MA --dp_epsilon 10 --dp_delta 1e-5 --dp_clip 10 --dp_sample 0.01

See the following paper for the detailed mechanism:

Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.
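As a rough illustration of the TensorFlow Privacy accountant mentioned above, the snippet below queries the $\epsilon$ spent for a given noise multiplier and sampling rate; the import path has moved between tensorflow-privacy versions, and all numeric values are placeholders:

```python
# Requires: pip install tensorflow-privacy
from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy_lib import (
    compute_dp_sgd_privacy,
)

# Sampling rate q = batch_size / n = 0.01, mirroring --dp_sample 0.01.
eps, opt_order = compute_dp_sgd_privacy(
    n=60000,              # dataset size (placeholder)
    batch_size=600,       # gives q = 0.01
    noise_multiplier=1.1, # placeholder sigma
    epochs=10,            # placeholder number of passes
    delta=1e-5,
)
print(eps, opt_order)
```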

Laplace Mechanism

Based on Simple Composition in DP.

python main.py --dataset mnist --model cnn --dp_mechanism Laplace --dp_epsilon 30 --dp_clip 50
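A hedged sketch of Laplace noising, reusing the illustrative $2C/m$ sensitivity bound from above (the Laplace mechanism needs no $\delta$, which is why the command omits --dp_delta):

```python
import numpy as np

def laplace_noise(shape, clip, m, epsilon):
    # Laplace mechanism: scale b = sensitivity / epsilon, with the
    # assumed sensitivity bound 2 * clip / m for one local epoch.
    sensitivity = 2 * clip / m
    return np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=shape)

print(laplace_noise((3,), clip=50, m=600, epsilon=30))
```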

Papers

Remark

The new version uses Opacus for per-sample gradient clipping, which limits the norm of the gradient computed from each individual sample.
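A minimal sketch of wiring up Opacus's PrivacyEngine (Opacus >= 1.0 API; the model, optimizer, and loader are placeholders):

```python
# Requires: pip install opacus
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=16,
)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # sigma (placeholder)
    max_grad_norm=10.0,    # per-sample clip bound (cf. --dp_clip)
)
```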

This code sets the number of local training epochs to 1, and the batch size equals the client's local dataset size. Because Opacus stores the gradient of every sample during training, GPU memory usage is very high; this can be mitigated with the --serial and --serial_bs parameters.

These two parameters introduce a smaller physical batch size behind the logical one: training takes longer, but the training itself and the DP noise addition are logically unchanged. The main reason for this design is to reduce memory without violating the theory of DP noise addition.
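The idea can be illustrated with plain gradient accumulation (generic PyTorch, not the repository's --serial implementation):

```python
import torch
from torch import nn

def backward_in_chunks(model, loss_fn, data, target, serial_bs):
    # Process the logical batch in physical chunks of `serial_bs` samples
    # and accumulate gradients. Weighting each chunk's mean loss by
    # chunk_size / n makes the summed gradients equal a single full-batch
    # backward pass, so only memory and time change, not the result.
    model.zero_grad()
    n = data.shape[0]
    for i in range(0, n, serial_bs):
        xb, yb = data[i:i + serial_bs], target[i:i + serial_bs]
        loss = loss_fn(model(xb), yb) * (xb.shape[0] / n)
        loss.backward()  # .grad fields accumulate across chunks

model = nn.Linear(10, 2)
backward_in_chunks(model, nn.CrossEntropyLoss(),
                   torch.randn(64, 10), torch.randint(0, 2, (64,)),
                   serial_bs=8)
```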

The Dev branch is still being improved; new DPFL algorithms, including MA, F-DP, and Shuffle, are implemented there. Suggestions and contributions are welcome!

Citation

Consider citing the following papers:

[1] W. Yang et al., "Gain Without Pain: Offsetting DP-Injected Noises Stealthily in Cross-Device Federated Learning," in IEEE Internet of Things Journal, vol. 9, no. 22, pp. 22147-22157, Nov. 2022, doi: 10.1109/JIOT.2021.3102030.

@ARTICLE{9504601,
  author={Yang, Wenzhuo and Zhou, Yipeng and Hu, Miao and Wu, Di and Zheng, Xi and Wang, Jessie Hui and Guo, Song and Li, Chao},
  journal={IEEE Internet of Things Journal}, 
  title={Gain Without Pain: Offsetting DP-Injected Noises Stealthily in Cross-Device Federated Learning}, 
  year={2022},
  volume={9},
  number={22},
  pages={22147-22157},
  keywords={Privacy;Internet of Things;Training;Distortion;Computational modeling;Machine learning;Differential privacy;Differential privacy (DP);federated learning (FL);secretly offsetting},
  doi={10.1109/JIOT.2021.3102030}}

[2] M. Hu et al., "AutoFL: A Bayesian Game Approach for Autonomous Client Participation in Federated Edge Learning," in IEEE Transactions on Mobile Computing, vol. 23, no. 1, pp. 194-208, 2024, doi: 10.1109/TMC.2022.3227014.

@ARTICLE{9971780,
  author={Hu, Miao and Yang, Wenzhuo and Luo, Zhenxiao and Liu, Xuezheng and Zhou, Yipeng and Chen, Xu and Wu, Di},
  journal={IEEE Transactions on Mobile Computing}, 
  title={AutoFL: A Bayesian Game Approach for Autonomous Client Participation in Federated Edge Learning}, 
  year={2024},
  volume={23},
  number={1},
  pages={194-208},
  keywords={Training;Games;Costs;Computational modeling;Servers;Bayes methods;Task analysis;Bayesian game;client participation;federated edge learning;incomplete information},
  doi={10.1109/TMC.2022.3227014}}

[3] Y. Zhou et al., "Optimizing the Numbers of Queries and Replies in Convex Federated Learning with Differential Privacy," in IEEE Transactions on Dependable and Secure Computing, vol. 20, no. 6, pp. 4823-4837, Nov.-Dec. 2023, doi: 10.1109/TDSC.2023.3234599.

[4] Y. Zhou et al., "Exploring the Practicality of Differentially Private Federated Learning: A Local Iteration Tuning Approach," in IEEE Transactions on Dependable and Secure Computing, doi: 10.1109/TDSC.2023.3325889.

[5] Y. Yang, M. Hu, Y. Zhou, X. Liu and D. Wu, "CSRA: Robust Incentive Mechanism Design for Differentially Private Federated Learning," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3329441.