
CVPR 2024 accepted paper, An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning


TsingZ0/FedKTL


Introduction

This is the implementation of our paper An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning (accepted by CVPR 2024).

Keywords: pre-trained generative model, knowledge transfer, federated learning, data heterogeneity, model heterogeneity

  • Poster
  • Slides (from another perspective: Generative Model-Assisted Collaborative Learning)

Takeaway: We introduce FedKTL, a federated Knowledge Transfer Loop (KTL) that (1) transfers common knowledge from a server-side pre-trained generator to clients' small models, regardless of the generator's pre-training dataset, and (2) shares task-specific knowledge among clients through federated learning.
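To make the loop concrete, here is a minimal, hypothetical sketch of one FedKTL-style round. All shapes and names (the prototype dimension, the fixed latent mapper, the stand-in generator) are illustrative assumptions, not the repository's code; in the paper the latent mapper is learned and the generator is a real pre-trained model.

```python
# Toy sketch of one FedKTL-style round (hypothetical shapes and names;
# the real system lives in flcore/servers and flcore/clients).
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, NUM_CLASSES, FEAT_DIM, LATENT_DIM = 4, 3, 16, 8

def client_prototypes(features, labels):
    """Each client uploads only one low-dimensional prototype per class,
    not model weights -- this is the upload-efficient part."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(NUM_CLASSES)])  # (C, FEAT_DIM)

# Simulated heterogeneous clients, each with private features and labels.
uploads = []
for _ in range(NUM_CLIENTS):
    feats = rng.normal(size=(60, FEAT_DIM))
    labels = np.arange(60) % NUM_CLASSES  # every class present
    uploads.append(client_prototypes(feats, labels))

# Server: aggregate prototypes, project them into the frozen generator's
# latent space (a learned mapping in the paper; a fixed matrix here),
# and generate one common-knowledge sample per class for clients to use.
server_protos = np.mean(uploads, axis=0)              # (C, FEAT_DIM)
to_latent = rng.normal(size=(FEAT_DIM, LATENT_DIM))   # stand-in mapper
latents = server_protos @ to_latent                   # (C, LATENT_DIM)
frozen_generator = lambda z: np.tanh(z @ rng.normal(size=(LATENT_DIM, 32)))
generated = frozen_generator(latents)                 # (C, 32) toy "images"
labels_for_clients = np.arange(NUM_CLASSES)
```

Each client can then mix the labeled generated samples into its local training, which is how task-agnostic generator knowledge reaches heterogeneous client models without uploading any model parameters.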

An example of our FedKTL for a 3-class classification task. Rounded and slender rectangles denote models and representations, respectively; dash-dotted and solid borders denote updating and frozen components, respectively; the segmented circle represents the ETF classifier.
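The ETF classifier mentioned in the caption refers to a simplex equiangular tight frame; its standard closed-form construction can be sketched as follows (the class count and feature dimension are illustrative, and this is not the repository's implementation):

```python
import numpy as np

def simplex_etf(num_classes: int, dim: int) -> np.ndarray:
    """Return a (num_classes, dim) simplex-ETF weight matrix: unit-norm
    rows whose pairwise cosine similarity is exactly -1/(num_classes-1)."""
    assert dim >= num_classes
    K = num_classes
    rng = np.random.default_rng(0)
    # Orthonormal basis U (dim x K) via QR, then the standard construction
    # sqrt(K/(K-1)) * U @ (I_K - (1/K) * 1 1^T).
    U, _ = np.linalg.qr(rng.normal(size=(dim, K)))
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M.T  # (K, dim)

W = simplex_etf(3, 16)   # 3 classes, as in the example above
cos = W @ W.T            # pairwise cosine similarities
```

Because the classifier weights are fixed to this maximally separated geometry, they need no training and no upload, which fits the upload-efficient design.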

Citation

@inproceedings{zhang2024upload,
  title={An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning},
  author={Zhang, Jianqing and Liu, Yang and Hua, Yang and Cao, Jian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

Datasets and Environments

Due to file size limitations, we only upload the statistics (config.json) of the Cifar10 dataset in the practical setting ($\beta=0.1$). Please refer to our popular repositories PFLlib and HtFLlib to generate all the datasets and create the required Python environment.
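The practical setting above refers to label-skewed client partitions drawn from a Dirichlet distribution with concentration $\beta$. A minimal sketch of that partition scheme (a hypothetical illustration, not PFLlib's code; client and sample counts are made up):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.1, seed=0):
    """Split sample indices across clients so that each class's samples
    are divided by proportions drawn from Dirichlet(beta); a smaller beta
    yields more heterogeneous (label-skewed) clients."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet([beta] * num_clients)
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_indices[cid].extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 500)  # stand-in for CIFAR-10-style labels
parts = dirichlet_partition(labels, num_clients=20, beta=0.1)
```

With $\beta=0.1$, most clients end up holding samples from only a few classes, which is the data heterogeneity the paper targets.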

System

  • main.py: System configurations.
  • total.sh: Command lines to run experiments for FedKTL with default hyperparameter settings.
  • flcore/:
    • clients/: The code on clients. See HtFLlib for baselines.
    • servers/: The code on servers. See HtFLlib for baselines.
      • serverktl_stable_diffusion.py: The code for using the pre-trained Stable Diffusion on the server.
      • serverktl_stylegan_3.py: The code for using the pre-trained StyleGAN3 on the server.
      • serverktl_stylegan_xl.py: The code for using the pre-trained StyleGAN-XL on the server.
    • trainmodel/: The code for some heterogeneous client models.
  • stable-diffusion/ (Other text-to-image models are also supported):
    • pipelines/: A customized pipeline that runs the Latent Diffusion Model independently of the other pipeline components.
    • v1.5/: The folder storing the pre-trained Stable Diffusion v1.5. Large model files are not included due to space limits; please download the primary safetensors files into the corresponding sub-folders from the Hugging Face link. For further instructions on running Stable Diffusion, see the documentation of the diffusers package.
  • stylegan/: The code for the pre-trained StyleGAN generators used on the server.
  • utils/:
    • data_utils.py: The code to read the dataset.
    • mem_utils.py: The code to record memory usage.
    • result_utils.py: The code to save results to files.

Training and Evaluation

All code is stored in ./system. Run the following commands:

cd ./system
sh run_me.sh
