Federated learning algorithm – Model projection based Federated Learning for Non-IID data (under review)
This is the PyTorch implementation of our paper "Jie Du, Wei Li, Peng Liu, et al., Model projection based Federated Learning for Non-IID data".
The experimental results (test accuracy %) on the HAM10000 dataset.
| Methods | | Two-client | Four-client |
|---|---|---|---|
| Centralized learning | | 68.94±0.06 | 68.94±0.06 |
| Federated learning | FedAvg (AISTATS-2017) | 65.43±0.29 | 59.22±0.77 |
| | FedProx (MLSys-2020) | 66.25±0.29 | 59.22±0.29 |
| | SCAFFOLD (ICML-2020) | 67.08±1.34 | 64.39±0.29 |
| | FedaGrac (TPDS-2023) | 67.70±0.88 | 64.18±0.59 |
| | FedReg (ICLR-2022) | 66.67±1.05 | 65.84±0.05 |
| | Our FedMoP | 69.36±0.30 (↑1.66) | 69.57±0.51 (↑3.73) |
The experimental results (test accuracy %) on the COVID-19 and PBC datasets.
| Methods | | Non-uniform: COVID-19 | Non-uniform: PBC | One-class: COVID-19 | One-class: PBC |
|---|---|---|---|---|---|
| Centralized learning | | 68.60±0.05 | 97.45±0.03 | 68.60±0.05 | 97.45±0.03 |
| Federated learning | FedAvg (AISTATS-2017) | 66.31±0.24 | 96.74±0.06 | 67.05±0.16 | 87.22±0.29 |
| | FedProx (MLSys-2020) | 66.41±0.32 | 96.78±0.10 | 66.89±0.32 | 87.88±0.15 |
| | SCAFFOLD (ICML-2020) | 68.44±0.12 | 96.83±0.02 | 66.76±0.65 | 69.79±0.51 |
| | FedaGrac (TPDS-2023) | 68.60±0.49 | 96.98±0.05 | 68.18±0.25 | 88.80±0.16 |
| | FedReg (ICLR-2022) | 68.09±1.22 | 96.81±0.04 | 69.66±0.62 | 91.93±0.36 |
| | Our FedMoP | 70.30±0.30 (↑1.70) | 97.48±0.08 (↑0.50) | 70.88±0.40 (↑1.22) | 97.24±0.07 (↑5.31) |
Usage
Requirements:
- Ubuntu Server == 20.04.4 LTS
- CUDA == 11.6
- numpy == 1.23.1
- Pillow == 9.2.0
- python == 3.8.0
- quadprog == 0.1.11
- torch == 1.12.0
- torchvision == 0.13.0
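For convenience, the Python package pins above can be collected into a `requirements.txt` for pip. This is only a sketch; the repository may organize its dependencies differently, and the CUDA 11.6 builds of torch/torchvision may require an extra index URL from the PyTorch website.

```text
numpy==1.23.1
Pillow==9.2.0
quadprog==0.1.11
torch==1.12.0
torchvision==0.13.0
```

Install with `pip install -r requirements.txt` inside a Python 3.8 environment.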
Datasets:
- HAM10k: Please manually download the HAM10000 dataset from the official website, unzip it, and place it in `./dataset/ham10k`.
- COVID-19: Please manually download the COVID-19 five-class dataset from the official website, unzip it, and place it in `./dataset/covid19`.
- PBC: Please manually download the PBC dataset from the official website, unzip it, and place it in `./dataset/pbc`.
Split the dataset with feature/label distribution skew:
You can use `split_dataset.py` to partition the dataset under feature distribution skew or label distribution skew, and then save the split datasets for model training.
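As a rough illustration of label distribution skew, the sketch below deals whole classes to clients round-robin, so each client only ever sees a subset of the classes. The function name `split_by_label_skew` is hypothetical; the actual partitioning logic lives in `split_dataset.py` and may differ.

```python
import numpy as np

def split_by_label_skew(labels, n_clients, seed=0):
    """Toy label-distribution-skew partition (hypothetical helper;
    split_dataset.py may implement this differently)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    # Shuffle the class order so the class-to-client assignment varies
    # with the seed, then deal whole classes to clients round-robin.
    classes = rng.permutation(np.unique(labels))
    client_indices = [[] for _ in range(n_clients)]
    for k, cls in enumerate(classes):
        client_indices[k % n_clients].extend(np.where(labels == cls)[0].tolist())
    return [sorted(ix) for ix in client_indices]
```

With as many clients as classes, this reduces to a "one-class" split of the kind reported in the COVID-19/PBC table; with fewer clients, each client holds several complete classes and none of the others.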
Training:
`main.py` is the main file for running the federated experiments. The experiments can be run by:

```shell
cd FedMoP
python main.py
```
Website: FedMoP.
For more details, please refer to the paper Model projection based Federated Learning for Non-IID data.