Federated Learning with Non-IID Data

1. Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning

Challenge: simple model aggregation ignores the incompatibility of local knowledge and induces knowledge forgetting in the global model.
Aggregation without fine-tuning degrades performance.
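For reference, a minimal sketch of the plain FedAvg-style parameter averaging that this critique targets (PyTorch; the function name and the use of state_dicts are illustrative assumptions, not the paper's code):

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Plain FedAvg: weighted average of client state_dicts,
    proportional to local dataset sizes. This is the aggregation
    step that, without server-side fine-tuning, blends incompatible
    local knowledge."""
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# Hypothetical usage:
# new_state = fedavg_aggregate([m.state_dict() for m in client_models],
#                              [len(d) for d in client_datasets])
```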

2. Federated Evaluation and Tuning for On-Device Personalization: System Design and Application

From a systems perspective, the proposed system can be used in commercial applications.

Ground-truth generation is hard in specific scenarios, such as ASR.

3. Semi-supervised learning based on generative adversarial network: a comparison between good GAN and bad GAN approach
GANs can be applied to semi-supervised learning (SSL).
However, recent SSL work has largely not adopted GANs,
which suggests GAN-based SSL has unresolved issues.
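For context, a minimal sketch of the K+1-class discriminator objective behind the "good GAN" line of SSL work (in the spirit of Salimans et al., 2016); the function and tensor names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ssl_gan_d_loss(logits_lab, labels, logits_unl, logits_fake):
    """K+1-class SSL-GAN discriminator loss.
    Each logits tensor has shape (batch, K + 1); index K is the 'fake' class.
    """
    # Labeled real data: standard cross-entropy over the K real classes.
    loss_lab = F.cross_entropy(logits_lab[:, :-1], labels)
    # Unlabeled real data: push probability mass onto the K real classes,
    # i.e. maximize log p(real | x) = logsumexp(real) - logsumexp(all).
    loss_unl = -(torch.logsumexp(logits_unl[:, :-1], dim=1)
                 - torch.logsumexp(logits_unl, dim=1)).mean()
    # Generated samples: classify as the fake class (index K).
    fake_cls = torch.full((logits_fake.size(0),), logits_fake.size(1) - 1,
                          dtype=torch.long, device=logits_fake.device)
    loss_fake = F.cross_entropy(logits_fake, fake_cls)
    return loss_lab + loss_unl + loss_fake
```

The known pain points of this setup (training instability, and the generator/classifier objective conflict debated in the good-GAN vs. bad-GAN papers) are a plausible reason recent SSL work moved away from it.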

Self-supervised learning is a promising direction for solving the FSSL (federated semi-supervised learning) problem.

4. Rethinking the Value of Labels for Improving Class-Imbalanced Learning

We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs “label bias” in the classifier, where the decision boundary can be drastically altered by the majority classes.

Imbalanced Learning with Unlabeled Data
With the help of unlabeled data, we show that while all classes can obtain certain improvements, the minority classes tend to exhibit larger gains.
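The semi-supervised part of the paper is essentially self-training: pseudo-label the unlabeled pool with the current model, then retrain on the union. A minimal sketch of that pseudo-labeling step (the threshold and all names are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, unlabeled_x, threshold=0.95):
    """Keep only unlabeled samples the model predicts confidently,
    and return them with their predicted (pseudo) labels."""
    probs = F.softmax(model(unlabeled_x), dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return unlabeled_x[mask], labels[mask]
```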

5. Virtual Homogeneity Learning: Defending Against Data Heterogeneity in Federated Learning

Is it possible to defend against data heterogeneity in FL systems by sharing data that contains no private information?
The key challenge of VHL is how to generate the virtual dataset so that it benefits model performance; VHL then uses domain adaptation (DA) to mitigate the distribution shift between the virtual and real data.
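To make this concrete, a minimal sketch of building a shared virtual dataset from fixed random noise: every client uses the same seed, so the virtual data is identical (homogeneous) across clients and carries no private information. This simplifies VHL, which samples from an untrained generator; all names and shapes here are assumptions:

```python
import torch

def make_virtual_dataset(num_classes, per_class, shape=(3, 32, 32), seed=0):
    """One fixed random 'anchor' per class, plus small noise around it.
    Identical on every client because the seed is shared."""
    g = torch.Generator().manual_seed(seed)
    xs, ys = [], []
    for c in range(num_classes):
        anchor = torch.randn(shape, generator=g)            # class-specific anchor
        noise = 0.1 * torch.randn(per_class, *shape, generator=g)
        xs.append(anchor + noise)
        ys.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)
```

During local training, a DA-style feature-alignment loss on this shared virtual data is what counteracts the client-specific distribution shift.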

6. Spherical Space Domain Adaptation with Robust Pseudo-label Loss

7. Rethinking Pseudo Labels for Semi-supervised Object Detection

8. Rethinking Data Augmentation: Self-Supervision and Self-Distillation

9. Rethinking Re-Sampling in Imbalanced Semi-Supervised Learning

10. ABC: Auxiliary Balanced Classifier for Class-Imbalanced Semi-Supervised Learning

11. Disentangling Label Distribution for Long-tailed Visual Recognition

12. CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning

13. Rethinking Pre-training and Self-training

Three similar papers that generate pseudo features to train federated models:
(1) No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
(2) Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
(3) Data-Free Knowledge Distillation for Heterogeneous Federated Learning

Similarities:
1. All three methods use virtual representations (pseudo features or pseudo samples) to train the federated model.
2. All three methods target the non-IID problem.
3. (1) and (2) perform training on the server: (1) retrains the classifier, while (2) fine-tunes the aggregated global model.

Differences:
1. (2) and (3) use a GAN to generate pseudo samples, with the generators updated on the server. However, (2) uses the pseudo samples to fine-tune the aggregated global model, with hard-sample mining, label sampling, and a class-level ensemble to optimize the pseudo-sampling steps, while (3) uses pseudo samples to regularize the local training steps and still uses FedAvg to aggregate the local models. (1) uses a Gaussian mixture distribution to generate virtual features and retrain (debias) the classifier; a sketch of this calibration step is given below.
2. (3) shares the prediction layer, which differs from previous works: many works share the feature-extraction layers and keep the prediction layer local. The purpose is to reduce the communication workload.
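To make (1)'s calibration step concrete, a minimal sketch: sample virtual features from per-class Gaussians (whose means and covariances are assumed to have been aggregated from client feature statistics beforehand) and retrain only the classifier head. The names, optimizer, and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def calibrate_classifier(classifier, class_means, class_covs,
                         samples_per_class=100, steps=10, lr=0.01):
    """Retrain the classifier head on virtual features drawn from
    per-class Gaussians (CCVR-style debiasing). class_covs[c] must be
    positive-definite, e.g. a regularized empirical covariance."""
    feats, labels = [], []
    for c, (mu, cov) in enumerate(zip(class_means, class_covs)):
        dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
        feats.append(dist.sample((samples_per_class,)))
        labels.append(torch.full((samples_per_class,), c, dtype=torch.long))
    feats, labels = torch.cat(feats), torch.cat(labels)

    # Full-batch gradient steps on the virtual features only.
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(classifier(feats), labels)
        loss.backward()
        opt.step()
    return classifier
```

Here `classifier` is assumed to be the final linear layer of the global model, e.g. `torch.nn.Linear(feat_dim, num_classes)`.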
