Paper: Class-incremental learning: survey and performance evaluation
- Abstract: conducts an extensive evaluation of 12 existing incremental-learning methods;
- Introduction: early work mostly addresses task-incremental learning (task-IL); recent work focuses on class-incremental learning (class-IL); the key difference is whether the task ID is available at inference time;
2.1 Regularization-based solutions: minimize the impact of learning a new task on the weights important for previous tasks; exemplar-based methods: store a limited number of exemplars; task-recency bias: predictions are skewed toward the most recently learned classes
- Related Work:
3.1 Task-incremental learning: mask-based methods: prevent forgetting by applying masks to each parameter or to each layer's representation; Piggyback: learns masks over the network weights; PackNet: learns weights, then prunes; PathNet: uses evolutionary strategies to select paths through the network weights;
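The mask-based idea above (in the spirit of Piggyback) can be sketched as thresholding a learned real-valued mask into a binary gate over frozen backbone weights. This is a hypothetical minimal illustration, not the paper's implementation; the function name and threshold are assumptions:

```python
def apply_binary_mask(weights, real_valued_mask, threshold=0.0):
    # Piggyback-style gating (sketch): threshold a learned real-valued
    # mask into {0, 1} and multiply it elementwise into the frozen
    # backbone weights, yielding a task-specific subnetwork.
    return [w if m > threshold else 0.0
            for w, m in zip(weights, real_valued_mask)]
```

Because only the mask is learned per task while the backbone stays frozen, earlier tasks cannot be forgotten; the cost is that a task ID is needed at inference to pick the right mask, which is why these methods are task-IL rather than class-IL.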
3.2 Online: Online methods are based on streaming frameworks where learners are allowed to observe each example only once instead of iterating over a set of examples in a training session.
3.3 Variational continual learning: Variational continual learning is based on the Bayesian inference framework
3.4 Pseudo-rehearsal methods: to avoid storing exemplars and the privacy issues inherent in exemplar rehearsal, some methods learn to generate examples from previous tasks (generative replay)
- CLASS-INCREMENTAL LEARNING
4.1 Challenges of class-incremental learning
4.1.1 Weight drift, activation drift, inter-task confusion, task-recency bias
- Approaches:
5.1.1 Weight regularization
RWalk (combines EWC-style and Path Integral (PathInt/SI) importance estimates), MAS
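Weight-regularization methods such as EWC add a quadratic penalty on the drift of each parameter from its value after the previous task, weighted by a per-parameter importance (the Fisher information in EWC, accumulated gradient magnitudes in MAS). A minimal sketch, assuming flat parameter lists and a hypothetical regularization strength `lam`:

```python
def weight_reg_penalty(params, old_params, importance, lam):
    # EWC-style penalty (sketch): lam/2 * sum_i F_i * (theta_i - theta*_i)^2,
    # where F_i is the per-parameter importance estimated on past tasks
    # and theta*_i are the parameter values saved after the last task.
    return 0.5 * lam * sum(
        imp * (p - p_old) ** 2
        for p, p_old, imp in zip(params, old_params, importance)
    )
```

This term is simply added to the new task's loss, so parameters deemed important for old tasks are pulled back toward their saved values while unimportant ones stay free to adapt.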
5.1.2 Data regularization
LwF (Learning without Forgetting)
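LwF regularizes on data rather than weights: the old model's outputs on new-task inputs are distilled into the new model through a temperature-softened cross-entropy. A minimal pure-Python sketch; the temperature `T=2` is an illustrative choice, not a value fixed by the notes:

```python
import math

def softened_softmax(logits, T):
    # Temperature-scaled softmax: higher T spreads probability mass
    # over non-maximal classes, exposing "dark knowledge".
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def lwf_distillation_loss(new_logits, old_logits, T=2.0):
    # Cross-entropy between the old model's softened prediction
    # (the target) and the new model's softened prediction.
    p_old = softened_softmax(old_logits, T)
    p_new = softened_softmax(new_logits, T)
    return -sum(po * math.log(pn) for po, pn in zip(p_old, p_new))
```

When the new model reproduces the old model's outputs exactly, the loss reduces to the entropy of the old prediction, its minimum; any drift on old-class logits raises it, which is what discourages activation drift without storing any old data.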
5.1.3 Rehearsal approaches
(1) Memory Types
(2) Sampling strategies
(3) Task balancing
(4) Combining
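Among the sampling strategies above, a common choice is herding (popularized by iCaRL): greedily pick exemplars whose running mean best approximates the class mean in feature space. A pure-Python sketch over toy feature vectors; the function name and greedy tie-breaking are assumptions:

```python
def herding_selection(features, m):
    # Herding (sketch): greedily select m exemplar indices so that the
    # mean of the selected features tracks the full class mean.
    n, d = len(features), len(features[0])
    class_mean = [sum(f[j] for f in features) / n for j in range(d)]
    selected, running_sum = [], [0.0] * d
    for k in range(1, m + 1):
        best_idx, best_dist = None, float("inf")
        for i, f in enumerate(features):
            if i in selected:
                continue
            # Mean of the selection if f were added as the k-th exemplar.
            candidate = [(running_sum[j] + f[j]) / k for j in range(d)]
            dist = sum((class_mean[j] - candidate[j]) ** 2 for j in range(d))
            if dist < best_dist:
                best_idx, best_dist = i, dist
        selected.append(best_idx)
        running_sum = [running_sum[j] + features[best_idx][j] for j in range(d)]
    return selected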
5.1.4 Bias-correction approaches
EEIL (balanced training), BiC (learns a bias-correction layer on new-class logits), LUCIR (cosine normalization, margin-ranking loss), IL2M (rectifies predictions based on saved certainty statistics of predictions for previous-task classes)
- Relations:
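Of the bias-correction methods listed above, BiC is the simplest to sketch: two scalars (alpha, beta) are fit on a small validation split and applied only to the logits of the most recent task's classes, counteracting the task-recency bias. A hypothetical minimal sketch of the inference-time correction step:

```python
def bic_correct(logits, new_class_indices, alpha, beta):
    # BiC-style correction (sketch): rescale and shift only the logits
    # of classes from the newest task; old-class logits pass through.
    return [alpha * z + beta if i in new_class_indices else z
            for i, z in enumerate(logits)]
```

With alpha < 1 (and/or beta < 0), the typically inflated new-class logits are damped, so old and new classes compete on a more even footing at the final softmax.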
7 Experimental Setup
7.1 Datasets
7.2 Metrics
Average accuracy, forgetting, intransigence, confusion matrix
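The first two metrics can be computed from a lower-triangular accuracy matrix `acc[k][j]` (accuracy on task j after training on task k, for j ≤ k); a minimal sketch assuming that layout, with forgetting defined as the drop from a task's best past accuracy to its final accuracy:

```python
def average_accuracy(acc_matrix):
    # Mean accuracy over all tasks after the final training session.
    final_row = acc_matrix[-1]
    return sum(final_row) / len(final_row)

def average_forgetting(acc_matrix):
    # For each non-final task j: best accuracy it ever reached before
    # the last session, minus its accuracy after the last session.
    num_tasks = len(acc_matrix)
    drops = []
    for j in range(num_tasks - 1):
        best = max(acc_matrix[k][j] for k in range(j, num_tasks - 1))
        drops.append(best - acc_matrix[-1][j])
    return sum(drops) / len(drops)
```

For example, if task 0 scored 0.9 right after training but 0.6 at the end, it contributes a forgetting of 0.3; averaging these drops summarizes stability, while average accuracy summarizes the final plasticity-stability trade-off.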
7.3 Experiment