key point: how to deal with data that is not directly related to the task at hand?
two different types:
- different domain different task
- different domain same task
Definition: source data - not directly related to the task
target data - related to the task (but usually available only in small amounts)
so how do we deal with it?
idea: 1. train a model on the source data (this acts like parameter initialization)
2. fine-tune the model on the target data
problem: the limited target data leads to overfitting
→ how to avoid it?
- conservative training: add regularization so the fine-tuned model does not drift too far from the source model
- layer transfer: copy some layers from the source model and train only the remaining layers on the target data
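The "not too different" idea above can be sketched as a penalty on the distance between the fine-tuned weights and the source weights. A minimal sketch in plain Python, assuming a toy 1-layer linear model; the names (`regularized_loss`, `lam`) are illustrative, not from any library:

```python
def regularized_loss(weights, source_weights, data, lam):
    """Target-task loss plus an L2 penalty that keeps `weights`
    close to the source-trained `source_weights`."""
    def predict(x):
        # toy linear model: prediction = sum(w_i * x_i)
        return sum(w * xi for w, xi in zip(weights, x))

    # squared error on the (small) target data set
    task = sum((predict(x) - y) ** 2 for x, y in data)
    # regularization term: distance from the source model's weights
    penalty = lam * sum((w - ws) ** 2
                        for w, ws in zip(weights, source_weights))
    return task + penalty
```

When `weights == source_weights` the penalty is zero, so larger `lam` biases fine-tuning toward staying near the source model, which is the point of conservative training.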
Question: what if there is still no obvious improvement?
multitask learning
(compared to fine-tuning) it cares about performance on both the target and the source domains
- fancy models: typically shared lower layers plus task-specific output layers
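The shared-layers idea can be sketched as two task losses that both flow through one shared feature map. A toy sketch in plain Python; the shared layer and heads are made-up fixed functions standing in for trainable networks:

```python
def shared_layer(x):
    # hypothetical shared representation used by BOTH tasks
    return [x[0] + x[1], x[0] - x[1]]

def head_a(f):  # task-A-specific output layer (toy)
    return f[0] * 0.5

def head_b(f):  # task-B-specific output layer (toy)
    return f[1] * 2.0

def multitask_loss(examples_a, examples_b, w_a=1.0, w_b=1.0):
    """Weighted sum of both tasks' losses; because both go through
    shared_layer, training it must serve the two tasks at once."""
    loss_a = sum((head_a(shared_layer(x)) - y) ** 2 for x, y in examples_a)
    loss_b = sum((head_b(shared_layer(x)) - y) ** 2 for x, y in examples_b)
    return w_a * loss_a + w_b * loss_b
```

Minimizing this joint objective is what makes multitask learning attend to both domains, unlike fine-tuning, which only optimizes the target loss.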
Zero-shot learning
key point: how can a model recognize an image of a class it has never seen?
solution:
build a connection between images and attributes, and another between attributes and classes (the attributes act as a bridge)
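The attribute-as-bridge idea can be sketched as nearest-attribute matching. A minimal sketch in plain Python; the attribute table and the "predicted attributes" are invented for illustration (a real system would predict attributes with a trained model):

```python
# hypothetical per-class attribute table (the attribute→class side of the bridge)
CLASS_ATTRIBUTES = {
    "zebra": {"stripes": 1, "four_legs": 1, "wings": 0},
    "horse": {"stripes": 0, "four_legs": 1, "wings": 0},
    "eagle": {"stripes": 0, "four_legs": 0, "wings": 1},
}

def classify(predicted_attrs):
    """Pick the class whose attribute vector is closest (L1 distance)
    to the attributes predicted from the image (image→attribute side)."""
    def dist(attrs):
        return sum(abs(attrs[k] - predicted_attrs[k]) for k in attrs)
    return min(CLASS_ATTRIBUTES, key=lambda c: dist(CLASS_ATTRIBUTES[c]))
```

Because classification goes through the attribute table, the model can name a class (e.g. "zebra") even if no zebra image appeared in training, as long as its attributes are known.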
Domain-adversarial training
key point: analyzing the feature extractor's output shows that, after the convolutions, the two kinds of data form two separate clusters, so feeding both into the same label predictor does not work well. Solution: force the post-convolution features into a single cluster, which is where the domain classifier comes in. The domain classifier's job is to tell which database an image came from, so the main point is to minimize the label predictor's loss while maximizing the domain classifier's loss (the feature extractor learns domain-invariant features by fooling the domain classifier).
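The two opposing objectives above can be sketched in plain Python. The numbers and the toy "gradient" are illustrative, not a real backprop engine; the sign flip is the usual gradient-reversal trick used to train such a setup:

```python
def feature_extractor_objective(label_loss, domain_loss, lam=1.0):
    """What the feature extractor minimizes: the label predictor's
    loss MINUS the domain classifier's loss, i.e. it tries to make
    labels predictable but domains indistinguishable."""
    return label_loss - lam * domain_loss

def gradient_reversal(grad_from_domain_classifier, lam=1.0):
    """Gradient reversal layer: identity in the forward pass; in the
    backward pass the domain classifier's gradient is flipped in sign
    (and scaled by lam) before reaching the feature extractor."""
    return [-lam * g for g in grad_from_domain_classifier]
```

The domain classifier itself still minimizes its own loss; only the gradient that flows back into the feature extractor is reversed, which is what pushes the two domains' features into a single cluster.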