Solutions for Domain Shift
self-supervised transfer learning strategy
E.g.1. Gist: leverage a confidence model to automatically generate a set of reliable training samples (pseudo-labels) for the target domain.
1. train a model on the training dataset (source domain)
2. train a confidence-map model, taking the trained model's prediction as the input and the error map between that prediction and the ground truth as the label
3. run the trained model and the confidence-map model on the test dataset (target domain) to obtain predictions and predicted confidence maps; select the top-K samples (ranked by average confidence value) as pseudo-labeled data, and train a new model (the SSL model, trained in the target domain) with a confidence-map-weighted CE loss (see the sketch after this list)
4. iterate for N rounds to refine the SSL model
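A minimal PyTorch sketch of the selection and loss in steps 2–4; `source_model`, `conf_model`, and the unlabeled target batches are hypothetical stand-ins introduced for illustration, not part of the original description.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ce(logits, pseudo_labels, conf_map):
    """CE loss weighted per pixel by the predicted confidence map."""
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")  # (B, H, W)
    return (conf_map * ce).mean()

@torch.no_grad()
def select_top_k(source_model, conf_model, target_images, k):
    """Rank target-domain images by average predicted confidence and
    keep the top-K as pseudo-labeled training samples."""
    scored = []
    for img in target_images:              # unlabeled target-domain batches
        logits = source_model(img)         # prediction of the source model
        conf = conf_model(img, logits)     # predicted confidence map
        pseudo = logits.argmax(dim=1)      # pseudo-label from the prediction
        scored.append((conf.mean().item(), img, pseudo, conf))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]
```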
cycle-consistency learning strategy
E.g.1.
1. train a model on the training dataset (source domain)
2. train a CycleGAN that translates images between the source domain (S) and the target domain (T)
3. use the trained T->S generator to convert target-domain images into the source domain, then apply the source-domain model to the translated images to alleviate the domain shift (see the sketch after this list)
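A minimal PyTorch inference sketch of step 3; `gen_t2s` (the trained T->S generator) and `source_model` are hypothetical handles introduced for illustration.

```python
import torch

@torch.no_grad()
def predict_target(target_img, gen_t2s, source_model):
    """Translate a target-domain image into the source style with the
    trained T->S CycleGAN generator, then predict with the source model."""
    source_like = gen_t2s(target_img)   # T -> S image translation
    return source_model(source_like)    # inference on source-style input
```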
Solutions for Task Shift
self-supervised transfer learning strategy
Self-supervised learning (SSL) can effectively learn feature representations from unlabeled data via pre-training.
1. use the structural characteristics of the (unlabeled) data itself to create pseudo-labels.
2. pre-train on proxy tasks such as masked inpainting (masked image modeling), contrastive learning (e.g. Barlow Twins), or rotation prediction to learn contextual representations.
3. transfer the learned knowledge to the target (downstream) task, as in the sketch after this list.
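A minimal PyTorch sketch of the rotation proxy task from step 2, assuming a generic encoder; `backbone`, `rot_head`, `feat_dim`, and `num_classes` are hypothetical names, and any of the other proxy tasks could be swapped in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images):
    """Make 4 rotated copies (0/90/180/270 deg) of each image; the
    rotation index serves as the pseudo-label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def pretrain_step(backbone, rot_head, images, optimizer):
    """One self-supervised step: classify which rotation was applied."""
    x, y = rotate_batch(images)
    loss = F.cross_entropy(rot_head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Downstream transfer: reuse the pre-trained backbone and attach a new head,
# e.g. nn.Sequential(backbone, nn.Linear(feat_dim, num_classes)).
```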