1. It began with FaceNet.
2. An important improvement: image-based formulation, Ding et al.
3. Improvements to sample selection:
1) hard samples: hard positive and hard negative (In Defense of the Triplet Loss for Person Re-Identification)
2) hard negative (CVPR 2016, vehicle retrieval)
3) minimize the distance between samples sharing the same ID (CVPR 2016, Person Re-Identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function)
4) multiple negatives (NIPS 2016)
5) quadruplets, CVPR 2017: Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification
6) quintuplets, CVPR 2016
7) soft margin (In Defense of the Triplet Loss for Person Re-Identification)
8) Lifted Embedding Loss and its improved version (In Defense of the Triplet Loss for Person Re-Identification; Deep Metric Learning via Lifted Structured Feature Embedding)
9) Treats the anchor and positive as equals: the standard triplet loss pushes the negative away from the anchor but does not guarantee it is also pushed away from the positive, so the loss is reformulated with vector operations. Deep Metric Learning with Improved Triplet Loss for Face Clustering in Videos.
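As a concrete illustration of items 1) and 7) above, here is a minimal NumPy sketch of batch-hard mining combined with the soft-margin formulation from "In Defense of the Triplet Loss for Person Re-Identification". Function names and the margin defaults are my own; this is a sketch under the assumption that each anchor in the batch has at least one positive and one negative (e.g. PK-style sampling), not a reproduction of the paper's code.

```python
import numpy as np

def pairwise_dist(emb):
    # Euclidean distances between all pairs of row-vector embeddings.
    sq = np.sum(emb ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
    return np.sqrt(np.maximum(d2, 0.0))

def batch_hard_triplet_loss(emb, labels, margin=None):
    """Batch-hard triplet loss (sketch).

    For each anchor, take the farthest positive and the closest
    negative in the batch.  With margin=None, use the soft-margin
    form log(1 + exp(d_ap - d_an)); otherwise the usual hinge
    max(0, margin + d_ap - d_an).
    """
    d = pairwise_dist(np.asarray(emb, dtype=float))
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    pos_mask = same & ~np.eye(n, dtype=bool)   # positives, excluding self
    neg_mask = ~same
    # Hardest positive: maximum distance among same-ID pairs.
    d_ap = np.where(pos_mask, d, -np.inf).max(axis=1)
    # Hardest negative: minimum distance among different-ID pairs.
    d_an = np.where(neg_mask, d, np.inf).min(axis=1)
    diff = d_ap - d_an
    if margin is None:                         # soft margin, item 7)
        per_anchor = np.log1p(np.exp(diff))
    else:                                      # classic hinge margin
        per_anchor = np.maximum(0.0, margin + diff)
    return per_anchor.mean()
```

With two well-separated identity clusters the hinge loss vanishes, while the soft-margin version stays small but nonzero, which is exactly why the paper prefers it: it keeps pulling easy triplets slightly instead of producing a zero gradient.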
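For item 5), the quadruplet loss of "Beyond Triplet Loss" adds a second term over a fourth sample from yet another identity, pushing negative pairs apart without referencing the anchor. A minimal single-tuple sketch (variable names and margin values are illustrative, not the paper's):

```python
import numpy as np

def quadruplet_loss(a, p, n1, n2, alpha1=1.0, alpha2=0.5):
    """Quadruplet loss on one (anchor, positive, negative1, negative2)
    tuple: n1 is a different identity from a/p, and n2 is a fourth
    identity distinct from all the others.  The second term constrains
    the negative pair (n1, n2) relative to the positive pair, with a
    smaller margin alpha2 as in the paper's two-margin design.
    """
    d = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
    term1 = max(0.0, d(a, p) - d(a, n1) + alpha1)   # standard triplet term
    term2 = max(0.0, d(a, p) - d(n1, n2) + alpha2)  # cross-pair term
    return term1 + term2
```

The cross-pair term is what distinguishes the quadruplet from a plain triplet: even when the triplet term is satisfied, the loss is nonzero if two different negative identities sit closer together than the positive pair plus the second margin.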