Deep learning based person re-id
First deep learning method in re-id
- [4] W. Li, R. Zhao, T. Xiao, and X. Wang, ‘‘DeepReID: Deep filter pairing neural network for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 152–159.
Cross neighborhood difference
- [5] E. Ahmed, M. Jones, and T. K. Marks, ‘‘An improved deep learning architecture for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 3908–3916.
- [6] L. Wu, C. Shen, and A. van den Hengel. (2016). ‘‘PersonNet: Person re-identification with deep convolutional neural networks.’’ [Online]. Available: https://arxiv.org/abs/1601.07255
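To make the cross neighborhood difference idea of [5] concrete, here is a minimal NumPy sketch: each feature value of one image is compared against a small neighborhood around the same location in the other image's feature map. The 5×5 neighborhood, zero padding, and function name are my own assumptions, not the papers' exact layers.

```python
import numpy as np

def neighborhood_difference(f, g, k=5):
    """Cross-input neighborhood difference (sketch).

    f, g: feature maps of the two images, shape (C, H, W).
    For every channel and location, f is compared against a k x k
    neighborhood around the same location in g.
    Returns an array of shape (C, H, W, k, k).
    """
    C, H, W = f.shape
    pad = k // 2
    g_pad = np.pad(g, ((0, 0), (pad, pad), (pad, pad)), mode="constant")
    out = np.zeros((C, H, W, k, k), dtype=f.dtype)
    for y in range(H):
        for x in range(W):
            patch = g_pad[:, y:y + k, x:x + k]           # (C, k, k) neighborhood of g
            out[:, y, x] = f[:, y, x, None, None] - patch  # broadcast f over the patch
    return out
```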
Domain guided dropout
- [33] T. Xiao, H. Li, W. Ouyang, and X. Wang, ‘‘Learning deep feature representations with domain guided dropout for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 1249–1258.
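As a very rough illustration of domain guided dropout [33] (my reading of its deterministic variant, not the paper's exact procedure): each neuron gets a per-domain impact score, and neurons that do not help a given domain are switched off for that domain's samples.

```python
import numpy as np

def domain_guided_mask(impact_scores):
    """Sketch of a deterministic domain guided dropout mask.

    impact_scores: per-neuron impact on one domain, e.g. estimated as the
    increase of that domain's loss when the neuron's activation is zeroed.
    Neurons with non-positive impact are dropped; the rest are kept.
    """
    return (impact_scores > 0).astype(np.float32)

# During fine-tuning, activations of samples from that domain are
# multiplied element-wise by the domain's mask.
```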
Spatial and temporal features for video-based re-id
- [34] N. McLaughlin, J. M. del Rincon, and P. Miller, ‘‘Recurrent convolutional network for video-based person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 1325–1334.
GAN-generated unlabeled samples
- [35] Z. Zheng, L. Zheng, and Y. Yang. (2017). ‘‘Unlabeled samples generated by GAN improve the person re-identification baseline in vitro.’’ [Online]. Available: https://arxiv.org/abs/1701.07717
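The key trick in [35] is that GAN-generated images, which have no identity label, are still used for training by assigning them a uniform label distribution over all identities (LSRO). A minimal PyTorch-style sketch, with my own function name and tensor layout:

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    """Sketch of the LSRO idea: real images use standard cross-entropy on
    their identity label, GAN-generated images are pushed toward a uniform
    distribution over all K identities.

    logits: (N, K); labels: (N,) long (ignored for generated samples);
    is_generated: (N,) bool mask marking GAN samples.
    """
    log_prob = F.log_softmax(logits, dim=1)
    # real samples: usual cross-entropy
    real_loss = F.nll_loss(log_prob[~is_generated], labels[~is_generated])
    # generated samples: cross-entropy against the uniform distribution,
    # i.e. the mean negative log-probability over all K classes
    gen_loss = -log_prob[is_generated].mean()
    return real_loss + gen_loss
```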
Multi-scale features to capture pedestrian clothing cues
- [36] X. Qian, Y. Fu, Y.-G. Jiang, T. Xiang, and X. Xue. (2017). ‘‘Multi-scale deep learning architectures for person re-identification.’’ [Online]. Available: https://arxiv.org/abs/1709.05165
Human body region guided network
- [37] H. Zhao et al., ‘‘Spindle net: Person re-identification with human body region guided feature decomposition and fusion,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2017, pp. 1077–1085.
Pose-sensitive embedding
- [38] M. S. Sarfraz, A. Schumann, A. Eberle, and R. Stiefelhagen, ‘‘A pose-sensitive embedding for person re-identification with expanded cross neighborhood re-ranking,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Mar. 2018, pp. 420–429.
Attention-based mechanisms
- [39] S. Li, S. Bak, P. Carr, and X. Wang, ‘‘Diversity regularized spatiotemporal attention for video-based person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Mar. 2018, pp. 369–378.
- [40] D. Chen, H. Li, T. Xiao, S. Yi, and X. Wang, ‘‘Video person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 1169–1178.
- [41] J. Xu, R. Zhao, F. Zhu, H. Wang, and W. Ouyang, ‘‘Attention-aware compositional network for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 2119–2128.
- [42] C. Song, Y. Huang, W. Ouyang, and L. Wang, ‘‘Mask-guided contrastive attention model for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 1179–1188.
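A generic sketch of the attention-weighted pooling common to this family of methods [39]–[42] (it is not any single paper's module; the 1×1-conv scoring is my own simplification): each spatial location gets a score, a softmax over locations gives attention weights, and the feature map is aggregated by a weighted sum.

```python
import torch
import torch.nn as nn

class SpatialAttentionPool(nn.Module):
    """Score each location with a 1x1 conv, softmax over H*W, then
    aggregate the feature map by an attention-weighted sum."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):                               # feat: (N, C, H, W)
        n, c, h, w = feat.shape
        attn = self.score(feat).view(n, 1, h * w)          # (N, 1, HW)
        attn = torch.softmax(attn, dim=-1)                 # weights over locations
        feat = feat.view(n, c, h * w)                      # (N, C, HW)
        return (feat * attn).sum(dim=-1)                   # (N, C) pooled descriptor
```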
Multi-level features to match person id
- [43] Y. Guo and N.-M. Cheung. (2018). ‘‘Efficient and deep person re-identification using multi-level similarity.’’ [Online]. Available: https://arxiv.org/abs/1803.11353
Kronecker-product matching
- [44] Y. Shen, T. Xiao, H. Li, S. Yi, and X. Wang, ‘‘End-to-end deep Kronecker-product matching for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 6886–6895.
Siamese network
- [48] D. Yi, Z. Lei, and S. Z. Li. (2014). ‘‘Deep metric learning for practical person re-identification.’’ [Online]. Available: https://arxiv.org/abs/1407.4979
- [49] R. R. Varior, M. Haloi, and G. Wang, ‘‘Gated siamese convolutional neural network architecture for human re-identification,’’ in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2016, pp. 791–808.
- [50] D. Chung, K. Tahboub, and E. J. Delp, ‘‘A two stream siamese convolutional neural network for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Oct. 2017, pp. 1983–1991.
- [51] Y. Wang, Z. Chen, F. Wu, and G. Wang, ‘‘Person re-identification with cascaded pairwise convolutions,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 1470–1478.
- [52] M. Geng, Y. Wang, T. Xiang, and Y. Tian. (2016). ‘‘Deep transfer learning for person re-identification.’’ [Online]. Available: https://arxiv.org/abs/1611.05244
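Siamese re-id networks feed an image pair through two weight-sharing branches and train the embedding with a pairwise objective. A generic contrastive-loss sketch (not the exact formulation of [48]–[52]; the margin value and function name are assumptions):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, same_id, margin=2.0):
    """emb1, emb2: (N, D) embeddings from the two shared-weight branches;
    same_id: (N,) float, 1 if the pair shows the same person, else 0."""
    d = F.pairwise_distance(emb1, emb2)                    # (N,) Euclidean distances
    pos = same_id * d.pow(2)                               # pull positive pairs together
    neg = (1 - same_id) * F.relu(margin - d).pow(2)        # push negatives beyond the margin
    return 0.5 * (pos + neg).mean()
```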
Triplet network
- [53] S. Ding, L. Lin, G. Wang, and H. Chao, ‘‘Deep feature learning with relative distance comparison for person re-identification,’’ Pattern Recognit., vol. 48, no. 10, pp. 2993–3003, Oct. 2015.
- [54] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng, ‘‘Person re-identification by multi-channel parts-based CNN with improved triplet loss function,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 1335–1344.
Attribute-assisted person re-id
Mid-level semantic attribute features as a replacement for low-level features
- [9] R. Layne, T. M. Hospedales, and S. Gong, ‘‘Person re-identification by attributes,’’ in Proc. BMVC, 2012, vol. 2, no. 3, p. 8.
- [10] R. Layne, T. M. Hospedales, and S. Gong, ‘‘Attributes-based re-identification,’’ in Person Re-Identification. Cham, Switzerland: Springer, 2014, pp. 93–117.
Learn latent features as a complement to the semantic features
- [11] P. Peng, Y. Tian, T. Xiang, Y. Wang, and T. Huang, ‘‘Joint learning of semantic and latent attributes,’’ in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2016, pp. 336–353.
- [12] V. Sharmanska, N. Quadrianto, and C. H. Lampert, ‘‘Augmented attribute representations,’’ in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2012, pp. 242–255.
Others
- [55] A. Li, L. Liu, K. Wang, S. Liu, and S. Yan, ‘‘Clothing attributes assisted person re-identification,’’ IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 5, pp. 869–878, May 2015.
- [56] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian. (2016). ‘‘Deep attributes driven multi-camera person re-identification.’’ [Online]. Available: https://arxiv.org/abs/1605.03259
- [57] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, and Y. Yang. (2017). ‘‘Improving person re-identification by attribute and identity learning.’’ [Online]. Available: https://arxiv.org/abs/1703.07220
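A sketch of joint identity and attribute learning in the spirit of [57]: a shared backbone feature feeds one identity classifier and a set of binary attribute classifiers, trained with a weighted sum of both losses. The loss weighting, class/attribute counts, and module name are my own assumptions.

```python
import torch
import torch.nn as nn

class IDAttributeHead(nn.Module):
    """Identity classification + multi-label attribute prediction on top of
    a shared feature vector, combined by a weighted sum of the two losses."""

    def __init__(self, feat_dim, num_ids, num_attrs, lam=1.0):
        super().__init__()
        self.id_fc = nn.Linear(feat_dim, num_ids)       # identity classifier
        self.attr_fc = nn.Linear(feat_dim, num_attrs)   # binary attribute classifiers
        self.lam = lam
        self.ce = nn.CrossEntropyLoss()
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, feat, id_label, attr_label):
        # feat: (N, feat_dim); id_label: (N,); attr_label: (N, num_attrs) in {0, 1}
        id_loss = self.ce(self.id_fc(feat), id_label)
        attr_loss = self.bce(self.attr_fc(feat), attr_label.float())
        return id_loss + self.lam * attr_loss
```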
Domain adaptation
UDA (unsupervised domain adaptation)
Correlation alignment to match the mean and covariance of the source (S) and target (T) feature distributions
- [58] B. Sun, J. Feng, and K. Saenko, ‘‘Return of frustratingly easy domain adaptation,’’ in Proc. AAAI, 2016, vol. 6, no. 7, p. 8.
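A sketch of the closed-form re-coloring in CORAL [58], assuming the features have already been centered (which handles the mean matching): whiten the source features with the source covariance, then re-color them with the target covariance so the second-order statistics of the two domains match. Variable names are my own.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(source, target):
    """source: (Ns, D) source features, target: (Nt, D) target features.
    Returns source features aligned to the target covariance."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + np.eye(d)   # regularized source covariance
    ct = np.cov(target, rowvar=False) + np.eye(d)   # regularized target covariance
    whitened = source @ fractional_matrix_power(cs, -0.5)   # remove source correlations
    return whitened @ fractional_matrix_power(ct, 0.5)      # re-color with target correlations
```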
Domain adaptation ranking SVM
- [59] A. J. Ma, J. Li, P. C. Yuen, and P. Li, ‘‘Cross-domain person re-identification using domain adaptation ranking SVMs,’’ IEEE Trans. Image Process., vol. 24, no. 5, pp. 1599–1613, May 2015.
Maximum Mean Discrepancy (MMD)
- [60] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, ‘‘A kernel two-sample test,’’ J. Mach. Learn. Res., vol. 13, pp. 723–773, Mar. 2012.
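MMD measures the distance between the source and target feature distributions as the distance between their kernel mean embeddings. A biased empirical estimate of squared MMD with an RBF kernel, in the spirit of the two-sample test of [60] (the fixed bandwidth and function name are assumptions):

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """x: (n, D) source features, y: (m, D) target features."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)               # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))    # RBF kernel values

    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()
```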
Adversarial methods to transform pixels
- [61] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan, ‘‘Unsupervised pixel-level domain adaptation with generative adversarial networks,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, vol. 1, no. 2, pp. 1–7.
- [62] J. Hoffman et al. (2017). ‘‘Cycada: Cycle-consistent adversarial domain adaptation.’’ [Online]. Available: https://arxiv.org/abs/1711.03213
UDA for person re-id
- [63] W. Deng, L. Zheng, Q. Ye, G. Kang, Y. Yang, and J. Jiao, ‘‘Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2018, vol. 1, no. 2, pp. 1–6.
- [64] L. Wei, S. Zhang, W. Gao, and Q. Tian, ‘‘Person transfer GAN to bridge domain gap for person re-identification,’’ in Proc. CVPR, Jun. 2018, pp. 79–88.
- [65] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang, ‘‘Camera style adaptation for person re-identification,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jan. 2018, pp. 5157–5166.
- [66] J. Lv, W. Chen, Q. Li, and C. Yang, ‘‘Unsupervised cross-dataset person re-identification by transfer learning of spatial-temporal patterns,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Mar. 2018, pp. 7948–7956.