ReID Datasets

https://blog.csdn.net/qiuchangyong/article/details/82219775

I have recently been doing research on person re-identification. Person re-identification (Person Re-Identification), a branch of image recognition, is of great practical importance. Cities have already deployed large numbers of cameras for public security, roughly one every few tens to a few hundred meters, yet there are still areas that no camera covers. The goal of person re-identification is to determine where a target seen by one camera went after leaving that camera's field of view. This is somewhat like video search: finding the same target in the footage captured by other cameras. Pedestrian detection algorithms such as DPM and Fast R-CNN can already draw boxes around pedestrians in an image automatically, so detection no longer requires manual annotation; the re-identification task is then to find the candidates that most likely match the target. Current re-identification research is dataset-based: several cameras are set up to capture pedestrian images, which are then annotated manually or automatically. Part of the images are used for training and part for evaluation. Recognition accuracy is not yet good enough for real deployment. To improve it, algorithms are usually split into two parts: extracting better image features, and computing and comparing distances between features more effectively. Objective evaluation criteria already exist for comparing algorithms, and researchers have built a series of datasets, from small and simple to very large, so that later work can reuse these contributions.
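To make the two-part pipeline described above concrete, here is a minimal, illustrative sketch of the matching stage in Python. The feature extractor is assumed to exist already (a CNN or a hand-crafted descriptor); the names and dimensions below are purely hypothetical, not tied to any particular dataset or method.

```python
import numpy as np

def rank_gallery(query_feat: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Return gallery indices ordered from most to least similar (Euclidean distance)."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return np.argsort(dists)

# Toy usage: random vectors stand in for real person descriptors.
rng = np.random.default_rng(0)
query_feature = rng.normal(size=128)             # feature of the person to find
gallery_features = rng.normal(size=(1000, 128))  # features from other cameras
top10 = rank_gallery(query_feature, gallery_features)[:10]
print(top10)  # indices of the ten best-matching gallery images
```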

Below is the summary of person re-identification datasets compiled by Northeastern University (Boston, USA). Original page: http://robustsystems.coe.neu.edu/sites/robustsystems.coe.neu.edu/files/systems/projectpages/reiddataset.html

 

Person Re-identification Datasets


Robust systems lab

 

Person re-identification has drawn intensive attention in the computer vision society in recent decades. As far as we know, this page collects all public datasets that have been tested by person re-identification algorithms. If you use any of them, please refer to the original licence. If you have any suggestions or you want to include your dataset here, please send the link of the dataset to m...@coe.neu.edu
The last update is 2018-04-16.



| Dataset | Release time | # identities | # cameras | # images | Label method | Crop size |
|---|---|---|---|---|---|---|
| VIPeR | 2007 | 632 | 2 | 1264 | Hand | 128X48 |
| ETH1,2,3 | 2007 | 85, 35, 28 | 1 | 8580 | Hand | Vary |
| QMUL iLIDS | 2009 | 119 | 2 | 476 | Hand | Vary |
| GRID | 2009 | 1025 | 8 | 1275 | Hand | Vary |
| CAVIAR4ReID | 2011 | 72 | 2 | 1220 | Hand | Vary |
| 3DPeS | 2011 | 192 | 8 | 1011 | Hand | Vary |
| PRID2011 | 2011 | 934 | 2 | 24541 | Hand | 128X64 |
| V47 | 2011 | 47 | 2 | 752 | Hand | Vary |
| WARD | 2012 | 70 | 3 | 4786 | Hand | 128X48 |
| SAIVT-Softbio | 2012 | 152 | 8 | 64472 | Hand | Vary |
| CUHK01 | 2012 | 971 | 2 | 3884 | Hand | 160X60 |
| CUHK02 | 2013 | 1816 | 10 (5 pairs) | 7264 | Hand | 160X60 |
| CUHK03 | 2014 | 1467 | 10 (5 pairs) | 13164 | Hand/DPM | Vary |
| RAiD | 2014 | 43 | 4 | 6920 | Hand | 128X64 |
| iLIDS-VID | 2014 | 300 | 2 | 42495 | Hand | Vary |
| MPR Drone | 2014 | 84 | 1 | - | Pyramid Features (ACF) | Vary |
| HDA Person Dataset | 2014 | 53 | 13 | 2976 | Hand/Pyramid Features (ACF) | Vary |
| Shinpuhkan Dataset | 2014 | 24 | 16 | - | Hand | 128X48 |
| CASIA Gait Database B | 2015 (*see below) | 124 | 11 | - | Background subtraction | Vary |
| Market1501 | 2015 | 1501 | 6 | 32217 | Hand/DPM | 128X64 |
| PKU-Reid | 2016 | 114 | 2 | 1824 | Hand | 128X64 |
| PRW | 2016 | 932 | 6 | 34304 | Hand | Vary |
| Large scale person search | 2016 | 11934 | - | 34574 | Hand | Vary |
| MARS | 2016 | 1261 | 6 | 1191003 | DPM+GMMCP | 256X128 |
| DukeMTMC-reID | 2017 | 1812 | 8 | 36441 | Hand | Vary |
| DukeMTMC4ReID | 2017 | 1852 | 8 | 46261 | Doppia | Vary |
| Airport | 2017 | 9651 | 6 | 39902 | ACF | 128X64 |
| MSMT17 | 2018 | 4101 | 15 | 126441 | Faster RCNN | Vary |

VIPeR [link]

This dataset was captured by two cameras, each of which takes one image per person. It also provides the viewpoint angle of each image. Although it has been tested by many researchers, it is still one of the most challenging datasets. Ryan Layne provides attribute annotations for VIPeR.

Ref: D. Gray, and H. Tao, "Viewpoint Invariant Pedestrian Recognition with an Ensemble of Localized Features," in Proc. European Conference on Computer Vision (ECCV), 2008.
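Single-shot benchmarks such as VIPeR are usually reported with CMC (Cumulative Matching Characteristic) curves. Below is a small, hedged sketch of how rank-k matching rates can be computed once a query-to-gallery distance matrix is available; it assumes exactly one true match per query, as in VIPeR's standard protocol.

```python
import numpy as np

def cmc_curve(dist: np.ndarray, query_ids: np.ndarray,
              gallery_ids: np.ndarray, max_rank: int = 20) -> np.ndarray:
    """Rank-1..max_rank matching rates for a single-shot gallery."""
    hits = np.zeros(max_rank)
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                    # gallery sorted by distance
        matches = gallery_ids[order] == query_ids[i]   # True at the correct match
        if matches.any():
            first_hit = int(np.argmax(matches))        # rank of the correct match
            if first_hit < max_rank:
                hits[first_hit] += 1
    return np.cumsum(hits) / dist.shape[0]
```

The rank-1 accuracy commonly quoted in papers is then simply the first entry of the returned array.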


ETH [link]

Unlike other datasets, which collect images from multiple cameras, ETHZ is captured with a single moving camera. Although the viewpoint variation is relatively small, it does exhibit considerable illumination variation, scale variation and occlusion.

Ref: W.R. Schwartz, L.S. Davis. Learning Discriminative Appearance-Based Models Using Partial Least Squares. Proceedings of the XXII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'2009), Rio de Janeiro, Brazil, October 11-14, 2009.


QMUL iLIDS [link]

QMUL iLIDS is based on iLIDS MCTS, a dataset collected at an airport during busy times by a multi-camera CCTV system. Almost every identity has four images from two non-overlapping cameras. The dataset includes scenarios with heavy occlusion and large pose variation.

Ref: Zheng et al., Associating Groups of People, BMVC 2009.


GRID [link]

GRID is collected by 8 disjoint cameras in a busy underground station. Each identity has two images from different views and there are more images in the gallery set than the probe set. The image quality of this dataset is fairly poor.

Ref: Loy, C. C., Liu, C., & Gong, S. (2013, September). Person re-identification by manifold ranking. In 2013 IEEE International Conference on Image Processing (pp. 3567-3571). IEEE.


CAVIAR4ReID [link]

This dataset is extracted from the multi-target tracking dataset CAVIAR, which was collected in a shopping mall by two surveillance cameras with overlapping fields of view. Among the 72 identities, 50 have images from both camera views and the remaining 22 from only one camera. Images of each identity are carefully selected to maximize resolution variation.

Ref: Cheng, D. S., Cristani, M., Stoppa, M., Bazzani, L., & Murino, V. (2011, September). Custom Pictorial Structures for Re-identification. In BMVC (Vol. 1, No. 2, p. 6).


3DPeS [link]

The 3DPeS dataset is collected by 8 non-overlapping outdoor cameras. Although the original video is provided, researchers typically use the selected snapshots to test person re-identification algorithms. It includes a 3D model of the environment and calibration data for all cameras. In the video sequences, only the bounding box of each identity's first appearance is provided.

Ref: Baltieri, D., Vezzani, R., & Cucchiara, R. (2011, December). 3dpes: 3d people dataset for surveillance and forensics. In Proceedings of the 2011 joint ACM workshop on Human gesture and behavior understanding (pp. 59-64). ACM.


PRID2011 [link]

The PRID dataset has 385 trajectories from camera A and 749 trajectories from camera B. Among them, only 200 people appear in both cameras. The dataset also has a single-shot version, which consists of randomly selected snapshots. Some trajectories are not well synchronized, which means a person might "jump" between consecutive frames.

Ref: Hirzer, M., Beleznai, C., Roth, P. M., & Bischof, H. (2011, May). Person re-identification by descriptive and discriminative classification. In Scandinavian conference on Image analysis (pp. 91-102). Springer Berlin Heidelberg.


V47 [link] [Citation]

The V47 dataset is collected with two indoor cameras with overlapping fields of view. Each identity walks in two different directions (in and out) and is captured from several different viewpoints.

 


WARD [link]

This dataset is collected with three non-overlapping cameras. Each identity has several images in each camera. Although the images appear to form labeled trajectories, this is not guaranteed by the authors.

Ref: Martinel, N., & Micheloni, C. (2012, June). Re-identify people in wide area camera network. In 2012 IEEE computer society conference on computer vision and pattern recognition workshops (pp. 31-36). IEEE.


SAIVT-Softbio [link]

SAIVT-Softbio is collected by eight existing surveillance cameras. Since the collection was uncontrolled, most identities pass through only a subset of the cameras. The dataset also provides the full video frames with a labeled bounding box on every frame, though the boxes are not very tight in some cases.

Ref: Bialkowski, Alina, Denman, Simon, Lucey, Patrick, Sridharan, Sridha, & Fookes, Clinton B. (2012) A database for person re-identification in multi-camera surveillance networks. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA 12), IEEE, Esplanade Hotel, Fremantle, WA, pp. 1-8.


CUHK01 [link]

The CUHK01 dataset contains two images of every identity from each camera. It has one pair of disjoint cameras, and the image quality is relatively good.

Ref: W. Li, R. Zhao and X. Wang, "Human Reidentification with Transferred Metric Learning" in Proceedings of Asian Conference on Computer Vision (ACCV) 2012.


CUHK02 [link]

CUHK02 is an extended version of CUHK01. Besides the camera pair in CUHK01, it adds four more camera-pair settings.

Ref: W. Li and X. Wang, "Locally Aligned Feature Transforms across Views" in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2013.


CUHK03 [link]

CUHK03 is the first person re-identification dataset that is large enough for deep learning. It provides bounding boxes both detected by a deformable part model (DPM) and manually labeled. The person detection quality is relatively good for this dataset.

Ref: Li, W., Zhao, R., Xiao, T., & Wang, X. (2014). Deepreid: Deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 152-159).


RAiD [link]

As a relatively recently released dataset, RAiD guarantees that each identity has images in all four non-overlapping cameras. Since two cameras are indoor and the other two are outdoor, the illumination variation is considerable. Images of each identity are collected in a tracking manner, but the ordering is not always consistent.

Ref: Das, A., Chakraborty, A., & Roy-Chowdhury, A. K. (2014, September). Consistent re-identification in a camera network. In European Conference on Computer Vision (pp. 330-345). Springer International Publishing.


iLIDS-VID [link]

Based on the assumption that a real person re-identification system should have a trajectory for each identity, the iLIDS-VID dataset extracts 600 trajectories of 300 identities from the iLIDS MCTS dataset. Due to the limitations of iLIDS MCTS, iLIDS-VID has extremely heavy occlusion.

Ref: Wang, T., Gong, S., Zhu, X., & Wang, S. (2016). Person Re-Identification by Discriminative Selection in Video Ranking.


MPR Drone [link]

The MPR Drone dataset is not a traditional person re-identification dataset with images captured across a camera network. Instead, it is collected by a flying drone in both indoor and outdoor environments. Since it only has one camera, the authors proposed three different types of evaluation experiments in the original paper. All pedestrian detections are obtained by pyramid feature detection in Piotr Dollar's toolbox. It has two sub-datasets: Dataset 01 has been exhaustively labeled with 113610 detections, and Dataset 02 provides the raw frame data for Dataset 01.

Ref: Layne, R., Hospedales, T. M., & Gong, S. (2014, September). Investigating Open-World Person Re-identification Using a Drone. In European Conference on Computer Vision (pp. 225-240). Springer International Publishing.


HDA+ [link]

The HDA dataset is proposed to mimic a real person re-identification system as closely as possible. In it, 85 persons were densely labeled across 13 cameras over 30 minutes. In addition to tight bounding boxes, the authors also provide occlusion flags, camera homographies and synchronization data. Image resolutions vary from 640x480 to 2560x1600 and frame rates vary from 1 to 5 FPS. An evaluation tool is provided to test the re-id algorithm, the person detector, or both, and six different protocols are included to analyze the whole re-id system. Detections from ACF are provided.

Ref: Figueira, D., Taiana, M., Nambiar, A., Nascimento, J., & Bernardino, A. (2014, September). The hda+ data set for research on fully automated re-identification systems. In European Conference on Computer Vision (pp. 241-255). Springer International Publishing.


Shinpuhkan Dataset [link]

The Shinpuhkan dataset was originally created to test multi-camera tracking methods. Each person has multiple tracklets in different directions within each camera; in total, each identity has 86 annotated tracklets. The image quality is fairly good compared with other traditional re-id datasets.

Ref: Kawanishi, Y., Wu, Y., Mukunoki, M., & Minoh, M. (2014). Shinpuhkan2014: A multi-camera pedestrian dataset for tracking people across multiple cameras. In 20th Korea-Japan Joint Workshop on Frontiers of Computer Vision (Vol. 5, p. 6).


CASIA Gait Database B [link]

The CASIA dataset was created in 2005 and was originally used to test gait recognition algorithms. In 2015, Liu et al. reused it to test a gait-based person re-identification algorithm. The dataset is collected by 11 overlapping cameras at view angles from 0 to 180 degrees. Each identity also changes clothing and carrying conditions. Instead of bounding boxes, the raw video frames and the silhouette of each frame are provided.
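Since CASIA B ships raw frames and silhouettes rather than person boxes, a background-subtraction step like the one sketched below is the usual way to obtain foreground masks. This uses OpenCV's MOG2 subtractor purely as an illustration; it is an assumed stand-in, not the exact method used by the dataset authors.

```python
import cv2

def silhouette_masks(video_path: str):
    """Yield a rough binary foreground mask for each frame of a video."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # per-pixel foreground estimate
        _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        yield mask
    cap.release()
```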

 


Market1501 [link]

It contains a large number of identities, and each identity has several images from six disjoint cameras. The dataset also includes 2793 false alarms from DPM as distractors to mimic a realistic scenario. The quality of the bounding boxes is worse than in CUHK03. In the later ICCV 2015 release, 500K distractors were added to make the dataset truly large scale. In the paper proposing this dataset, the authors also used mAP as an evaluation criterion for comparing algorithms.

Ref: Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., & Tian, Q. (2015). Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1116-1124).
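Because Market1501 popularized mAP for re-id, a sketch of the metric is given below. It computes a plain average precision per query; note that the official protocol additionally filters out same-camera matches and junk/distractor boxes from the gallery, which is omitted here for brevity.

```python
import numpy as np

def mean_average_precision(dist: np.ndarray, query_ids: np.ndarray,
                           gallery_ids: np.ndarray) -> float:
    """Simplified mAP over a multi-shot gallery (no junk/same-camera filtering)."""
    aps = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        matches = (gallery_ids[order] == query_ids[i]).astype(np.float32)
        if matches.sum() == 0:
            continue                                   # query has no ground truth
        cum_hits = np.cumsum(matches)
        precision = cum_hits / (np.arange(matches.size) + 1)
        aps.append(float((precision * matches).sum() / matches.sum()))
    return float(np.mean(aps))
```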


PKU-Reid [link]

The PKU-Reid dataset is relatively small compared with other modern re-id datasets. Its key feature is that it captures person appearance from all eight orientations in two disjoint cameras.

Ref: Ma, L., Liu, H., Hu, L., Wang, C., & Sun, Q. (2016). Orientation Driven Bag of Appearances for Person Re-identification. arXiv preprint arXiv:1605.02464.


PRW [link]

The PRW (Person Re-identification in the Wild) dataset is an extension of the Market1501 dataset. Instead of only providing bounding boxes, the authors released the full frames with annotations, so one can evaluate the effect of different person detectors.

Ref: Zheng, L., Zhang, H., Sun, S., Chandraker, M., & Tian, Q. (2016). Person Re-identification in the Wild. arXiv preprint arXiv:1604.02531.
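Because PRW releases full frames, evaluation couples a detector with the re-id model. The sketch below shows that coupling in the simplest possible form; `detector` and `extract_features` are placeholders for whatever models are plugged in, not APIs provided by the dataset.

```python
import numpy as np

def search_frame(frame, query_feat, detector, extract_features):
    """Find the detection in one full frame that best matches a query feature."""
    boxes = detector(frame)  # e.g. [(x1, y1, x2, y2), ...] from any person detector
    if not boxes:
        return None, float("inf")
    crops = [frame[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
    feats = np.stack([extract_features(c) for c in crops])
    dists = np.linalg.norm(feats - query_feat, axis=1)
    best = int(np.argmin(dists))
    return boxes[best], float(dists[best])
```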


Large scale person search [link]

Similar to PRW, the person search dataset is a large-scale dataset with full-frame access and a large number of labeled bounding boxes. It aims to mimic the real person search scenario, so a reliable person detector is needed to evaluate on it. To make the dataset more difficult, the gallery includes frames from hand-held cameras and movies. Two additional subsets, a low-resolution subset and an occlusion subset, are also released to evaluate the effect of those factors.

Ref: Xiao, T., Li, S., Wang, B., Lin, L., & Wang, X. (2016). End-to-End Deep Learning for Person Search. arXiv preprint arXiv:1604.01850.


MARS [link]

The MARS (Motion Analysis and Re-identification Set) dataset is an extended version of the Market1501 dataset and the first large-scale video-based person re-id dataset. Since all bounding boxes and tracklets are generated automatically, it contains distractors, and each identity may have more than one tracklet. Precomputed deep features are also available on the website.

Ref: Zheng, L., Bie, Z., Sun, Y., Wang, J., Su, C., Wang, S., & Tian, Q. (2016, October). Mars: A video benchmark for large-scale person re-identification. In European Conference on Computer Vision (pp. 868-884). Springer International Publishing.
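For video datasets like MARS, per-frame features are usually aggregated into a single tracklet descriptor before matching. A common, simple choice (sketched below as an assumption, not the only pooling used in the MARS paper) is average pooling followed by L2 normalization.

```python
import numpy as np

def tracklet_descriptor(frame_feats: np.ndarray) -> np.ndarray:
    """Average-pool per-frame features (num_frames x dim) into one L2-normalized vector."""
    pooled = frame_feats.mean(axis=0)
    return pooled / (np.linalg.norm(pooled) + 1e-12)
```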


DukeMTMC-reID/DukeMTMC4ReID [link]

The DukeMTMC dataset is a large-scale, heavily labeled multi-target multi-camera tracking dataset. In total, more than 2700 people were labeled with unique identities across 8 cameras. With access to all the information (full frames, frame-level ground truth, calibration information, etc.), this dataset has a lot of potential. Based on the released train-validation set, two re-id extension datasets were created. The key difference is how the bounding boxes are generated: DukeMTMC-reID directly uses the manually labeled ground truth, whereas DukeMTMC4ReID adopts Doppia as the person detector.

 

DukeMTMC-reID
Zheng, Zhedong, Liang Zheng, and Yi Yang. "Unlabeled samples generated by gan improve the person re-identification baseline in vitro." arXiv preprint arXiv:1701.07717 (2017).


 

DukeMTMC4ReID
Gou, Mengran and Karanam, Srikrishna and Liu, Wenqian and Camps, Octavia and Radke, Richard J. "DukeMTMC4ReID: A Large-Scale Multi-Camera Person Re-Identification Dataset." CVPR Workshops (2017)
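Image names in the DukeMTMC-reID release typically follow a pattern like 0001_c2_f0046182.jpg (identity, camera, frame index). The parser below assumes that convention; adjust the regular expression if your copy of the dataset is organized differently.

```python
import re
from pathlib import Path

NAME_PATTERN = re.compile(r"^(\d+)_c(\d+)_f(\d+)$")  # assumed filename convention

def parse_duke_name(image_path: str):
    """Return (person_id, camera_id, frame_index) parsed from a DukeMTMC-reID filename."""
    match = NAME_PATTERN.match(Path(image_path).stem)
    if match is None:
        raise ValueError(f"unexpected filename: {image_path}")
    pid, cam, frame = (int(g) for g in match.groups())
    return pid, cam, frame

# Example (hypothetical path):
# parse_duke_name("bounding_box_train/0001_c2_f0046182.jpg") -> (1, 2, 46182)
```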


Airport [link]

The dataset was created using videos from six cameras of an indoor surveillance network in a mid-sized airport. The cameras cover various parts of a central security checkpoint area and three concourses. Each camera captures video at 768 × 432 pixels and 30 frames per second. 12-hour-long videos from 8 AM to 8 PM were collected from each of these cameras. Under the assumption that each target person takes a limited amount of time to travel through the network, each of these long videos was randomly split into 40 five-minute-long video clips. Each video clip was then run through a prototype end-to-end re-id system comprised of automatic person detection and tracking algorithms.

Ref: Karanam, S., Gou, M., Wu, Z., Rates-Borras, A., Camps, O., & Radke, R. J. (2018). A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets. IEEE Transactions on Pattern Analysis and Machine Intelligence.


MSMT17 [link]

This large-scale re-id dataset is collected on a campus with 12 outdoor cameras and 3 indoor cameras. It covers 4 days with different weather conditions within a month; for each day, 3 one-hour videos are selected from morning, noon and afternoon. Faster R-CNN is used for pedestrian detection. This is the largest re-id dataset so far. Its viewpoints are similar to Market, but the scenarios are much more complicated.

Ref: Wei, L., Zhang, S., Gao, W., & Tian, Q. (2018). Person Transfer GAN to Bridge Domain Gap for Person Re-Identification. Computer Vision and Pattern Recognition, IEEE International Conference on, 2018



 

 
