Pose-guided Visible Part Matching for Occluded Person ReID

Abstract

Occluded person re-identification is a challenging task, as appearance varies substantially with various obstacles, especially in crowded scenarios. To address this issue, we propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns discriminative features with pose-guided attention and self-mines the part visibility in an end-to-end framework. Specifically, the proposed PVPM includes two key components: 1) a pose-guided attention (PGA) method for part feature pooling that exploits more discriminative local features; 2) a pose-guided visibility predictor (PVP) that estimates whether a part suffers from occlusion or not. As there are no ground-truth training annotations for the occluded parts, we instead exploit the characteristic of part correspondence in positive pairs and self-mine the correspondence scores via graph matching. The generated correspondence scores are then utilized as pseudo-labels for the visibility predictor (PVP). Experimental results on three reported occluded benchmarks show that the proposed method achieves competitive performance compared to state-of-the-art methods. The source code is available at https://github.com/hh23333/PVPM.



1. Introduction

Person re-identification (ReID) aims to retrieve a probe pedestrian from non-overlapping camera views. It is an important research topic in the computer vision field with various applications, such as autonomous driving, video surveillance, and activity analysis [26, 19, 11]. Most existing ReID approaches design the matching model under the assumption that the entire body of the pedestrian is available. However, this assumption is hard to satisfy due to the inevitable occlusions in real-world scenarios. For example, as shown in Figure 1, a person may be occluded by other pedestrians or by static obstacles such as trees, walls, and cars. Therefore, it is essential to seek an effective method to solve the occluded person re-identification problem.

Figure 1. Illustration of occluded person re-identification. The red bounding boxes indicate the target persons occluded by various obstacles with different colors, sizes, and positions.


There are two main challenges for the occluded person ReID task. First, the global image-based supervision used for conventional person ReID may involve not only the information of the target person but also the interference of the occlusion. The diverse occlusions, varying in color, position, and size, increase the difficulty of extracting a robust feature for the target person. Second, the occluded body parts sometimes show more discriminative information while the non-occluded body parts share a similar appearance, leading to the problem of mismatching.



An intuitive solution is to detect the non-occluded body parts and then match the corresponding parts separately. As there is no ground-truth annotation for the occluded parts, most existing methods directly utilize visibility cues from other tasks trained on different data sources, e.g., body masks [1] and pose landmark estimation [16], and thus suffer from a large data bias without flexibility on the target domain. In this work, we propose a Pose-guided Visible Part Matching (PVPM) network that directly mines the visibility scores in a self-learning manner. The concept of the proposed approach is illustrated in Figure 2. As shown, PVPM includes two main components in an end-to-end framework: a pose-guided part attention (PGA) network and a pose-guided visibility predictor (PVP). The training of the part visibility predictor is supervised by a pseudo-label obtained by solving a feature correspondence problem via graph matching. In the end, the final score is computed by aggregating the body-part distances weighted by the visibility scores.

Figure 2. Pipeline of the proposed PVPM method. It consists of three key components: a pose-guided attention (PGA) model for part feature pooling, a pose-guided visibility predictor (PVP), and a feature correspondence model that provides pseudo-labels for PVP training. Three loss functions are used, including Lv, Lm, and Lc.


In conclusion, the main contributions of the proposed method are as follows:

  • We propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns discriminative features with pose-guided attention and predicts the part visibility in an end-to-end framework.
  • We train the visibility prediction model in a self-supervised manner, where the pseudo-label generation process is regarded as a feature correspondence problem and solved via graph matching.
  • The proposed approach achieves superior performance on multiple occlusion datasets, including Partial-REID [31], Occluded-REID [36] and P-DukeMTMC-reID [36].


2. Related Work

Occluded Person ReID. Most existing ReID works [9, 27, 14, 35, 4, 10, 30] focus on training the model without taking occlusion into consideration. However, occlusion cannot be ignored, especially in crowded scenes such as airports or hospitals. To address this problem, Zhou et al. [36] propose multi-task losses that force the network to distinguish between simulated occluded samples and non-occluded samples, so as to learn a feature representation that is robust against occlusion. Besides, a co-saliency network [37] is proposed to train the model to pay attention to the person body parts. More recently, Miao et al. [16] utilize pose landmarks to disentangle the useful information from the occlusion noise. Although improvements have been made by introducing pose landmarks, the untrainable pose-guided region extraction and the predefined landmark visibility still limit the matching performance. Instead of simply using predefined regions and part visibility learned from other data sources, which have limited flexibility and suffer from data bias, we self-mine the part visibility from the target data and adapt the pose-guided attention accordingly in a unified framework.



Part-based Person ReID. Part-based person ReID approaches exploit local descriptors from different regions to enhance the discriminative ability and robustness of the algorithm. A straightforward way to do this is to slice the person images or feature maps into uniform partitions [27, 24]. In [24], Sun et al. partition feature maps into p horizontal stripes and train each part embedding with non-shared classifiers. One can also extract local features by pose-driven RoI extraction [28, 20], human parsing results [9], or learning attention regions based on appearance features [10, 29, 15] or pose features [22]. For example, Zhao et al. [28] propose to utilize pose detection results to generate local regions via a manually designed cropping scheme and then fuse the part features gradually. Kalayeh et al. [9] utilize human semantic parsing results to extract body part features. Suh et al. [22] propose to generate part maps from prior pose information and then aggregate all parts with bilinear pooling. The works in [10, 29, 15] attempt to use appearance-based attention maps to exploit local information. Although local features are considered in these model designs, there are no cues for partial occlusion, leading to mismatches in complex environments.



Self-Supervised Learning. For the specific task of part visibility prediction, a precise label for each body part is unavailable. This motivates us to solve the problem in a self-supervised learning manner. Self-supervised learning aims to learn features from unlabelled data by introducing a so-called pretext task, for which a target objective can be computed with self-generated pseudo-labels, such as spatial and temporal structure [8, 17, 25] or context similarity [18, 3]. Noroozi et al. [17] define the pretext task as recognizing the order of shuffled patches from an image. Caron et al. [3] treat cluster assignments as pseudo-labels to learn the parameters of a ConvNet. Unlike the pretext tasks designed above, in this work we generate the pseudo-labels for the visibility predictor by self-mining, which exploits the characteristic of part correspondence in positive pairs via graph matching.



3. Pose-Guided Visible Part Matching

In this work, we present a pose-guided visible part matching framework which aggregates local features with visibility scores to solve the mismatching problem in the occluded person ReID task. To better understand the proposed method, we illustrate the pipeline and the training process in Figure 2 and Algorithm Box 1, with the related notations listed in Table 1. The framework includes a pose encoder (PE), a pose-guided attention mask generator (PGA), a pose-guided visibility score predictor (PVP), and a feature correspondence model for generating the pseudo-labels used to train the PVP. In Sec. 3.1 and 3.2, we introduce the methodology details of the PGA and PVP modules. In Sec. 3.3, we describe the strategy for obtaining the pseudo-label of part correspondence to supervise the training of the PVP. Last, in Sec. 3.4, we present the formulation of the loss functions employed in this method.



3.1. Part Features with Pose-Guided Attention

Obviously, discriminative part features play an important role when the target person is faced with occlusions. This motivates us to obtain the body part features by fusing the appearance features with pose-guided attention maps. Given a pedestrian image I, we first extract the appearance feature maps $F \in \mathbb{R}^{C \times H \times W}$ via a CNN backbone network, where C, H, and W denote the number of channels and the height and width of the feature maps, respectively.



The pose-guided attention mechanism consists of three components: a pose estimator, a pose encoder, and a part attention generator. We employ the OpenPose [2] method for pose estimation to extract the key-point heatmaps K and the part affinity fields Lp of each input image. The pose encoder then takes P = K ⊕ Lp as input and embeds the pose information into a high-level pose feature Fpose = PE(P; θe). For the part attention generator, which focuses on specific body parts, a 1 × 1 convolutional layer followed by a sigmoid function is applied to the pose features Fpose to estimate a stack of 2-dimensional maps A, where each element $a^i_{h,w}$ in A indicates the degree to which location (h, w) of the feature maps F belongs to the i-th part:

$$A = \sigma\left(\mathrm{Conv}_{1\times 1}(F_{pose};\, \theta_a)\right), \qquad A \in \mathbb{R}^{N_p \times H \times W}$$
where Np is the number of pre-defined parts and θa denotes the parameters of the convolutional layer. Furthermore, we hope the network can focus on non-overlapping regions so that each part extracts complementary features, which are more discriminative and robust when fused together. Thus, at each spatial location we only maintain the maximum activation along the channel (part) dimension, which is formulated as

$$\bar{A}^{i} = A^{i} \odot \left[\arg\max_{i} A^{i}\right]\Big|_{\mathrm{onehot}}$$
where $\odot$ is the Hadamard product, and $[\arg\max_i A^i]|_{\mathrm{onehot}}$ means taking the index of the maximum value along the channel dimension and turning it into a one-hot vector at each spatial location.


In the end, the i-th part feature $f^i$ can thus be obtained via part-weighted pooling, which is formulated as

$$f^{i} = \frac{\sum_{h,w} \bar{a}^{i}_{h,w}\, F_{h,w}}{\sum_{h,w} \bar{a}^{i}_{h,w}}$$
where $F_{h,w}$ is the column vector of F at position (h, w), and $\bar{a}^{i}_{h,w}$ denotes the element at location (h, w) of $\bar{A}^{i}$.

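To make the pooling concrete, the following PyTorch sketch implements a minimal PGA block under assumed shapes; the pose-feature channel width of 128, the module names, and the normalized weighted pooling are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseGuidedAttention(nn.Module):
    """Minimal sketch of PGA: 1x1 conv + sigmoid on pose features, one-hot
    selection of the strongest part at each location, then weighted pooling."""

    def __init__(self, pose_dim=128, num_parts=6):
        super().__init__()
        self.part_conv = nn.Conv2d(pose_dim, num_parts, kernel_size=1)  # theta_a

    def forward(self, feat, pose_feat):
        # feat:      appearance feature maps F,    (B, C, H, W)
        # pose_feat: encoded pose features F_pose, (B, pose_dim, H, W)
        attn = torch.sigmoid(self.part_conv(pose_feat))        # A, (B, Np, H, W)

        # keep only the maximum activation across parts at each spatial location
        onehot = F.one_hot(attn.argmax(dim=1), attn.size(1))    # (B, H, W, Np)
        attn = attn * onehot.permute(0, 3, 1, 2).float()        # A_bar

        # part-weighted pooling: f_i = sum_hw(a_bar_i * F) / sum_hw(a_bar_i)
        weights = attn.flatten(2)                                # (B, Np, H*W)
        feats = feat.flatten(2)                                  # (B, C, H*W)
        parts = torch.einsum('bph,bch->bpc', weights, feats)
        parts = parts / weights.sum(dim=2, keepdim=True).clamp(min=1e-6)
        return parts, attn                                       # (B, Np, C), (B, Np, H, W)


# Example with shapes matching a ResNet-50 backbone on 384x128 inputs (assumed).
pga = PoseGuidedAttention()
part_feats, part_maps = pga(torch.randn(2, 2048, 24, 8), torch.randn(2, 128, 24, 8))
```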


3.2. Pose-Guided Visibility Prediction

After representing the pedestrian with part-based features, an intuitive way to calculate the distance is to compute the global part-to-part distances. However, for occluded ReID, some parts that appear in one view may not be visible in other views. Therefore, a reasonable way is to establish correspondences only between simultaneously visible parts and compute the distance accordingly. We propose to utilize a pose-guided visibility score predictor (PVP) to estimate the visibility of each part.


We implement the PVP method via a tiny four-layer network, which consists of a global average pooling layer, a 1×1 convolutional layer, a BatchNorm layer, and a sigmoid activation layer. Given an input pose feature Fpose, the visibility scores are predicted as

$$\hat{v} = \sigma\left(\mathrm{BN}\left(\mathrm{Conv}_{1\times 1}\left(\mathrm{GAP}(F_{pose})\right)\right)\right), \qquad \hat{v} \in \mathbb{R}^{N_p}$$
At the testing stage, given a probe image Ip and a gallery image Ig, the visibility-aware distance between them is calculated as

$$d(I_p, I_g) = \frac{\sum_{i=1}^{N_p} \hat{v}^{i}_{p}\, \hat{v}^{i}_{g}\, d_i}{\sum_{i=1}^{N_p} \hat{v}^{i}_{p}\, \hat{v}^{i}_{g}}$$
where $d_i$ is the cosine distance between the i-th part features, and $\hat{v}^{i}_{p}$ and $\hat{v}^{i}_{g}$ denote the visibility scores of the i-th part of the probe image and the gallery image, respectively.

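A minimal PyTorch sketch of the PVP head and the visibility-weighted distance is given below; the pose-feature width, the module names, and the use of 1 − cosine similarity as the per-part distance are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseGuidedVisibilityPredictor(nn.Module):
    """Sketch of PVP: global average pooling -> 1x1 conv -> BatchNorm -> sigmoid."""

    def __init__(self, pose_dim=128, num_parts=6):
        super().__init__()
        self.conv = nn.Conv2d(pose_dim, num_parts, kernel_size=1)
        self.bn = nn.BatchNorm2d(num_parts)

    def forward(self, pose_feat):
        x = F.adaptive_avg_pool2d(pose_feat, 1)      # GAP, (B, pose_dim, 1, 1)
        x = torch.sigmoid(self.bn(self.conv(x)))     # (B, Np, 1, 1)
        return x.flatten(1)                          # visibility scores v_hat, (B, Np)


def visibility_weighted_distance(parts_p, parts_g, vis_p, vis_g):
    """Distance of a probe/gallery pair: per-part cosine distances aggregated
    with the product of the two visibility scores as weights."""
    # parts_*: (Np, C) part features; vis_*: (Np,) visibility scores
    d = 1.0 - F.cosine_similarity(parts_p, parts_g, dim=1)   # (Np,)
    w = vis_p * vis_g
    return (w * d).sum() / w.sum().clamp(min=1e-6)
```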


3.3. Pseudo-Label Estimation by Graph Matching

The ground-truth visibility label of each part is usually unavailable. This motivates us to seek a method that can automatically reveal the visible parts in a self-supervised way, without requiring manual occlusion annotations. For a given positive image pair Ip, Ig, we observe that (1) the relevance of a part pair is high only when the part is visible in both Ip and Ig, and (2) the relevance between the edges connecting two visible parts within each image is also highly correlated.
Based on these observations, instead of training $\hat{v}$ directly, we train the product of the part visibility scores of a positive pair to approximate their correspondence. Thus, we consider the pseudo-label generation process as a part feature correspondence problem, which can be solved by graph matching. For better understanding, Figure 3 illustrates how the pseudo-label between two input images is obtained.

Figure 3. Pseudo-label estimation via graph matching.


Specifically, for a given positive pair, we represent the two images via two graphs Gp = (Vp, Ep) and Gg = (Vg, Eg), where each node Vi corresponds to the part feature $f^i$ and each edge Ei,j to the edge feature $f^i - f^j$. In our task, only one-to-one matching between corresponding nodes of the two graphs is adopted. A binary indicator vector $v \in \{0, 1\}^{N_p}$ is employed to represent the correspondence between the parts of Gp and Gg, where vi is set to 1 if the i-th part pair is selected for matching and 0 otherwise. The affinity matrix M is constructed from the relational similarity values between nodes and edges, where the inner product is used to compute similarity. Specifically, we encode the compatibility of two corresponding nodes in the diagonal entry Mii as

$$M_{ii} = \left\langle f^{i}_{p},\, f^{i}_{g} \right\rangle$$
and encode the compatibility of the corresponding two edge features in the off-diagonal entry Mij as

$$M_{ij} = \frac{\left\langle f^{i}_{p} - f^{j}_{p},\; f^{i}_{g} - f^{j}_{g} \right\rangle}{\hat{M}_{ij}}, \qquad i \neq j$$
where $\hat{M}_{ij}$ is the moving average of $M_{ij}$.


Following the graph matching method in [21], we model graph matching as an Integer Quadratic Programming (IQP) problem and incorporate a regularization term on the number of activated nodes:

$$v^{*} = \arg\max_{v \in \{0,1\}^{N_p}} \; v^{\top} M\, v - \bar{\lambda} \sum_{i=1}^{N_p} v_i, \qquad \bar{\lambda} = \lambda\, \hat{M}_{diag} \tag{9}$$
where λ is a balancing parameter and $\hat{M}_{diag}$ is the moving average of the diagonal components of M. We set $\bar{\lambda}$ to be proportional to the moving average of the part similarities to make it more adaptive to the data as well as to narrow the scope of hyper-parameter selection. By optimizing Eq. (9), we obtain the optimal solution $v^{*}$, which indicates which part pairs are appropriate to match. It can then be taken as the supervision for optimizing the PVP.

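As a rough illustration of the pseudo-label estimation, the sketch below builds a simplified affinity matrix and solves the regularized selection problem by exhaustive search over binary indicator vectors, which is feasible because Np is small. The moving-average normalization of the edge terms and the exact form of λ̄ are simplified here and should be treated as assumptions.

```python
import itertools
import torch


def build_affinity(parts_p, parts_g):
    """Simplified affinity matrix M: node compatibilities on the diagonal and
    edge compatibilities off the diagonal, both measured by inner products
    (the moving-average normalization is omitted for brevity)."""
    num_parts = parts_p.size(0)
    M = torch.zeros(num_parts, num_parts)
    for i in range(num_parts):
        for j in range(num_parts):
            if i == j:
                M[i, j] = torch.dot(parts_p[i], parts_g[i])
            else:
                M[i, j] = torch.dot(parts_p[i] - parts_p[j], parts_g[i] - parts_g[j])
    return M


def solve_matching(M, lam=0.9):
    """Brute-force solution of the regularized selection over binary vectors.
    With Np = 6 parts there are only 2^6 = 64 candidates; lam * mean(diag(M))
    stands in for the adaptive threshold lambda_bar."""
    num_parts = M.size(0)
    lam_bar = lam * M.diag().mean()
    best_v, best_score = None, float('-inf')
    for bits in itertools.product([0.0, 1.0], repeat=num_parts):
        v = torch.tensor(bits)
        score = v @ M @ v - lam_bar * v.sum()
        if score > best_score:
            best_v, best_score = v, score
    return best_v    # pseudo-label v* used to supervise the PVP
```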


3.4. Loss Function

Three loss functions are employed to optimize the proposed method, including the visibility verification loss Lv for self-supervised visibility learning, the part-matching loss Lm for enhancing the relevance between corresponding parts, and the identity classification loss Lc for maintaining the discriminative power of each part feature. Therefore, the overall loss L can be formulated as,

$$\mathcal{L} = \mathcal{L}_{v} + \mathcal{L}_{m} + \mathcal{L}_{c}$$
Visibility Verification Loss Lv. We impose a binary cross-entropy loss on the PVP module in the training phase with the self-supervision signal $v^{*}$, which is obtained via the strategy described in Sec. 3.3. Specifically, the product of the part visibility scores of the input probe and gallery images Ip, Ig is trained to approximate the matching vector, which is formulated as

$$\mathcal{L}_{v} = -\sum_{i=1}^{N_p} \left[ v^{*}_{i} \log\left(\hat{v}^{i}_{p}\, \hat{v}^{i}_{g}\right) + \left(1 - v^{*}_{i}\right) \log\left(1 - \hat{v}^{i}_{p}\, \hat{v}^{i}_{g}\right) \right]$$
where $\hat{v}^{i}_{p}$ and $\hat{v}^{i}_{g}$ correspond to the i-th part visibility scores of the probe and gallery images, respectively.

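A small sketch of this loss in PyTorch, assuming per-image visibility vectors from the PVP and the binary pseudo-label from the matching step:

```python
import torch
import torch.nn.functional as F


def visibility_verification_loss(vis_p, vis_g, v_star):
    """Sketch of L_v: binary cross-entropy between the product of the probe and
    gallery visibility scores and the pseudo-label v* from graph matching."""
    # vis_p, vis_g: (Np,) predicted visibility scores; v_star: (Np,) binary pseudo-label
    joint = (vis_p * vis_g).clamp(1e-6, 1.0 - 1e-6)
    return F.binary_cross_entropy(joint, v_star)
```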

Part Matching Loss Lm. After obtaining the optimal correspondence $v^{*}$, continuing to optimize the matching quality function defined by M enhances the intra-part consistency. A part-based matching loss similar in form to Eq. (9) is employed here. Fixing v at the value $v^{*}$, the matching loss is formulated as

$$\mathcal{L}_{m} = -\, v^{*\top} M\, v^{*} + \lambda'^{\top} v^{*}$$
In this loss function, the first term enhances the intra-part consistency and the second term enforces the network to extract complementary features from different parts, where $\lambda' \in \mathbb{R}^{N_p}$ is defined as

$$\lambda'_{i} = \frac{1}{2(N_p - 1)} \sum_{j \neq i} \left( S_{p}^{ij} + S_{g}^{ij} \right)$$
with $S_p$ and $S_g$ corresponding to the inter-part feature similarity matrices of the probe and gallery images, respectively.


Classification Loss Lc. To introduce discriminative power into the proposed network, we adopt a classification loss as the objective function. Following the construction of RPP [24], we fix the pre-trained PCB classifiers to maintain the knowledge learned under the uniform partition. Then the classification loss can be formulated as,

$$\mathcal{L}_{c} = \sum_{i=1}^{N_p} \mathrm{CE}\left(\hat{y}_{i},\, y\right)$$
where CE is the cross-entropy loss, $\hat{y}_{i}$ is the prediction of the i-th part classifier, and y is the ground-truth ID.

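A brief sketch of the classification loss and the assembly of the overall objective follows; the list-of-logits interface for the per-part classifiers is an illustrative assumption.

```python
import torch
import torch.nn.functional as F


def classification_loss(part_logits, labels):
    """Sketch of L_c: cross-entropy summed over the per-part classifiers
    (kept frozen from the PCB pre-training in the described setting)."""
    # part_logits: list of (B, num_ids) prediction tensors, one per body part
    return sum(F.cross_entropy(logits, labels) for logits in part_logits)


def total_loss(loss_v, loss_m, loss_c):
    """The overall objective simply sums the visibility, matching, and classification terms."""
    return loss_v + loss_m + loss_c
```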


4. Experiments

4.1. Datasets and Settings

Datasets. For experimental evaluation, we conduct experiments on two small-scale and one large-scale ReID benchmarks, including Occluded-REID [36], Partial-REID [31], and the large P-DukeMTMC-reID [36] dataset. Each reported occluded dataset is partitioned into two parts: occluded person images and full-body person images. For model pre-training, we train the networks on the Market-1501 [30] dataset.


1) Occluded-REID [36] images are captured by mobile camera equipment on campus and include 2,000 annotated images belonging to 200 identities. In this dataset, each person has 5 full-body person images and 5 occluded person images with various occlusions.
2) Partial-REID [31] includes 900 images of 60 pedestrians. Each person has 5 full-body person images, 5 occluded person images, and 5 partial person images manually cropped from the occluded ones. In this work, we only use the full-body and occluded person images for evaluation.
3) P-DukeMTMC-reID [36] is a modified version of the DukeMTMC-reID dataset [32]. There are 12,927 images (665 identities) in the training set, 2,163 images (634 identities) for querying, and 9,053 images in the gallery set.
4) Market-1501 [30] contains 32,668 labelled images of 1,501 identities observed from 6 cameras. Its training set, with 12,936 images of 751 identities, is used for model pre-training only.


Evaluation Protocols. We report the Cumulated Matching Characteristics (CMC) [5] and mean Average Precision (mAP) [30] values for the proposed approach. The evaluation package is provided by [33], and all experimental results are obtained under the single-query setting.
Implementation Details. We take all occluded person images as the probe set and all full-body person images as the gallery set on the three reported datasets. Specifically, for the Occluded-REID [36] and Partial-REID [31] datasets, due to the absence of a prescribed training/test split, all images are adopted for testing. All training images are resized to 384 × 128, and we employ a ResNet-50 [6] pre-trained with the same settings as PCB [24] to extract appearance features. These features are then fed into a pose-guided attention pooling operation which generates Np part features, where Np is set to 6 by default. For pose estimation, we adopt the OpenPose [2] method pre-trained on the COCO dataset [13], which generates 18 key-point heatmaps K and 38 part affinity fields Lp. The proposed PVP and PGA modules are trained with the SGD optimizer at a learning rate of 0.002. The training batch size, the number of training epochs, and the coefficient λ are set to 32, 30, and 0.9, respectively. The code is implemented with PyTorch under an NVIDIA 1080Ti GPU environment.

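For reference, a minimal sketch of the reported training configuration is shown below; the model here is only a placeholder for the PGA/PVP modules on top of the pre-trained PCB backbone, and the SGD momentum value is an assumption not stated in the text.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the PGA/PVP heads on the PCB backbone.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                      nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(),
                      nn.Linear(64, 6))

optimizer = torch.optim.SGD(model.parameters(), lr=0.002, momentum=0.9)  # momentum assumed

num_epochs = 30            # training epochs
batch_size = 32            # images per batch
input_size = (384, 128)    # height x width of resized training images
lam = 0.9                  # coefficient lambda of the matching regularizer
```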


4.2. Performance under Transfer Setting

Performance comparison under the transfer setting is conducted by directly utilizing the model trained on Market-1501 [30] without any further optimization.


Comparison with Holistic Methods. The performance comparison with holistic methods is shown in the first group of Table 2. Among these methods, HACNN [10] introduces an appearance-based attention mechanism into model training. Compared to the Part Bilinear [22] method, which utilizes pose information to improve re-identification performance, the PCB(+RPP) [24] method proposes a refined part pooling strategy. The '+Aug' corresponds to the result of training the PVPM model with images augmented with random occlusions to solve the data imbalance problem in the occluded training set. From the table, we can observe that the proposed method outperforms those holistic approaches by a large margin, with the rank-1 accuracy surpassing the second-best holistic method by around 10% on all three reported benchmarks. This result validates that 1) it is essential to design a framework specifically for the occluded ReID task, and 2) matching with the visible parts performs better than using all parts.

Table 2. Performance comparison with holistic and occluded methods on the three reported datasets. The first/second best results are shown in red and blue.

Comparison with Occluded Methods. We show the performance comparison with two specifically designed occluded ReID approaches in the second group of Table 2. Teacher-S [37] proposes to train networks to learn a global feature with two auxiliary tasks, which makes the networks pay more attention to person body parts. PGFA [16] proposes a hard part matching method via a fixed region selection strategy and a hand-crafted part visibility judgement method. Compared to these two approaches, our PVPM model achieves 70.4%, 78.3%, and 51.5% at rank-1 on the Occluded-REID [36], Partial-REID [31], and P-DukeMTMC-reID [36] datasets, outperforming them by a large margin. This large performance improvement may come from three aspects: 1) part matching works better than global feature learning for the occluded ReID task; 2) a trainable part visibility prediction model benefits more than a hand-crafted strategy; 3) training high-level pose features provides better guidance for person retrieval than simply fusing features with pose key-point heatmaps.


Comparison with Partial Methods. Compared to the occluded ReID task, partial ReID aims to solve the matching problem with images manually cropped via a bounding box from the original images. This may result in image distortion and misalignment, and the occlusions still cannot be totally removed, therefore increasing the matching difficulty. In this section, four partial ReID methods are compared with the proposed PVPM in Table 3 on the Partial-REID dataset [31], listing the rank-1 and rank-3 matching rates. We also indicate whether each method matches persons against images with occlusions manually removed or against the original pictures. As can be seen, compared to those partial ReID methods, our PVPM+Aug model reaches 78.3% at rank-1, outperforming the second-best VPM [23] approach by 10.6%. Note that our PVPM approach does not require pre-processing the images at test time, which shows better practicability in real-world scenes.

Table 3. Comparison with partial ReID methods on the Partial-REID dataset. 'Manually cropped' indicates whether the method matches against the original occluded images or against images with occlusions manually removed.


4.3. Performance under Supervised Setting

For the large-scale dataset P-DukeMTMC-reID [36], we further run experiments to evaluate the performance when optimizing the model with the target training set. The results of two methods, IDE [30] as well as our part-based baseline method PCB [24], are shown in Table 4. As can be observed, our PVPM method achieves 85.1% at rank-1, which surpasses the baseline method by 5.7%. This further illustrates the superiority of our model under the supervised setting for occluded person ReID.

Table 4. Performance on the P-DukeMTMC-reID dataset under the supervised setting.


4.4. Algorithm Analysis

In this subsection, we conduct experiments to thoroughly verify the effectiveness of the components of the proposed method: the Pose-Guided Attention (PGA) mechanism, the Pose-guided Visibility Prediction (PVP) model, the part matching loss Lm, the graph matching model, and the training samples augmented with randomly generated occlusions. The experimental results on the three reported benchmarks are shown in Table 5. 'Baseline' is the result of directly employing the PCB [24] model. 'PGA only' means that we only use the pose-guided part features without further employing part visibility computation. 'PVP only' corresponds to assigning each uniform part feature a visibility score without the soft pose-guided attention mask. '-Lm' is the result of removing the part matching loss from the whole loss function. '-thre' is the result of inferring pair visibility by thresholding part similarity. '+Aug' indicates that we augment the training samples by randomly replacing a region in the image with a background patch, which is motivated by [36].

Table 5. Performance comparison of different component settings.

From Table 5, we can observe that employing the PGA block achieves better performance. This suggests that the utilization of pose-guided attention does benefit the occluded re-identification task. Comparing the results of 'PVP only' and 'Baseline', it can be easily seen that computing a distance weighted by the part visibility scores improves the rank-1 performance by 5.9%, 7.7%, and 3.2% on the three reported datasets. Note that our graph matching method outperforms the thresholding method, which demonstrates the advantage of our model in considering body part-to-part correlations while inferring their correspondence. What is more, when we remove Lm from the entire loss function, the rank-1 accuracy drops by around 1-2%, validating its effectiveness. Besides, the result of the '+Aug' operation demonstrates that the augmented occluded training samples contribute to the performance gain.



4.5. Analysis of Pose Cues

Compared with appearance cues, pose-guided cues can sometimes provide more reliable information in occluded situations. To validate the advantage of utilizing human pose information for part region generation and part visibility score prediction, we compare our PVPM model with an appearance-based part refinement method, RPP [24]. The quantitative results are shown in Table 6. As can be seen, the appearance-based RPP [24] method does not achieve a performance boost on the occluded datasets compared with the baseline method PCB [24] (in Table 5). Furthermore, we train a model with the same settings as our self-supervised framework but replace the PGA and PVP blocks with two appearance-based modules, RPP [24] and a part visibility score predictor (VSP), respectively. This strategy is denoted as 'R+S'. The further employment of the VSP method makes the performance drop even more. For better viewing, we show the visualization results in Figure 4, including both the part maps and the predicted part visibility scores. The visualized part maps show that the pose-guided attention masks focus more on the regions that are not occluded. Therefore, we can deduce that, compared with pose cues, appearance cues cannot offer enough insight, especially when facing new obstructions.

Figure 4. Visualization of part maps and visibility scores generated from different cues. The number below each image indicates the predicted visibility score of that part. The rows show the part maps generated by our PVPM model, by the appearance-based refined part model (RPP [24]), and by the combination of RPP and a visibility score predictor (VSP), respectively.
Table 6. Comparison of part maps and visibility scores generated from different types of cues: appearance-based or pose-guided. PVPM is the proposed pose-guided method, 'RPP' refines the part maps from the uniform partition in [24], and 'R+S' denotes the result when we further employ an appearance-based visibility predictor together with 'RPP'.


4.6. Parameter Analysis

The Impact of the Regularization Coefficient λ. λ is the regularization coefficient of Eq. (9). A small λ weakens the discriminative ability of the visibility predictor, leading to all parts being regarded as visible, while a large λ may mislead the visibility predictor into taking some local regions as unobservable. In this section, we compare the performance under different settings of λ, varying from 0.6 to 1. The rank-1 accuracy and mAP variations are shown in Figure 5. As can be seen, the performance reaches its peak at 0.9 and drops slightly as λ increases to 1. This performance trend validates our expectation of the coefficient λ.

Figure 5. Rank-1 accuracy and mAP under different settings of λ. The red, green, and blue lines correspond to the results on the Partial-REID, Occluded-REID, and P-DukeMTMC-reID datasets, respectively.

The Impact of the Part Number Np. Np determines the granularity of the part features. We conduct several experiments with Np ranging from 2 to 8 and show the results in Figure 6, including the rank-1 matching rate and the mAP value. As can be seen, as Np increases, the performance keeps improving at first and reaches its peak when Np reaches 4. However, the performance starts to drop as the part number continues to grow to 8. We suggest that this phenomenon may be caused by an overly large Np, which makes the small parts similar to each other and decreases the discriminative ability of our model.

Figure 6. Comparison of rank-1 accuracy and mAP under different settings of the part number Np.


5. Conclusion

In this paper, we propose a novel Pose-guided Visible Part Matching (PVPM) algorithm for the occluded ReID task. The proposed PVPM jointly considers discriminative pose-guided attention and part visibility in a unified framework. Unlike most existing methods, which utilize visibility cues from other data sources directly, we explore the part correspondence on the target data and self-mine the visibility scores via graph matching. A self-learning method is introduced for pseudo-label generation to optimize the visibility predictor without data bias. Extensive experimental results on the three reported occluded datasets demonstrate the superiority of the proposed model for the occluded person ReID task.

