Image Segmentation (2)

Introduction

1. Face Detection

A feature template contains two kinds of rectangles, white and black, and the template's feature value is defined as the sum of the pixels inside the white rectangles minus the sum of the pixels inside the black rectangles.
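This subtraction of rectangle sums is exactly what the integral image makes cheap: any rectangle sum costs four array lookups. A minimal sketch (toy 4x4 image and illustrative rectangle coordinates, not a real detector window):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] is the sum of all pixels of img above and to the left of
    (y, x), exclusive; a zero border makes the lookup formula uniform."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x),
    computed with 4 lookups in the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Horizontal two-rectangle Haar-like feature:
    white (left half) pixel sum minus black (right half) pixel sum."""
    half = w // 2
    white = rect_sum(ii, y, x, h, half)
    black = rect_sum(ii, y, x + half, h, half)
    return white - black

img = np.arange(16).reshape(4, 4)        # toy 4x4 "image"
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # left-half sum minus right-half sum: -16
```

The same four-lookup trick is what lets a detector evaluate tens of thousands of Haar-like features per window at acceptable cost.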

Haar-like features + cascade classifier

Haar-like Features

Haar-like Feature Templates

Haar-like templates can capture certain facial features. For example, the middle template expresses that the eye region is darker than the cheek region; the right one expresses that the two sides of the nose bridge are darker than the bridge itself.

The Number of Haar-like Features

The Haar Cascade Classifier

Illustration of a Boosting Classifier

Weak and Strong Classifiers

o The only requirement on a weak learner is that it distinguishes face from non-face images with an error rate slightly below 50%, i.e., slightly better than random guessing.
o Training a weak classifier means finding, under the current weight distribution, the optimal threshold for its feature f, so that the weak classifier's weighted classification error over all training samples is minimized.
o Finally, the best weak classifier from each round is combined with the others and boosted into a strong classifier.
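The second bullet — picking the error-minimizing threshold for one feature f under the current sample weights — can be sketched as an exhaustive decision-stump search. The feature values, labels, and weights below are invented toy data:

```python
import numpy as np

def best_stump(feature_vals, labels, weights):
    """Find the (threshold, polarity) pair minimizing the weighted
    classification error for one scalar feature.
    labels are +1 (face) / -1 (non-face); weights sum to 1."""
    order = np.argsort(feature_vals)
    f, y, w = feature_vals[order], labels[order], weights[order]
    best = (None, None, np.inf)            # (threshold, polarity, error)
    # Candidate thresholds: midpoints between consecutive sorted values,
    # plus one below the minimum and one above the maximum.
    thresholds = np.concatenate(([f[0] - 1], (f[:-1] + f[1:]) / 2, [f[-1] + 1]))
    for t in thresholds:
        for polarity in (+1, -1):
            pred = np.where(polarity * (f - t) > 0, 1, -1)
            err = np.sum(w[pred != y])     # weighted error under current weights
            if err < best[2]:
                best = (t, polarity, err)
    return best

vals = np.array([0.1, 0.4, 0.35, 0.8])
labels = np.array([-1, 1, -1, 1])
weights = np.full(4, 0.25)                 # uniform weights, as in AdaBoost round 1
t, p, e = best_stump(vals, labels, weights)
print(t, p, e)                             # 0.375 1 0.0
```

In AdaBoost this search is repeated each round after re-weighting, so later stumps concentrate on the samples earlier stumps got wrong.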

Detection Mechanism of the Cascade Classifier

o Each strong classifier in the cascade is tuned to be highly sensitive to "non-faces" (negative samples), so that any window a strong classifier rejects is almost certainly not a face. Only windows that pass every strong classifier are declared "faces".
o Most candidate regions in an image are negatives; only windows classified as positive are passed on to the next strong classifier for further checking. The cascade therefore discards most negatives in its cheap early stages instead of running the full pipeline on them, which is why it is so fast.
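The early-rejection logic can be sketched with a toy cascade. The stage score functions and thresholds below are entirely invented (the `contrast` / `symmetry` keys are hypothetical stand-ins for real stage scores built from Haar-like features):

```python
def cascade_classify(window, stages):
    """Pass `window` through each stage in order; a stage is a
    (score_fn, threshold) pair.  Reject the moment one stage's score
    falls below its threshold - only windows that survive every stage
    are declared faces."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False          # early rejection: most windows stop here
    return True                   # passed all stages -> "face"

stages = [
    (lambda w: w["contrast"], 0.2),   # cheap first stage, rejects most negatives
    (lambda w: w["symmetry"], 0.5),   # stricter later stage
]
windows = [
    {"contrast": 0.1, "symmetry": 0.9},   # rejected by stage 1
    {"contrast": 0.6, "symmetry": 0.3},   # rejected by stage 2
    {"contrast": 0.7, "symmetry": 0.8},   # passes both stages
]
faces = [w for w in windows if cascade_classify(w, stages)]
print(len(faces))  # 1
```

Because the first stage is cheap and rejects most windows outright, the average per-window cost stays close to the cost of that first stage alone.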

2. Pedestrian Detection

Gradients

3. HOG + SVM

HOG

HOG: Contrast Normalization

Steps of HOG
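The slide content for the steps is not reproduced here, so as a sketch: assuming the standard Dalal-Triggs parameters (8x8-pixel cells, 9 unsigned orientation bins, L2-normalized 2x2-cell blocks), the gradient / binning / contrast-normalization pipeline looks like this:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell orientation histograms (unsigned gradients, 0-180 deg),
    with each pixel voting into its bin weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180        # unsigned orientation
    cy, cx = img.shape[0] // cell, img.shape[1] // cell
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    return hist

def normalize_blocks(hist, eps=1e-6):
    """Contrast normalization: L2-normalise overlapping 2x2-cell blocks
    and concatenate them into the final descriptor."""
    cy, cx, _ = hist.shape
    blocks = []
    for i in range(cy - 1):
        for j in range(cx - 1):
            v = hist[i:i+2, j:j+2].ravel()
            blocks.append(v / np.sqrt(np.sum(v**2) + eps**2))
    return np.concatenate(blocks)

img = np.tile(np.arange(32), (32, 1))     # toy 32x32 horizontal ramp
desc = normalize_blocks(hog_cells(img))
print(desc.shape)                         # 3*3 blocks * 4 cells * 9 bins = (324,)
```

Block normalization is what gives HOG its robustness to local illumination and contrast changes: each histogram is rescaled relative to its neighbourhood before being concatenated.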

The Basic SVM Model

Slack Variables

Mapping to a Higher-Dimensional Space
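A minimal sketch of the soft-margin linear SVM, where the slack variables appear as hinge-loss terms, trained by plain subgradient descent on invented toy data (a production detector would use a proper solver such as liblinear):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimise  lam/2*||w||^2 + (1/n) * sum(max(0, 1 - y_i*(w.x_i + b)))
    by subgradient descent; the hinge terms play the role of the slack
    variables xi_i in the primal formulation."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1               # margin violators: samples with xi_i > 0
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable 2-D points (stand-ins for HOG feature vectors).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))   # matches y
```

For non-linearly-separable data, the kernel trick replaces the inner products above with a kernel function, which is the "mapping to a higher-dimensional space" this heading refers to.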

Separating Pedestrian vs. Non-Pedestrian HOG Features with an SVM

4. DPM

The DPM Detection Pipeline

o For any input image, extract its DPM feature map; then upsample the original image by one level of the Gaussian pyramid (to 2x size) and extract the DPM feature map of the upsampled image as well.
o Convolve the original image's DPM feature map with the trained root filter to obtain the root filter response map.
o Convolve the 2x image's DPM feature map with the trained part filters to obtain the part filter response maps.
o Then downsample the fine-level part filter response maps through the Gaussian pyramid, so that the root filter response map and the part filter response maps have the same resolution.
o Finally, take a weighted average of them to obtain the final response map; brighter pixels indicate larger response values.
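The flow above can be sketched numerically with invented filters and feature maps. Note that a real DPM also scores part displacements with deformation costs via a generalized distance transform, which this toy version omits:

```python
import numpy as np

def xcorr2(feat, filt):
    """Valid-mode 2-D cross-correlation (the 'convolution' used to
    compute filter responses in DPM)."""
    fh, fw = filt.shape
    H, W = feat.shape
    out = np.empty((H - fh + 1, W - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat[i:i+fh, j:j+fw] * filt)
    return out

rng = np.random.default_rng(0)
coarse = rng.standard_normal((8, 8))       # feature map of the original image
fine = rng.standard_normal((16, 16))       # feature map of the 2x upsampled image
root_filter = rng.standard_normal((3, 3))  # invented, stands in for a trained filter
part_filter = rng.standard_normal((3, 3))

root_resp = xcorr2(coarse, root_filter)    # response at the coarse level: 6x6
part_resp = xcorr2(fine, part_filter)      # response at the fine (2x) level: 14x14
# Downsample the fine response to the root resolution and crop to match.
part_down = part_resp[::2, ::2][:root_resp.shape[0], :root_resp.shape[1]]
final = 0.5 * root_resp + 0.5 * part_down  # weighted average of the two responses
print(final.shape)                         # (6, 6)
```

Computing part responses at twice the root resolution is what lets DPM localize parts more finely than the root template alone.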
