Digital Image Processing Assignment 1.1: Face Skin Segmentation Based on Color Space

Disclaimer: the images are from the internet and are used for academic research and study only. If there is any copyright claim, please leave a message and this post will be deleted immediately.

First, read in the image. Any reading method works: PIL's Image, reading the file directly into numpy, or OpenCV. Since the color-space conversion needed for the segmentation below is most convenient in OpenCV, I use cv2.
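A minimal sketch of the read step, assuming the file name face.jpg as a placeholder (the original post does not give the path):

```python
import cv2

# cv2.imread returns the image as a numpy array in BGR channel order;
# "face.jpg" is a placeholder path, not from the original post
img = cv2.imread("face.jpg")
if img is None:
    raise FileNotFoundError("face.jpg not found")
```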

 

The assignment requires converting the image from the RGB color space to YCrCb and segmenting there, so I simply use OpenCV's built-in conversion (writing out the matrix multiplication by hand would also work). In the code above, the conversion is done right after reading the file:
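The conversion itself is a single call; note that cv2.imread returns BGR order, so the BGR2YCrCb flag is the right one:

```python
# Continuing from the read sketch above: convert BGR to YCrCb
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
```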

 

In YCrCb, Y is the luma component, i.e., the grayscale value, while Cr and Cb together carry the chroma and saturation that specify the image's color: Cr encodes the difference between the red part of the RGB input signal and the luma, and Cb encodes the difference between the blue part of the RGB input signal and the luma.
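For reference, the full-range BT.601 formulas that OpenCV applies in its 8-bit RGB-to-YCrCb conversion (taken from the OpenCV documentation, not from the original post) are:

Y  = 0.299·R + 0.587·G + 0.114·B
Cr = (R − Y)·0.713 + 128
Cb = (B − Y)·0.564 + 128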
The skin segmentation steps are as follows (a sketch of the whole pipeline is given after the list):
1. Convert the RGB image to the YCrCb space and extract the Cr component.
2. Apply Gaussian filtering to the Cr component.
3. Binarize the filtered Cr component with automatic threshold selection using the OTSU method.
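A self-contained sketch of the three steps, assuming a placeholder input path and a 5x5 Gaussian kernel (the post does not state the kernel size):

```python
import cv2

img = cv2.imread("face.jpg")                      # placeholder path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Step 1: extract the Cr plane (OpenCV channel order is Y, Cr, Cb)
_, cr, _ = cv2.split(ycrcb)

# Step 2: Gaussian filtering; the 5x5 kernel is an assumed choice
cr_blur = cv2.GaussianBlur(cr, (5, 5), 0)

# Step 3: OTSU binarization; the threshold argument 0 is ignored
# because OTSU picks the threshold automatically
_, skin_mask = cv2.threshold(
    cr_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```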

 

The image above shows the Cr component after Gaussian filtering.

 

The image above shows the Gaussian-filtered image binarized with OTSU. Next, consider the adaptive Gaussian thresholding method applied to the original Cr component, without any Gaussian pre-filtering:
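A sketch of the adaptive step; the blockSize of 11 and constant C of 2 are assumed parameter values, not values stated in the post:

```python
import cv2

img = cv2.imread("face.jpg")                      # placeholder path
cr = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb))[1]

# Each pixel is compared against a Gaussian-weighted mean of its
# 11x11 neighbourhood minus the constant C=2 (assumed parameters)
skin_adaptive = cv2.adaptiveThreshold(
    cr, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
```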

 

Finally, the result of binarizing the Cr and Cb components against a statistically (empirically) determined skin-color range:
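One common way to realize this is cv2.inRange with the frequently cited empirical skin bounds 133 ≤ Cr ≤ 173 and 77 ≤ Cb ≤ 127; these bounds are an assumption on my part, not values stated in the post:

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")                      # placeholder path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Y is left unconstrained; the Cr/Cb bounds below are a commonly
# cited empirical skin range, assumed rather than taken from the post
lower = np.array([0, 133, 77], dtype=np.uint8)    # min Y, Cr, Cb
upper = np.array([255, 173, 127], dtype=np.uint8) # max Y, Cr, Cb
skin_range = cv2.inRange(ycrcb, lower, upper)
```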

 

Reposted from: https://www.cnblogs.com/NWNU-LHY/p/11458605.html
