How hard can it be? Estimating the difficulty of visual search in an image

Abstract

The authors define the search difficulty of an image as the response time a human needs to decide whether a given target is present in it. They also analyze how image properties such as image size, file size, edge strengths, and the number of superpixels affect visual search difficulty. A convolutional neural network combined with a regression model is used to predict search difficulty; compared against the ground truth, the model performs well, correctly ranking 75% of image pairs (as explained later, this figure depends on the evaluation measure used). The model also generalizes well to new classes that do not appear in the training set. In the experiments, the predicted difficulty is applied to weakly supervised object localization (an 8% performance gain) and to semi-supervised object classification (a 1% gain).

2. How humans quantify object search difficulty in an image

The paper defines object search difficulty as the response time a human needs to determine whether a target object is present in an image. This response time depends on several factors: the number of distractor objects unrelated to the target, the number of target instances, target size, target position, the target's class label, and the context in which the target appears. Based on these factors, the authors chose the PASCAL VOC 2012 dataset (20 classes, with bounding boxes annotated for the objects in every image). The following subsections describe how the human response times were collected.

2.1 Measuring response time:
1. Ask the annotator whether a certain class is present in the image
2. Show the image to the annotator
3. Record the time from when the image is displayed until the annotator answers "Yes" or "No"

This time is taken as the image's search difficulty value. The images were labeled by 736 annotators, all with an answer accuracy above 90%.

Post-processing and data cleaning:
Each image received six labels from different annotators. Response times above 20 s were discarded, and the remaining times were normalized by subtracting the mean and dividing by the standard deviation. Annotators who produced fewer than 3 labels were removed entirely, since their labels are not representative; annotators with fewer than 10 labels and an average response time above 10 s were also removed. After this, some images may have fewer than six labels and may still contain incorrect answers; since every annotator's accuracy is above 90%, an incorrect answer suggests the image really is hard to search, so wrong answers are penalized by increasing the difficulty score. After removing outliers, the difficulty score of each image is the geometric mean of its remaining response times (the geometric mean is never larger than the arithmetic mean).
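As a rough sketch of this cleaning-and-scoring step (the 20 s cutoff is from the text; the input format and everything else is illustrative, not the authors' exact pipeline):

```python
import numpy as np

def difficulty_score(times, cutoff=20.0):
    """Geometric mean of an image's response times after dropping outliers."""
    kept = np.asarray([t for t in times if t <= cutoff], dtype=float)
    if kept.size == 0:
        return None  # no usable annotations remain for this image
    # The geometric mean is never larger than the arithmetic mean,
    # so a single slow (but kept) response dominates the score less.
    return float(np.exp(np.log(kept).mean()))
```

For example, `difficulty_score([2.0, 8.0, 25.0])` drops the 25 s outlier and returns 4.0, the geometric mean of 2 and 8 (their arithmetic mean would be 5.0).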

Evaluating annotator consistency:
56 images and 58 trusted annotators (labeling accuracy above 90%) were selected. Let X denote the labels of one annotator over the N = 56 images, and Y the mean labels over all annotators, with the i-th values (1 ≤ i ≤ N) written Xi and Yi. Pairing corresponding elements gives a set XY of pairs (Xi, Yi) (1 ≤ i ≤ N). Two elements (Xi, Yi) and (Xj, Yj) are concordant when they rank the same way, i.e. in case 1 (Xi > Xj and Yi > Yj) or case 2 (Xi < Xj and Yi < Yj). They are discordant in case 3 (Xi > Xj and Yi < Yj) or case 4 (Xi < Xj and Yi > Yj). In case 5 (Xi = Xj) or case 6 (Yi = Yj) they are neither concordant nor discordant. In the formula below, C is the number of image pairs falling under cases 1-2 and D the number under cases 3-4. Kendall's τ measures the consistency between the two variables.

Kendall τ = (C − D) / (N(N − 1) / 2)

The mean Kendall τ of 0.562 in the table below indicates that, comparing each annotator's labels against the mean labels over all annotators, roughly 80% of image pairs are ranked consistently. The authors report this statistic to show that the annotators' labels are reliable.

|           | Mean          | Minimum | Maximum |
| --------- | ------------- | ------- | ------- |
| Kendall τ | 0.562 ± 0.127 | 0.182   | 0.818   |
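The statistic above can be computed by counting concordant and discordant pairs directly, a minimal sketch (`scipy.stats.kendalltau` offers tie-corrected variants):

```python
def kendall_tau(x, y):
    """Kendall tau = (C - D) / (N(N-1)/2), with tied pairs counted in neither C nor D."""
    n = len(x)
    c = d = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                c += 1   # concordant: cases 1 and 2
            elif s < 0:
                d += 1   # discordant: cases 3 and 4
            # s == 0: tied (cases 5 and 6)
    return (c - d) / (n * (n - 1) / 2)
```

With no ties, the fraction of concordant pairs is (1 + τ)/2, so τ = 0.562 corresponds to roughly 78-80% of image pairs being ranked consistently.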
2.2 Evaluating the factors that make up image search difficulty

Given the PASCAL VOC 2012 dataset (20 classes, with object bounding boxes, viewpoint annotations, and the human-annotated search difficulty scores), the authors evaluated the correlation between the image properties listed below and search difficulty. For the last row, all properties were combined to train a Support Vector Regression (ν-SVR)¹ model; its output agrees best with the ground truth, i.e. achieves the highest Kendall τ.

| Image property                          | Kendall τ |
| --------------------------------------- | --------- |
| (i) number of objects                   | 0.32      |
| (ii) mean area covered by objects       | −0.28     |
| (iii) non-centeredness                  | 0.29      |
| (iv) number of different classes        | 0.33      |
| (v) number of truncated objects         | 0.22      |
| (vi) number of occluded objects         | 0.26      |
| (vii) number of difficult objects       | 0.20      |
| (viii) combine (i) to (vii) with ν-SVR  | 0.36      |
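A sketch of that last row, combining the hand-designed properties with ν-SVR. Here scikit-learn's `NuSVR` stands in for the kernel regressor, and the data is synthetic, not from the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.random((100, 7))                        # properties (i)-(vii) per image
y = 3.0 + X[:, 0] + rng.normal(0, 0.1, 100)     # stand-in difficulty scores

# Standardize the heterogeneous properties, then fit nu-SVR with an RBF kernel.
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, kernel="rbf"))
model.fit(X, y)
pred = model.predict(X)
```

The pipeline's `StandardScaler` matters because the seven properties live on very different scales (counts vs. normalized areas).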
2.3 Evaluating search difficulty per class label

The authors computed the average search difficulty score per class and compared it against the mAP of the best-performing deep convolutional network at the time², showing that humans and machines do not disagree much about which classes are hard.

| Class     | Score | mAP   | Class        | Score | mAP   |
| --------- | ----- | ----- | ------------ | ----- | ----- |
| bird      | 3.081 | 92.5% | bicycle      | 3.414 | 90.4% |
| cat       | 3.133 | 91.9% | boat         | 3.441 | 89.6% |
| aeroplane | 3.155 | 95.3% | car          | 3.463 | 91.5% |
| dog       | 3.208 | 89.7% | bus          | 3.504 | 81.9% |
| horse     | 3.244 | 92.2% | sofa         | 3.542 | 68.0% |
| sheep     | 3.245 | 82.9% | bottle       | 3.550 | 54.4% |
| cow       | 3.282 | 76.3% | tv monitor   | 3.570 | 74.4% |
| motorbike | 3.355 | 86.9% | dining table | 3.571 | 74.9% |
| train     | 3.360 | 95.5% | chair        | 3.583 | 64.1% |
| person    | 3.398 | 95.2% | potted plant | 3.641 | 60.7% |

3. Modeling and predicting image search difficulty

3.1 Regression models

The authors use two deep networks³, VGG-f⁴ and VGG-verydeep-16⁵, to extract features, then fit the ground-truth difficulty scores with Support Vector Regression (ν-SVR)⁶ or Kernel Ridge Regression (KRR)⁷.
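A sketch of this regression stage, assuming the CNN features have already been extracted; the random matrices below stand in for 4096-dimensional VGG activations and the human difficulty scores, and the kernel choices are illustrative:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import NuSVR

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 4096))        # placeholder for VGG-f / VGG-vd features
scores = rng.uniform(2.5, 4.5, size=200)    # placeholder for human difficulty scores

# Either regressor maps a CNN feature vector to a scalar difficulty score.
svr = NuSVR(kernel="rbf").fit(feats, scores)
krr = KernelRidge(kernel="rbf", alpha=1.0).fit(feats, scores)

svr_pred = svr.predict(feats[:5])
krr_pred = krr.predict(feats[:5])
```

In practice one would extract the features once, cache them, and cross-validate the regressor's hyperparameters.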

3.2 Baselines
3.3 Experimental analysis


| Model                                     | MSE   | Kendall τ |
| ----------------------------------------- | ----- | --------- |
| Random scores                             | 0.458 | 0.002     |
| Image area                                | –     | 0.052     |
| Image file size                           | –     | 0.106     |
| Objectness [1, 2]                         | –     | 0.238     |
| Edge strengths [13]                       | –     | 0.240     |
| Number of segments [16]                   | –     | 0.271     |
| Combination with ν-SVR                    | 0.264 | 0.299     |
| VGG-f + KRR                               | 0.259 | 0.345     |
| VGG-f + ν-SVR                             | 0.236 | 0.440     |
| VGG-f + pyramid + ν-SVR                   | 0.234 | 0.458     |
| VGG-f + pyramid + flip + ν-SVR            | 0.233 | 0.459     |
| VGG-vd + ν-SVR                            | 0.235 | 0.442     |
| VGG-vd + pyramid + ν-SVR                  | 0.232 | 0.467     |
| VGG-vd + pyramid + flip + ν-SVR           | 0.231 | 0.468     |
| VGG-f + VGG-vd + pyramid + flip + ν-SVR   | 0.231 | 0.472     |

NOTE: The Objectness baseline samples many windows in an image, scores each window with an objectness measure, and sums all window scores to obtain the image's search difficulty score.
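That baseline reduces to a one-line aggregation once a per-window objectness measure is available (the measure itself comes from the paper's references [1, 2]; the scoring function here is a stand-in):

```python
def image_difficulty_from_objectness(windows, objectness_fn):
    """Sum per-window objectness scores into one per-image difficulty proxy."""
    return sum(objectness_fn(w) for w in windows)
```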

Conclusions:

  1. Mid-level features (Objectness, Edge strengths, Number of segments) correlate more strongly with image search difficulty than low-level ones (Image area, Image file size).
  2. Scores predicted with the deep networks achieve a higher Kendall τ against the ground truth than every baseline.
  3. The final combination of the two deep networks, spatial pyramids, horizontal flips, and ν-SVR reaches a Kendall τ of 0.472 (last row of the table), i.e. about 75% of image pairs are ranked correctly, approaching the annotators' mean Kendall τ of 0.562 (about 80% of pairs ranked consistently).
  4. Cross-class prediction reaches a Kendall τ of 0.427, close to the within-class value of 0.472; the small gap shows that the features generalize robustly.

4. Applications

4.1 Weakly supervised object localization

Main idea: unlike standard MIL, this experiment first estimates each image's search difficulty score, trains the SVM on the easier images, and gradually adds harder ones; the authors call this scheme Easy-to-Hard MIL. (Weak supervision means the object locations within the images are unknown.)

Experimental pipeline: the VGG-f + VGG-vd + pyramid + flip + ν-SVR model estimates each image's search difficulty score, and the dataset is split by score into 3 batches, with three MIL iterations per batch. Object proposals are first extracted from each image; then, on PASCAL VOC 2012, features from the second-to-last layer of a DCNN (pre-trained on ILSVRC) are used to train a linear SVM.
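The easy-to-hard schedule described above can be sketched as follows; `train_mil_round` stands in for one standard MIL alternation (re-localize, retrain the SVM), and the batch and iteration counts follow the text:

```python
import numpy as np

def easy_to_hard_mil(images, difficulty, k=3, iters_per_batch=3, train_mil_round=None):
    """Enlarge the training pool from easy to hard images, k batches total."""
    order = np.argsort(difficulty)            # easiest images first
    batches = np.array_split(order, k)
    pool, pool_sizes = [], []
    for batch in batches:
        pool.extend(batch.tolist())           # add the next-hardest batch
        for _ in range(iters_per_batch):      # several MIL iterations per batch
            if train_mil_round is not None:
                train_mil_round([images[i] for i in pool])
            pool_sizes.append(len(pool))
    return pool_sizes
```

With 9 images and the defaults, the training pool grows as 3, 3, 3, 6, 6, 6, 9, 9, 9 across the nine iterations, matching the table below.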

| Model            | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 | Iteration 5 | Iteration 6 | Iteration 7 | Iteration 8 | Iteration 9 |
| ---------------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Standard MIL     | 26.5%       | 29.9%       | 31.8%       | 32.7%       | 33.3%       | 33.6%       | 33.9%       | 34.3%       | 34.4%       |
| Easy-to-Hard MIL | 31.1%       | 36.1%       | 36.8%       | 38.9%       | 40.1%       | 40.8%       | 42.1%       | 42.4%       | 42.8%       |

Table 5. CorLoc results for standard MIL versus Easy-to-Hard MIL.

Q: I do not fully understand why this improves performance by +8.4% over the traditional method.

4.2 Semi-supervised object classification

The "semi-supervised" setting here means only image-level labels (whether a certain class is present) are available, with no location information.

Experimental pipeline: as before, CNN features are extracted and a linear SVM is used for classification. The dataset is split into three parts: a labeled training set, an unlabeled training set, and an unlabeled test set. Heuristics are used to gradually move samples from the unlabeled training set into the labeled training set.

Selection heuristics:

(i) RAND: pick samples at random.

(ii) GTdifficulty: pick the k samples with the lowest ground-truth difficulty scores.

(iii) PRdifficulty: pick the k samples with the lowest model-predicted difficulty scores.

(iv) HIconfidence: pick the k samples farthest from the SVM hyperplane.

(v) LOconfidence: pick the k samples closest to the SVM hyperplane.

(vi) LOconfidence+PRdifficulty: from the K samples closest to the hyperplane, pick the k with the lowest predicted difficulty scores.
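Heuristics (iv)-(vi) can be sketched with NumPy. Here `decision` is assumed to hold signed SVM decision values and `difficulty` the predicted difficulty scores; both names and the 2k pool size in (vi) are illustrative:

```python
import numpy as np

def hi_confidence(decision, k):
    """(iv) the k samples farthest from the hyperplane."""
    return np.argsort(-np.abs(decision))[:k]

def lo_confidence(decision, k):
    """(v) the k samples closest to the hyperplane."""
    return np.argsort(np.abs(decision))[:k]

def lo_confidence_easy(decision, difficulty, k, pool_size=None):
    """(vi) among the pool_size samples closest to the hyperplane, the k easiest."""
    pool = lo_confidence(decision, pool_size or 2 * k)
    return pool[np.argsort(difficulty[pool])[:k]]
```

The intuition behind (vi): low-confidence samples are the most informative for the classifier, and restricting them to the easy ones keeps their pseudo-labels more likely to be correct.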

Results:

  1. The random selection heuristic can even cause performance to drop.
  2. Heuristics based on the search difficulty score improve the classifier's accuracy.

  1. J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
  2. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In Proceedings of BMVC, 2014.
  3. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In Proceedings of BMVC, 2014.
  4. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  5. J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
  6. J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
  7. J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.