Chamfer Matching and Distance Transform

Project summary

This project uses chamfer matching to locate a person's head in a Kinect depth image.

The main steps are:

(1) Estimate the size of the person's head in the image as a circle, and create a circular template of that size.

(2) Run the Canny edge detection algorithm to generate edge images.

(3) Use a distance transform (fast sweeping method) to obtain the distance image.

(4) Use a modified version of chamfer matching, which is resistant to clutter, to locate the head.

 

Algorithm description

Step 1, build template

The radius of the head is estimated to be 27 pixels, so a template whose size is slightly larger than the diameter is used.

The following code creates the template, in which only pixels near the circle boundary are 1 (x and y are coordinate grids centered on the template, and cw controls the boundary thickness):

template = double(abs(x.^2 + y.^2 - radius^2) < cw);
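For reference, the same ring template can be sketched in NumPy. The grid half-size tplt_sz and band width cw below are assumed values for illustration; the report only fixes the radius at 27:

```python
import numpy as np

radius = 27      # head radius from the report
cw = 27          # assumed band width controlling boundary thickness
tplt_sz = 30     # assumed template half-size, slightly larger than the radius

# Coordinate grids centered on the template, as in the MATLAB snippet.
y, x = np.mgrid[-tplt_sz:tplt_sz + 1, -tplt_sz:tplt_sz + 1]

# 1 on a thin ring where x^2 + y^2 is within cw of radius^2, else 0.
template = (np.abs(x**2 + y**2 - radius**2) < cw).astype(float)
```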

Fig.1. Circle template.

Step 2, edge detection

A Canny edge detector with threshold 0.02 and sigma 1.4 is used:

[edge_map, thresh] = edge(I, 'canny', 0.02, 1.4);


Fig. 2. Edge images for ‘Depth1’ and ‘Depth2’.

Step 3, distance transform

The fast sweeping method is used to compute the distance image. Details about this method can be found in the attached reference article [1]. The core idea is that the unsigned distance field u is the solution of the eikonal equation

|∇u(x)| = 1,  with u = 0 on the edge set.
The fast sweeping method sweeps over the grid 4 times, in different orderings (down-left, down-right, up-left, up-right), to complete one computation of the distance field.
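The sweeping scheme described above can be sketched in NumPy as follows. This is a minimal illustration, not the report's MATLAB code: the function name, boundary handling, and sweep count are my own choices.

```python
import numpy as np

def fast_sweep_distance(edge_map, n_sweeps=3):
    """Unsigned distance field to the pixels where edge_map > 0.

    Solves |grad u| = 1 with u = 0 on edge pixels (grid spacing 1) by
    Gauss-Seidel sweeps in four orderings, following the fast sweeping
    method cited in the text.
    """
    big = 1e10
    u = np.where(edge_map > 0, 0.0, big)
    m, n = u.shape
    orders = [(range(m), range(n)),                           # down-right
              (range(m), range(n - 1, -1, -1)),               # down-left
              (range(m - 1, -1, -1), range(n)),               # up-right
              (range(m - 1, -1, -1), range(n - 1, -1, -1))]   # up-left
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(u[i - 1, j] if i > 0 else big,
                            u[i + 1, j] if i < m - 1 else big)
                    b = min(u[i, j - 1] if j > 0 else big,
                            u[i, j + 1] if j < n - 1 else big)
                    # 1-D update when the two directions differ a lot,
                    # otherwise solve (u-a)^2 + (u-b)^2 = 1 for u.
                    if abs(a - b) >= 1.0:
                        cand = min(a, b) + 1.0
                    else:
                        cand = (a + b + np.sqrt(2.0 - (a - b) ** 2)) / 2.0
                    if cand < u[i, j]:
                        u[i, j] = cand
    return u
```

Each candidate value can only lower u, so the sweeps converge monotonically; away from the axes the finite-difference solution slightly overestimates the true Euclidean distance, which is harmless for matching.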

Speed issue:

When finding the minimum of two quantities, the MATLAB function min is very slow, so it is replaced by an explicit comparison. For example:

uxmin = min([u(i-1,j), u(i+1,j)]);

is replaced by:

if u(i-1,j) < u(i+1,j)
    uxmin = u(i-1,j);
else
    uxmin = u(i+1,j);
end

This reduced the execution time from 9 seconds to 0.3 seconds.

Fig. 3. Distance images of ‘Depth1’ and ‘Depth2’. The lower-left corner of ‘Depth2’ contains some background clutter.

 

Step 4, chamfer matching

Normal chamfer matching:

Chamfer matching: find the placement of the template in an image that minimizes the sum M of the distance transform multiplied by the pixel values of the template [2].

This can be implemented by finding the minimum of the convolution of the template with the distance image:

C = conv2(dt_euclidean, template, 'valid');
[ColumnMin, Y] = min(C);
[Gmin, X] = min(ColumnMin);
min_x = X
min_y = Y(X)

Here ‘valid’ mode is used, so there is no zero-padding and the invalid boundary region is discarded.
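For readers without MATLAB, the convolution-and-argmin step can be sketched in NumPy. To keep the sketch dependency-free, the 'valid'-mode operation is written as an explicit window sum; for the symmetric circular template, convolution and correlation coincide:

```python
import numpy as np

def chamfer_cost(dt, template):
    """'valid'-mode correlation of a distance image with a template.

    Each entry of C is the sum of distance values under one placement
    of the template; the best placement minimizes C. For a symmetric
    template this equals conv2(dt, template, 'valid').
    """
    th, tw = template.shape
    H, W = dt.shape
    C = np.empty((H - th + 1, W - tw + 1))
    for i in range(C.shape[0]):
        for j in range(C.shape[1]):
            C[i, j] = np.sum(dt[i:i + th, j:j + tw] * template)
    return C

# The argmin of C gives the (row, col) of the best placement:
# r, c = np.unravel_index(np.argmin(C), C.shape)
```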

Fig. 4. Normal chamfer matching. It works on ‘Depth1’, but on ‘Depth2’ a false positive is obtained in the background clutter, where the distance field is small throughout the region.

 

Modified chamfer matching:

To get rid of the false positives in background clutter, I noticed a difference between a true match position and a false one: at a true match, the region inside the template boundary has large distance values, while in background clutter the distances are small everywhere, as shown in Fig. 5.

Fig. 5. Illustration of modified chamfer matching.

 

So I use another, solid circle template, and convolve it with the distance image as well:

% solid circle template for background clutter removal
template2 = double(x.^2 + y.^2 < r2);

C2 = conv2(dt_euclidean, template2, 'valid');

In the resulting error threshold image C2, shown in Fig. 6, the background clutter has very small values, so a threshold is used to rule out those regions. In other words, a true match position should have a C2 value larger than Cthres:

Cthres = (2 * tplt_sz + 1)^2 * 3;

Here Cthres corresponds to an average distance value of 3 per template pixel.

Fig. 6. Error threshold image.

Then, in C, all these false-positive regions are set to the maximum value, as shown in Fig. 7.

C(C2<Cthres) = max(max(C));
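A toy numerical illustration of this rejection step, using made-up 2x2 score maps rather than values from the actual images:

```python
import numpy as np

tplt_sz = 1                            # toy template "radius" (the report uses ~27)
Cthres = (2 * tplt_sz + 1) ** 2 * 3    # average distance of 3 per template pixel

# Made-up score maps: C from the ring template, C2 from the solid one.
C = np.array([[5.0, 1.0],
              [9.0, 7.0]])             # naive argmin would pick (0, 1)
C2 = np.array([[40.0, 10.0],
               [50.0, 45.0]])          # (0, 1) sits in clutter: C2 < Cthres

C = np.where(C2 < Cthres, C.max(), C)  # suppress cluttered placements
best = np.unravel_index(np.argmin(C), C.shape)
```

The naive minimum of C (score 1.0) has a solid-template response of only 10.0, below Cthres = 27, so it is suppressed and the placement with score 5.0 wins.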

Then the minimum of this modified C will give the real match position:

Fig. 7. Modified chamfer matching. It works on both images.

Result

The final match result is shown in Fig. 8.

Fig. 8. Final result of the modified chamfer matching. It works on both images.

Matlab source code description

ChamferMatch.m                   main function for chamfer matching

dist_FC.m                        fast sweeping method for the distance transform

 

To execute the main function, call:

ChamferMatch('Depth1.png')

ChamferMatch('Depth2.png')

Reference

1.     Zhao, Hongkai. "A fast sweeping method for eikonal equations." Mathematics of computation 74.250 (2005): 603-627.

2.     Tony X. Han, “ImageProc_16_Matching.ppt” from http://web.missouri.edu/~hantx/img_proc/

Attachment: ChamferMatching.7z (1394k), Miao Zhang, Dec 6, 2013, v.1