MATLAB Vision Toolbox Study Notes - Feature Detection, Extraction and Matching - Matching

This post covers the feature matching function.

Functions

Function name: matchFeatures

Purpose: match two sets of feature descriptors

Syntax:
    indexPairs = matchFeatures(features1,features2);
    [indexPairs,matchmetric] = matchFeatures(features1,features2);
    [indexPairs,matchmetric] = matchFeatures(features1,features2,Name,Value);

Here, features1 and features2 can be binary feature descriptor objects (binaryFeatures objects) or matrices; indexPairs is a P-by-2 matrix of index pairs, one row per matched pair of features; matchmetric holds the metric value between each pair of matched descriptors; Name is a string enclosed in single quotes, and Value is the value corresponding to that Name.

Name-Value parameters

'Method' - Matching method used; default 'NearestNeighborRatio'. See the "Method values and their meanings" table below.
'MatchThreshold' - Percent value in the range (0, 100]; default 10.0 for binary feature vectors and 1.0 for non-binary feature vectors. It selects the strongest matches as a percentage; a larger value returns more match pairs.
'Metric' - Only used for non-binary feature vectors. Metric for feature matching; the default is 'SSD' (sum of squared differences), and it can also be 'SAD' (sum of absolute differences) or 'normxcorr' (normalized cross-correlation).
Note: for binary feature vectors, the Hamming distance is used as the metric.
'Prenormalized' - Only used for non-binary feature vectors; default false. If set to true, the features are treated as already normalized before matching (if the inputs are not actually normalized, the match results may be wrong); if set to false, the function normalizes the features itself before matching.
'MaxRatio' - Default 0.6, range (0, 1]. Used together with 'Method' set to 'NearestNeighborRatio' to reject ambiguous matches; see the usage sketch after this table.
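
Below is a minimal usage sketch of calling matchFeatures with some of the Name-Value options listed above. The descriptor matrices f1 and f2 are made-up placeholders, and the option values are chosen only for illustration.

% Hypothetical non-binary descriptor matrices (one descriptor per row).
f1 = rand(100, 64, 'single');
f2 = rand(120, 64, 'single');

% Match with explicit Name-Value options: the ratio test with a tighter
% ratio, a 1.0 percent match threshold, and the SAD metric.
indexPairs = matchFeatures(f1, f2, ...
    'Method', 'NearestNeighborRatio', ...
    'MaxRatio', 0.5, ...
    'MatchThreshold', 1.0, ...
    'Metric', 'SAD');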

Method values and their meanings

'Threshold' - Uses the match threshold only; a single feature may end up with more than one matched feature.
'NearestNeighborSymmetric' - Combines the match threshold with a symmetric nearest-neighbor check to produce one-to-one matches.
'NearestNeighborRatio' - Combines the match threshold with a ratio test to reject ambiguous matches (a match is ambiguous when the best matching feature is not clearly better than the second-best one). The ratio test works as follows:
1. Compute the nearest distance D from a feature in features1 to the features in features2.
2. Compute the second-nearest distance d from that same feature in features1 to the features in features2.
3. If the ratio D/d is greater than MaxRatio, the match is considered ambiguous and is rejected.
Note: this method produces more robust matches; however, if the image contains repeating patterns, it may also discard valid matches while eliminating the ambiguity. A minimal sketch of the ratio test follows this table.
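
As a rough illustration of the ratio test described above (not the toolbox's internal implementation), the following sketch assumes f1 and f2 are matrices of non-binary descriptors, one descriptor per row, and compares the nearest and second-nearest SSD distances against MaxRatio.

maxRatio = 0.6;                          % same value as the 'MaxRatio' default
isAccepted = false(size(f1,1), 1);
for i = 1:size(f1,1)
    % SSD distances from descriptor i of f1 to every descriptor of f2.
    d = sum(bsxfun(@minus, f2, f1(i,:)).^2, 2);
    sortedD = sort(d);
    % Keep the match only when the nearest distance is clearly smaller
    % than the second-nearest one, i.e. the ratio does not exceed MaxRatio.
    isAccepted(i) = sortedD(1) / sortedD(2) <= maxRatio;
end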

Example:


close all;
clear all;
clc;

% Read the stereo image pair and convert to grayscale.
I1 = rgb2gray(imread('viprectification_deskLeft.png'));
I2 = rgb2gray(imread('viprectification_deskRight.png'));

% Detect Harris corners in both images.
points1 = detectHarrisFeatures(I1);
points2 = detectHarrisFeatures(I2);

% Extract descriptors around the valid corner locations.
[features1,valid_points1] = extractFeatures(I1,points1);
[features2,valid_points2] = extractFeatures(I2,points2);

% Match the descriptors with the default settings.
indexPairs = matchFeatures(features1,features2);

% Retrieve the locations of the matched points.
matchedPoints1 = valid_points1(indexPairs(:,1));
matchedPoints2 = valid_points2(indexPairs(:,2));

% Visualize the matched point pairs.
figure;
showMatchedFeatures(I1,I2,matchedPoints1,matchedPoints2);
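
For completeness, below is a minimal sketch of matching binary descriptors (binaryFeatures), which are compared with the Hamming distance as noted in the table above. It assumes the BRISK detector from the Computer Vision Toolbox is available; the detector choice and the threshold value are only illustrative.

I1 = rgb2gray(imread('viprectification_deskLeft.png'));
I2 = rgb2gray(imread('viprectification_deskRight.png'));

% BRISK points make extractFeatures return binaryFeatures objects.
points1 = detectBRISKFeatures(I1);
points2 = detectBRISKFeatures(I2);

[features1,valid_points1] = extractFeatures(I1,points1);
[features2,valid_points2] = extractFeatures(I2,points2);

% For binary features the default 'MatchThreshold' is 10.0 (a percentage).
indexPairs = matchFeatures(features1,features2,'MatchThreshold',10.0);

matchedPoints1 = valid_points1(indexPairs(:,1));
matchedPoints2 = valid_points2(indexPairs(:,2));

figure;
showMatchedFeatures(I1,I2,matchedPoints1,matchedPoints2);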

