SIFT-Based Local Feature Matching for Images (Matlab Implementation)

💥💥💞💞Welcome to this blog❤️❤️💥💥

🏆Author's strengths: 🌞🌞🌞The content of this blog strives to be rigorous in thought and clear in logic, for the reader's convenience.

⛳️Motto: On a journey of a hundred miles, ninety is only the halfway point.

📋📋📋The table of contents of this article is as follows: 🎁🎁🎁

Contents

💥1 Overview

📚2 Running Results

🎉3 References

🌈4 Matlab Implementation


💥1 Overview

To improve the accuracy of feature point matching, a method has been proposed that combines improved hybrid filtering, feature descriptor dimensionality reduction, SIFT feature matching, RANSAC rejection of mismatched points, and the PSO algorithm. First, the scene image is filtered to suppress noise; then the feature descriptors are reduced in dimension to cut the computational cost; next, RANSAC removes mismatches from the SIFT-based feature point matches; finally, the PSO algorithm searches for the optimal Ratio value. Comparative experiments with four algorithms, run on images of robotic-manipulator scenes under four conditions (blurred, dark, bright, and occluded), show that the SIFT feature matching algorithm achieves the lowest mismatch rate and the highest accuracy.

In this article, we implement a Harris corner detector to obtain interest points corresponding to corner pixels; it is designed to detect corners at multiple scales of the image. We implement the SIFT algorithm to compute local feature descriptors for the corners found in the previous step: each corner is described by a histogram of gradients (HoG) over its surrounding image patch. Feature matching uses nearest-distance matching, with the KNN search implemented via a k-d tree.
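To make the Harris step concrete, below is a minimal sketch of corner detection via the structure tensor: gradients are computed with derivative-of-Gaussian filters, the cornerness R = det(M) - k*trace(M)^2 is scored at every pixel, and local maxima above a threshold are kept. The function name harris_corners_sketch, the filter sizes, and the threshold are illustrative assumptions, not the project's get_interest_points; it assumes the Image Processing Toolbox (fspecial, imfilter, imregionalmax).

% Minimal Harris corner sketch (illustrative, not the project code)
function [x, y] = harris_corners_sketch(img, threshold)
% img       - grayscale image with values in [0, 1]
% threshold - cornerness cutoff, e.g. 1e-4 (needs tuning per image)

% Gradients via derivative-of-Gaussian filters
g        = fspecial('gaussian', 5, 1);
[gx, gy] = gradient(g);
Ix = imfilter(img, gx, 'replicate');
Iy = imfilter(img, gy, 'replicate');

% Structure tensor entries, accumulated over a Gaussian window
w   = fspecial('gaussian', 9, 2);
Ixx = imfilter(Ix.^2,  w, 'replicate');
Iyy = imfilter(Iy.^2,  w, 'replicate');
Ixy = imfilter(Ix.*Iy, w, 'replicate');

% Harris cornerness R = det(M) - k * trace(M)^2, with the usual k = 0.04
k = 0.04;
R = (Ixx.*Iyy - Ixy.^2) - k*(Ixx + Iyy).^2;

% Threshold, then keep only local maxima (simple non-max suppression)
R(R < threshold) = 0;
[y, x] = find(imregionalmax(R) & R > 0);   % rows are y, columns are x
end

For instance, [x1, y1] = harris_corners_sketch(image1_bw, 1e-4) would return candidate corner coordinates for the first image in the script below.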

📚2 Running Results

Partial code:

% Local Feature Stencil Code
% CS 4476 / 6476: Computer Vision, Georgia Tech
% Written by James Hays

% This script 
% (1) Loads and resizes images
% (2) Finds interest points in those images                 (you code this)
% (3) Describes each interest point with a local feature    (you code this)
% (4) Finds matching features                               (you code this)
% (5) Visualizes the matches
% (6) Evaluates the matches based on ground truth correspondences
tic
close all

%% 1) Load stuff
% There are numerous other image sets in the supplementary data on the
% project web page. You can simply download images off the Internet, as
% well. However, the evaluation function at the bottom of this script will
% only work for three particular image pairs (unless you add ground truth
% annotations for other image pairs). It is suggested that you only work
% with the two Notre Dame images until you are satisfied with your
% implementation and ready to test on additional images. A single scale
% pipeline works fine for these two images (and will give you full credit
% for this project), but you will need local features at multiple scales to
% handle harder cases.


  image1 = imread('../data/Notre Dame/921919841_a30df938f2_o.jpg');
  image2 = imread('../data/Notre Dame/4191453057_c86028ce1f_o.jpg');
  eval_file = '../data/Notre Dame/921919841_a30df938f2_o_to_4191453057_c86028ce1f_o.mat';

 %image1 = imread('../data/agra_fort.jpg');
 %image2 = imresize(image1,0.5);

 
% %This pair is relatively easy (still harder than Notre Dame, though)
% image1 = imread('../data/Mount Rushmore/9021235130_7c2acd9554_o.jpg');
% image2 = imread('../data/Mount Rushmore/9318872612_a255c874fb_o.jpg');
% eval_file = '../data/Mount Rushmore/9021235130_7c2acd9554_o_to_9318872612_a255c874fb_o.mat';

%This pair is relatively difficult
% image1 = imread('../data/Episcopal Gaudi/4386465943_8cf9776378_o.jpg');
% image2 = imread('../data/Episcopal Gaudi/3743214471_1b5bbfda98_o.jpg');
% eval_file = '../data/Episcopal Gaudi/4386465943_8cf9776378_o_to_3743214471_1b5bbfda98_o.mat';

image1 = single(image1)/255;
image2 = single(image2)/255;

%make images smaller to speed up the algorithm. This parameter gets passed
%into the evaluation code so don't resize the images except by changing
%this parameter.
scale_factor = 0.5; 
image1 = imresize(image1, scale_factor, 'bilinear');
image2 = imresize(image2, scale_factor, 'bilinear');

% You don't have to work with grayscale images. Matching with color
% information might be helpful.
image1_bw = rgb2gray(image1);
image2_bw = rgb2gray(image2);

feature_width = 16; %width and height of each local feature, in pixels. 
toc 

%% 2) Find distinctive points in each image. Szeliski 4.1.1
% !!! You will need to implement get_interest_points. !!!

% Harris Corner Detection
% [x1, y1] = get_interest_points(image1_bw, feature_width, 1);
% [x2, y2] = get_interest_points(image2_bw, feature_width, 1);

% % Adaptive non-maximal detection
% [x1, y1] = get_interest_points_anms(image1_bw, feature_width);
% toc
% [x2, y2] = get_interest_points_anms(image2_bw, feature_width);


% Scaling with Harris corner detection
[x1, y1, confidence1, scale1] = get_interest_points_scaling(image1_bw, feature_width);
[x2, y2, confidence2, scale2] = get_interest_points_scaling(image2_bw, feature_width);


toc

%show_correspondence(image1, image2, x1, y1, x2, y2);

% % Use cheat_interest_points only for development and debugging!
% [x1, y1, x2, y2] = cheat_interest_points(eval_file, scale_factor);


%show_correspondence(image1, image2, x1, y1, x2, y2);

%% 3) Create feature vectors at each interest point. Szeliski 4.1.2
% !!! You will need to implement get_features. !!!
% (A hedged sketch of one possible HoG-style descriptor appears after this script.)

% SIFT
% [image1_features] = get_features(image1_bw, x1, y1, feature_width);
% [image2_features] = get_features(image2_bw, x2, y2, feature_width);

% SIFT with scaling
[image1_features] = get_features_scaling(image1_bw, x1, y1, feature_width, scale1);
[image2_features] = get_features_scaling(image2_bw, x2, y2, feature_width, scale2);

toc


%% 4) Match features. Szeliski 4.1.3
% !!! You will need to implement match_features. !!!
% (A hedged sketch of ratio-test matching with a k-d tree appears after this script.)

% % Exhaustive search
 [matches, confidences] = match_features(image1_features, image2_features);

% % Knnsearch with kdTree
%[matches, confidences] = match_features_knnsearch(image1_features, image2_features);

%matches = matchFeatures(feat1, feat2);
toc

%% 5) Visualization
% You might want to set 'num_pts_to_visualize' and 'num_pts_to_evaluate' to
% some constant (e.g. 100) once you start detecting hundreds of interest
% points, otherwise things might get too cluttered. You could also
% threshold based on confidence.

% There are two visualization functions. You can comment out one or both of
% them if you prefer.
num_pts_to_visualize = size(matches,1);
show_correspondence(image1, image2, x1(matches(1:num_pts_to_visualize,1)), ...
                                    y1(matches(1:num_pts_to_visualize,1)), ...
                                    x2(matches(1:num_pts_to_visualize,2)), ...
                                    y2(matches(1:num_pts_to_visualize,2)));
                                 
show_correspondence2(image1, image2, x1(matches(1:num_pts_to_visualize,1)), ...
                                     y1(matches(1:num_pts_to_visualize,1)), ...
                                     x2(matches(1:num_pts_to_visualize,2)), ...
                                     y2(matches(1:num_pts_to_visualize,2)));

%% 6) Evaluation
% This evaluation function will only work for the Notre Dame, Episcopal
% Gaudi, and Mount Rushmore image pairs. Comment out this function if you
% are not testing on those image pairs. Only those pairs have ground truth
% available. You can use collect_ground_truth_corr.m to build the ground
% truth for other image pairs if you want, but it's very tedious. It would
% be a great service to the class for future years, though!
num_pts_to_evaluate = size(matches,1);
good_matches = evaluate_correspondence(image1, image2, eval_file, scale_factor, ... 
                        x1(matches(1:num_pts_to_evaluate,1)), ...
                        y1(matches(1:num_pts_to_evaluate,1)), ...
                        x2(matches(1:num_pts_to_evaluate,2)), ...
                        y2(matches(1:num_pts_to_evaluate,2)));
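The helper functions the script calls (get_interest_points*, get_features*, match_features*) belong to the project code and are not reproduced above. For orientation, here are two hedged sketches of what such helpers can look like; the function names, signatures, and parameter choices below are illustrative assumptions, not the stencil implementations.

First, a SIFT-like descriptor in the spirit of get_features: each interest point is described by a 4x4 grid of 8-bin gradient-orientation histograms over its feature_width x feature_width patch, matching the HoG description in the overview. It assumes the Image Processing Toolbox (imgradient) and that feature_width is divisible by 4.

% Minimal SIFT-like descriptor sketch (illustrative, not the stencil code)
function features = sift_descriptor_sketch(img, x, y, feature_width)
[gmag, gdir] = imgradient(img);                   % gdir in [-180, 180] degrees
bins = min(max(ceil((gdir + 180) / 45), 1), 8);   % quantize into 8 orientation bins
half = feature_width / 2;
cw   = feature_width / 4;                         % cell width of the 4x4 grid
features = zeros(numel(x), 128);                  % 4*4*8 = 128 dimensions
for i = 1:numel(x)
    r0 = round(y(i)) - half + 1;                  % top-left corner of the patch
    c0 = round(x(i)) - half + 1;
    if r0 < 1 || c0 < 1 || r0 + feature_width - 1 > size(img, 1) ...
              || c0 + feature_width - 1 > size(img, 2)
        continue;                                 % patch falls outside the image
    end
    h = zeros(4, 4, 8);                           % per-cell orientation histograms
    for rr = 0:feature_width - 1
        for cc = 0:feature_width - 1
            cr = floor(rr / cw) + 1;              % cell row
            cl = floor(cc / cw) + 1;              % cell column
            b  = bins(r0 + rr, c0 + cc);
            h(cr, cl, b) = h(cr, cl, b) + gmag(r0 + rr, c0 + cc);
        end
    end
    v = h(:)';
    features(i, :) = v / max(norm(v), eps);       % unit-normalize the descriptor
end
end

Second, nearest-neighbor matching with Lowe's ratio test in the spirit of match_features_knnsearch, using a k-d tree via KDTreeSearcher/knnsearch from the Statistics and Machine Learning Toolbox. The ratio argument corresponds to the "Ratio value" that the PSO step in the overview searches over; 0.8 is a common default.

% Minimal ratio-test matcher sketch (illustrative, not the stencil code)
function [matches, confidences] = match_features_sketch(feat1, feat2, ratio)
% feat1, feat2 - N1 x D and N2 x D descriptor matrices, one row per point
% ratio        - ratio-test threshold, e.g. 0.8

% Two nearest neighbors in feat2 for every descriptor in feat1
tree = KDTreeSearcher(feat2);
[idx, dist] = knnsearch(tree, feat1, 'K', 2);

% Lowe's ratio test: keep a match only when the best neighbor is clearly
% closer than the second best
keep = dist(:, 1) < ratio * dist(:, 2);

matches     = [find(keep), idx(keep, 1)];          % [index in feat1, index in feat2]
confidences = 1 - dist(keep, 1) ./ dist(keep, 2);  % higher = more distinctive

% Most confident matches first
[confidences, order] = sort(confidences, 'descend');
matches = matches(order, :);
end

A call such as [matches, confidences] = match_features_sketch(image1_features, image2_features, 0.8) would then feed directly into the visualization and evaluation steps of the script.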

🎉3 References

Some of the theory in this article comes from online sources; if there is any infringement, please contact us for removal.

[1] 林曦蕾. 图像局部特征匹配算法发展综述[J]. 现代计算机(专业版), 2019(09): 89-93.

[2] 徐澳, 华云松, 夏春蕾, 陈诗雨. 一种基于SIFT的改进特征点匹配算法[J]. 软件, 2022, 43(09): 83-86+119.

🌈4 Matlab Implementation
