Some casual notes on feature matching

http://www.nlpr.ia.ac.cn/fanbin/

https://dblp.uni-trier.de/pid/60/105.html

Feature matching evaluation:

http://www.nlpr.ia.ac.cn/fanbin/code/code_feature_evaluation.rar

https://www.slideserve.com/zola/local-invariant-feature-descriptors

Book:

Local Image Descriptor: Modern Approaches

 

Video: CVPR 2017

Tutorial: Local Feature Extraction and Learning for Computer Vision

 

Conference page: CVPR 2017

http://www.nlpr.ia.ac.cn/fanbin/CVPR17Tutorial_LocalFeature.htm

 

Introduction and Classical Methods by Pascal Fua. [Part I]

http://www.nlpr.ia.ac.cn/fanbin/slides/CVPR17%20Tutorial%20on%20Local%20Feature-Part%20I.pdf

Modern Descriptors for High Matching Performance by Bin Fan. [Part II]

http://www.nlpr.ia.ac.cn/fanbin/slides/CVPR17%20Tutorial%20on%20Local%20Feature-Part%20II.pdf

Learning High Efficient Binary Features and Its Applications by Jiwen Lu. [Part III]

http://www.nlpr.ia.ac.cn/fanbin/slides/CVPR17%20Tutorial%20on%20Local%20Feature-Part%20III.pdf

 

2017 Chinese-language slides (CCF-CV talk at Xiangtan University):

http://www.nlpr.ia.ac.cn/fanbin/[CCF-CV@Xiangtan%20University]BinFan.pdf

 

2014 slides (VALSE 2014):

http://www.nlpr.ia.ac.cn/fanbin/Local%20Feature%20Descriptors_VALSE14.pptx

There should also be a video recording, from VALSE:

https://space.bilibili.com/562085182/video?tid=0&page=1&keyword=&order=pubdate

https://www.iqiyi.com/u/2289191062/videos

Slides for each VALSE webinar session:

http://valser.org/webinar/slide/

 

[CAA May Fourth Youth Academic Forum] Deep-learning-based intelligent analysis of 3D point clouds, a lecture by Prof. Bin Fan (School of Automation, University of Science and Technology Beijing)

https://www.bilibili.com/video/BV1AC4y1p7Xh?from=search&seid=15344037378330539731

 

 

 

Local Feature Descriptor for Image Matching: A Survey

 

Local feature descriptors


Gradient-based methods (a toy orientation-histogram sketch follows this list)
1) D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
2) K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, Oct. 2005.
3) Y. Ke and R. Sukthankar, "PCA-SIFT: A more distinctive representation for local image descriptors," in Proc. IEEE Comput. Vis. Pattern Recognit., vol. 2, Jun./Jul. 2004, pp. II-506–II-513.
4) H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Understand., vol. 110, no. 3, pp. 346–359, 2008.
5) D. Weng, Y. Wang, M. Gong, D. Tao, H. Wei, and D. Huang, "DERF: Distinctive efficient robust features from the biological modeling of the P ganglion cells," IEEE Trans. Image Process., vol. 24, no. 8, pp. 2287–2302, Aug. 2015.
6) Y. Pang, W. Li, Y. Yuan, and J. Pan, "Fully affine invariant SURF for image matching," Neurocomputing, vol. 85, pp. 6–10, May 2012.
7) M. Lourenco, J. P. Barreto, and F. Vasconcelos, "sRD-SIFT: Keypoint detection and matching in images with radial distortion," IEEE Trans. Robot., vol. 28, no. 3, pp. 752–760, Jun. 2012.
8) S. Saleem and R. Sablatnig, "A robust SIFT descriptor for multispectral images," IEEE Signal Process. Lett., vol. 21, no. 4, pp. 400–403, Apr. 2014.
9) C.-C. Chen and S.-L. Hsieh, "Using binarization and hashing for efficient SIFT matching," J. Vis. Commun. Image Represent., vol. 30, pp. 86–93, Jul. 2015.
10) Q. Li, G. Wang, J. Liu, and S. Chen, "Robust scale-invariant feature matching for remote sensing image registration," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 2, pp. 287–291, Apr. 2009.
11) E. Tola, V. Lepetit, and P. Fua, "DAISY: An efficient dense descriptor applied to wide-baseline stereo," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 5, pp. 815–830, May 2010.
12) Q. Shi, G. Ma, F. Zhang, W. Chen, Q. Qin, and H. Duo, "Robust image registration using structure features," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 12, pp. 2045–2049, Dec. 2014.
13) C. Cui and K. N. Ngan, "Scale- and affine-invariant fan feature," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1627–1640, Jun. 2011.
14) X. Su, W. Lin, X. Zheng, X. Han, H. Chu, and X. Zhang, "A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification," in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2013, pp. 2892–2895.
15) K. Huang et al., "Improved human head and shoulder detection with local main gradient and tracklets-based feature," in Proc. Asia-Pacific Signal Inf. Process. Assoc. Annu. Summit Conf. (APSIPA), Dec. 2014, pp. 1–4.
16) J. Baber, M. N. Dailey, S. Satoh, N. Afzulpurkar, and M. Bakhtyar, "BIG-OH: BInarization of gradient orientation histograms," Image Vis. Comput., vol. 32, no. 11, pp. 940–953, Nov. 2014.
17) D. Huang, C. Zhu, Y. Wang, and L. Chen, "HSOG: A novel local image descriptor based on histograms of the second-order gradients," IEEE Trans. Image Process., vol. 23, no. 11, pp. 4680–4695, Nov. 2014.
18) L. Xie, J. Wang, W. Lin, B. Zhang, and Q. Tian, "RIDE: Reversal invariant descriptor enhancement," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 100–108.
19) L. Xie, J. Wang, W. Lin, B. Zhang, and Q. Tian, "Towards reversal-invariant image representation," Int. J. Comput. Vis., vol. 123, no. 2, pp. 226–250, 2017.
20) A. Sedaghat and H. Ebadi, "Remote sensing image matching based on adaptive binning SIFT descriptor," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 10, pp. 5283–5293, Oct. 2015.
21) F. Dellinger, J. Delon, Y. Gousseau, J. Michel, and F. Tupin, "SAR-SIFT: A SIFT-like algorithm for SAR images," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 1, pp. 453–466, Jan. 2015.
22) Y. Xiang, F. Wang, and H. You, "OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 6, pp. 3078–3090, Jun. 2018.
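
The common thread in the gradient-based family above (SIFT, SURF, DAISY, HOG variants) is pooling gradient orientations, weighted by gradient magnitude, into histograms. A toy NumPy sketch of just that core step; patch size and bin count are made up, and SIFT's spatial cells, Gaussian weighting, and interpolation are omitted:

```python
# Toy sketch only: pools gradient orientations of a grayscale patch into a
# magnitude-weighted histogram, the basic building block of SIFT/HOG-style
# descriptors.
import numpy as np

def orientation_histogram(patch, num_bins=8):
    patch = patch.astype(np.float32)
    gy, gx = np.gradient(patch)                 # image gradients (central differences)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                  # in [-pi, pi]
    bins = ((angle + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    hist = np.zeros(num_bins, dtype=np.float32)
    np.add.at(hist, bins.ravel(), magnitude.ravel())   # magnitude-weighted voting
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(orientation_histogram(rng.random((16, 16))))
```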


Intensity-based methods (a minimal LBP sketch follows this list)
1) C. Strecha, A. M. Bronstein, M. M. Bronstein, and P. Fua, "LDAHash: Improved matching with smaller descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 1, pp. 66–78, Jan. 2012.
2) B. Fan, F. Wu, and Z. Hu, "Rotationally invariant descriptors using intensity order pooling," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 10, pp. 2031–2045, Oct. 2012.
3) J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, "A partial intensity invariant feature descriptor for multimodal retinal image registration," IEEE Trans. Biomed. Eng., vol. 57, no. 7, pp. 1707–1718, Jul. 2010.
4) B. Kim, H. Yoo, and K. Sohn, "Exact order based feature descriptor for illumination robust image matching," Pattern Recognit., vol. 46, no. 12, pp. 3268–3278, 2013.
5) T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognit., vol. 29, no. 1, pp. 51–59, 1996.
6) T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
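
References 5) and 6) above are the original LBP papers. A minimal sketch of the plain 3x3 LBP code (no circular sampling, no rotation-invariant or uniform patterns; library implementations such as skimage.feature.local_binary_pattern cover those):

```python
# Toy sketch only: 8-bit LBP codes from thresholding the 8 neighbors of each
# pixel against the center value, as in Ojala et al. Image borders are skipped.
import numpy as np

def lbp_3x3(image):
    img = image.astype(np.int32)
    center = img[1:-1, 1:-1]
    # 8 neighbors enumerated clockwise starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    hist, _ = np.histogram(lbp_3x3(image), bins=256, range=(0, 256))
    print(hist.sum())   # (8-2) * (8-2) = 36 encoded pixels
```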


Spatial frequency-based methods
1) S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 509–522, Apr. 2002.
2) Y. Li, Y. Zhang, X. Huang, H. Zhu, and J. Ma, "Large-scale remote sensing image retrieval by deep hashing neural networks," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 2, pp. 950–965, Feb. 2018.



Moment- and probability-based methods



Learning-based methods
1) T. Trzcinski, M. Christoudias, and V. Lepetit, "Learning image descriptors with boosting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 3, pp. 597–610, Mar. 2015.
2) K. Simonyan, A. Vedaldi, and A. Zisserman, "Learning local feature descriptors using convex optimisation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 8, pp. 1573–1585, Aug. 2014.
3) M. Brown, G. Hua, and S. Winder, "Discriminative learning of local image descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 1, pp. 43–57, Jan. 2011.
4) L. Shao, L. Liu, and X. Li, "Feature learning for image classification via multiobjective genetic programming," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1359–1371, Jul. 2014.
5) Z. Feng, J. Lai, and X. Xie, "Learning view-specific deep networks for person re-identification," IEEE Trans. Image Process., vol. 27, no. 7, pp. 3472–3483, Jul. 2018.
6) L. Shao, D. Wu, and X. Li, "Learning deep and wide: A spectral method for learning deep networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 12, pp. 2303–2308, Dec. 2014.
7) Y. Yuan, L. Mou, and X. Lu, "Scene recognition by manifold regularized deep learning architecture," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2222–2233, Oct. 2015.
8) L. Liu, L. Shao, X. Li, and K. Lu, "Learning spatio-temporal representations for action recognition: A genetic programming approach," IEEE Trans. Cybern., vol. 46, no. 1, pp. 158–170, Jan. 2016.
9) G. Wu, M. Kim, Q. Wang, B. C. Munsell, and D. Shen, "Scalable high-performance image registration framework by unsupervised deep feature representations learning," IEEE Trans. Biomed. Eng., vol. 63, no. 7, pp. 1505–1516, Jul. 2016.



Convolutional neural network-based methods (a toy patch-descriptor network sketch follows this list)
1) A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Adv. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
2) P. Fischer, A. Dosovitskiy, and T. Brox, "Descriptor matching with convolutional neural networks: A comparison to SIFT," 2014. [Online]. Available: https://arxiv.org/abs/1405.5769
3) Y. Gong, L. Wang, R. Guo, and S. Lazebnik, "Multi-scale orderless pooling of deep convolutional activation features," in Proc. Eur. Conf. Comput. Vis., 2014, pp. 392–407.
4) S. Zagoruyko and N. Komodakis, "Learning to compare image patches via convolutional neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 4353–4361.
5) E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer, "Discriminative learning of deep convolutional feature point descriptors," in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2015, pp. 118–126.
6) Y. Tian, B. Fan, and F. Wu, "L2-Net: Deep learning of discriminative patch descriptor in Euclidean space," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2017, pp. 6128–6136.
7) K. Nguyen, C. Fookes, A. Ross, and S. Sridharan, "Iris recognition with off-the-shelf CNN features: A deep learning perspective," IEEE Access, vol. 6, pp. 18848–18855, 2018.
8) G. Cheng, P. Zhou, and J. Han, "Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 12, pp. 7405–7415, Dec. 2016.
9) J. Y. Ma and J. Zhao, "Robust topological navigation via convolutional neural network feature and sharpness measure," IEEE Access, vol. 5, pp. 20707–20715, 2017.
10) W. Luo, J. Li, J. Yang, W. Xu, and J. Zhang, "Convolutional sparse autoencoders for image classification," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 7, pp. 3289–3294, Jul. 2018.
11) K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. Int. Conf. Learn. Represent., 2015.
12) C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 1–9.
13) K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 770–778.
14) L. Zheng, Y. Yang, and Q. Tian, "SIFT meets CNN: A decade survey of instance retrieval," IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 5, pp. 1224–1244, May 2018.
15) X. Zhang, H. Xiong, W. Zhou, W. Lin, and Q. Tian, "Picking neural activations for fine-grained recognition," IEEE Trans. Multimedia, vol. 19, no. 12, pp. 2736–2750, Dec. 2017.
16) L. Lin, G. Wang, W. Zuo, X. Feng, and L. Zhang, "Cross-domain visual matching via generalized similarity measure and feature learning," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1089–1102, Jun. 2017.
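
Most of the learned patch descriptors above (e.g. L2-Net in 6)) map a small grayscale patch to a fixed-length, L2-normalized vector and are trained with pair/triplet losses. A toy PyTorch sketch of that interface only; the layer sizes are made up and training is omitted, so this is not the published L2-Net architecture:

```python
# Toy sketch only: a small CNN that turns 32x32 grayscale patches into
# unit-length 128-D descriptors, the usual interface of learned patch
# descriptors. Architecture and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, patches):                  # patches: (N, 1, 32, 32)
        x = self.features(patches).flatten(1)    # (N, 128)
        return F.normalize(self.proj(x), dim=1)  # unit-norm descriptors

if __name__ == "__main__":
    net = PatchDescriptor()
    with torch.no_grad():
        d = net(torch.randn(4, 1, 32, 32))
    print(d.shape, d.norm(dim=1))                # torch.Size([4, 128]), norms ~1.0
```
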
Image Matching from Handcrafted to Deep Features: A Survey

Image Features Detection, Description and Matching

 

https://medium.com/data-breach/introduction-to-feature-detection-and-matching-65e27179885d

https://medium.com/data-breach/introduction-to-orb-oriented-fast-and-rotated-brief-4220e8ec40cf

Introduction to Harris Corner Detector

Introduction to SIFT (Scale-Invariant Feature Transform)

Introduction to SURF (Speeded-Up Robust Features)

Introduction to FAST (Features from Accelerated Segment Test)

Introduction to BRIEF (Binary Robust Independent Elementary Features)

Introduction to ORB (Oriented FAST and Rotated BRIEF)

 

 

 

https://www.cnblogs.com/alexme/p/11345701.html

ORB feature extraction algorithm (theory)
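
For reference alongside the ORB article above, a minimal OpenCV sketch of detecting ORB keypoints and extracting their 256-bit descriptors; the file name and nfeatures value are placeholders:

```python
# Minimal sketch, assuming an image exists at the placeholder path.
import cv2

image = cv2.imread("some_image.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)              # oriented FAST + rotated BRIEF
keypoints, descriptors = orb.detectAndCompute(image, None)
print(len(keypoints), descriptors.shape)         # each descriptor is 32 bytes = 256 bits
```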

Tutorials on binary descriptors
A short introduction to descriptors:

https://gilscvblog.com/2013/08/18/a-short-introduction-to-descriptors/#more-3

https://gilscvblog.com/2013/08/26/tutorial-on-binary-descriptors-part-1/

https://gilscvblog.com/2013/09/19/a-tutorial-on-binary-descriptors-part-2-the-brief-descriptor/

https://gilscvblog.com/2013/10/04/a-tutorial-on-binary-descriptors-part-3-the-orb-descriptor/

https://gilscvblog.com/2013/11/08/a-tutorial-on-binary-descriptors-part-4-the-brisk-descriptor/

https://gilscvblog.com/2013/12/09/a-tutorial-on-binary-descriptors-part-5-the-freak-descriptor/

Adding rotation invariance to the BRIEF descriptor

https://gilscvblog.com/2015/01/02/adding-rotation-invariance-to-the-brief-descriptor/

Performance evaluation of binary descriptors – introducing the LATCH descriptor

https://gilscvblog.com/2015/11/07/performance-evaluation-of-binary-descriptor-introducing-the-latch-descriptor/
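
All of the binary descriptors covered in the tutorial series above (BRIEF, ORB, BRISK, FREAK, LATCH) are compared with the Hamming distance: XOR the two bit strings and count the set bits. A small sketch, plus the equivalent OpenCV matcher call (des1/des2 are assumed descriptor matrices from, e.g., ORB):

```python
# Toy sketch of Hamming-distance comparison for binary descriptors stored as
# uint8 arrays (32 bytes = 256 bits for BRIEF-256/ORB).
import numpy as np

def hamming(d1, d2):
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=32, dtype=np.uint8)
    b = rng.integers(0, 256, size=32, dtype=np.uint8)
    print(hamming(a, b))        # around 128 for random 256-bit strings
    # OpenCV equivalent, given descriptor matrices des1, des2:
    #   matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    #   matches = matcher.match(des1, des2)
```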

 

 

 

https://sci-hub.tw

A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK
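
A rough sketch of how such a comparison can be reproduced with OpenCV: instantiate each detector and count keypoints on the same image. SIFT, KAZE, AKAZE, ORB, and BRISK ship with recent OpenCV releases; SURF is patented and only present in opencv-contrib builds with the nonfree option, so it is guarded here. The image path is a placeholder:

```python
# Rough sketch, assuming an image at the placeholder path and OpenCV >= 4.4.
import cv2

image = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)

detectors = {
    "SIFT": cv2.SIFT_create(),
    "KAZE": cv2.KAZE_create(),
    "AKAZE": cv2.AKAZE_create(),
    "ORB": cv2.ORB_create(),
    "BRISK": cv2.BRISK_create(),
}
try:
    detectors["SURF"] = cv2.xfeatures2d.SURF_create()
except (AttributeError, cv2.error):
    pass  # SURF unavailable in this build

for name, det in detectors.items():
    keypoints = det.detect(image, None)
    print(f"{name}: {len(keypoints)} keypoints")
```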

 

CNKI (Chinese-language surveys):

局部二值描述子的研究进展综述_边后琴.pdf (survey of progress on local binary descriptors, Bian Houqin)

局部二进制特征描述算法综述_白丰.pdf (survey of local binary feature description algorithms, Bai Feng)

 


 

 

http://www.diva-portal.org/smash/get/diva2:927480/FULLTEXT01.pdf

 

https://link.springer.com/content/pdf/10.1007/s10846-017-0762-8.pdf

Mouats2018_Article_PerformanceEvaluationOfFeature.pdf

 

 

https://medium.com/machine-learning-world/feature-extraction-and-similar-image-search-with-opencv-for-newbies-3c59796bf774

 

 
