Some details of WideBaselineFeatureMatcher

WideBaselineFeatureMatcher

[1] CODE: Coherence Based Decision Boundaries for Feature Correspondence, IEEE TPAMI, 2016, Lin et al.
[2] RepMatch: Robust Feature Matching and Pose for Reconstructing Modern Cities, ECCV, 2016, Lin et al.

At first I thought it was Gaussian mixture regression:

Feature-guided Gaussian mixture model for image matching

https://yuan-gao.net/pdf/PR2019.pdf

Feature Guided Biased Gaussian Mixture Model for image matching

https://www.researchgate.net/publication/269288307_Feature_Guided_Biased_Gaussian_Mixture_Model_for_image_matching

https://github.com/Xiaoyang-Rebecca/PatternRecognition_Matlab/blob/master/GMMClassifier.m 

An implementation of the Expectation-Maximization (EM) algorithm for clustering with a Gaussian mixture model (GMM) in Python 3:

https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/GMM
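For reference, the E-step/M-step loop that such a GMM clusterer runs can be sketched in a few lines of NumPy. This is a 1-D, two-component toy; all variable names and the initialization are my own illustration, not code from the linked repositories:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    # Crude initialization: put the two means at the data extremes.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])                      # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and weights.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
mu, var, pi = em_gmm_1d(data)
```

Each iteration first soft-assigns points to components (E-step), then refits the parameters from those soft assignments (M-step).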

http://www.escience.cn/people/jiangjunjun

https://github.com/gwang-cv/Point-Set-Matching-Registration-Material

http://web.eecs.umich.edu/~cscott/code.html

Later I found out that it isn't...

 

Maximum likelihood estimation for regression

https://medium.com/quick-code/maximum-likelihood-estimation-for-regression-65f9c99f815d
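The key point of that article: under i.i.d. Gaussian noise, maximizing the likelihood of a linear model is equivalent to minimizing the squared error. A minimal sketch (the synthetic data and variable names are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Design matrix [1, x] and a known ground-truth weight vector.
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
true_w = np.array([0.5, 2.0])
y = X @ true_w + rng.normal(0, 0.1, 100)   # Gaussian observation noise

# MLE under Gaussian noise reduces to least squares: solve the
# normal equations (X^T X) w = X^T y.
w_mle = np.linalg.solve(X.T @ X, X.T @ y)
```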

Generalized Huber Regression

https://towardsdatascience.com/generalized-huber-regression-505afaff24c

https://cran.r-project.org/web/packages/robustreg/index.html

A Gaussian process library for machine learning

https://github.com/mblum/libgp

 

http://dlib.net/ml.html

 

It is actually linear regression + Huber loss + the kind of Gaussian similarity matrix used in spectral clustering

Details below:

Huber loss

Robust learning: L1-constrained Huber loss minimization

https://blog.csdn.net/u012366767/article/details/81564622

Huber loss minimization with a Gaussian kernel model; I used it in a competition and it worked well

http://www.pudn.com/Download/item/id/3303498.html

MATLAB code:

https://www.mathworks.com/help/stats/regressionkernel.loss.html

Huber Loss

https://www.cnblogs.com/nowgood/p/Huber-Loss.html

Linear fitting: from maximum likelihood estimation to squared error to Huber loss

https://blog.csdn.net/lanchunhui/article/details/50422230
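To tie the Huber links together: the Huber loss is quadratic for residuals |r| <= delta and linear beyond, so its gradient is simply the residual clipped to [-delta, delta]. A sketch of Huber-loss linear regression fit by plain gradient descent; the delta, learning rate, and synthetic outliers are my own illustrative choices, not the papers' settings:

```python
import numpy as np

def huber_grad_fit(X, y, delta=1.0, lr=0.1, n_iter=2000):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ w - y
        # Huber gradient: residual in the quadratic region,
        # clipped to +-delta in the linear (outlier) region.
        g = np.where(np.abs(r) <= delta, r, delta * np.sign(r))
        w -= lr * X.T @ g / len(y)
    return w

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
y = X @ np.array([1.0, 3.0]) + rng.normal(0, 0.1, 200)
y[:10] += 20.0                 # a few gross outliers
w = huber_grad_fit(X, y)
```

Because the outlier gradients are clipped, the fitted line stays close to the inlier trend, which is exactly why a robust loss is preferred for feature correspondence.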

A summary of the principles of spectral clustering

https://www.cnblogs.com/pinard/p/6221564.html

The third definition (of the similarity matrix):

Project notes (4), an experiment: segmenting musical passages by diagonal convolution of a Gaussian kernel over a MIDI self-similarity matrix

https://blog.csdn.net/weixin_41405111/article/details/80274197
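The "third definition" referred to above is the fully connected graph whose weights come from a Gaussian (RBF) kernel, W[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)). A minimal sketch (the sigma value is an arbitrary illustrative choice):

```python
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    # Pairwise squared Euclidean distances via broadcasting,
    # then the RBF kernel: nearby points get weight near 1.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0]])
W = gaussian_similarity(X)
```

The resulting W is symmetric with a unit diagonal; it is the affinity matrix that spectral clustering builds its graph Laplacian from.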

Clustering in data mining

https://blog.csdn.net/leoch007/article/details/80027056

MSE, MAE, and Huber loss explained

https://www.cnblogs.com/hansjorn/p/11458031.html

Decision trees (10): GBDT and an OpenCV source code analysis

https://blog.csdn.net/App_12062011/article/details/52150578

How is the parameter of the Huber loss function chosen in statistics?

https://www.zhihu.com/question/21018545

Likelihood Regression
Affine motion regression -> x
Affine motion regression -> y
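As I understand the notes above, the affine motion is regressed separately for the target x and target y coordinates of the matched keypoints. A sketch of the plain least-squares version of that per-coordinate affine regression; the data setup and function names are my own illustration, not the authors' code, which per the notes above combines this with the Huber loss:

```python
import numpy as np

def fit_affine(src, dst):
    # Design matrix [x, y, 1]; solve one least-squares problem
    # per output coordinate (target x, then target y).
    A = np.column_stack([src, np.ones(len(src))])
    params_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return params_x, params_y

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (50, 2))
# Ground-truth affine map: linear part M plus translation t.
M = np.array([[0.9, -0.2], [0.2, 0.9]])
t = np.array([10.0, -5.0])
dst = src @ M.T + t
px, py = fit_affine(src, dst)
```

Each returned parameter triple is one row of the affine matrix plus its translation component, recovered exactly here because the toy data is noise-free.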

 
