Computer vision 1

1. SVM

  1. Lagrangian Theory
    Given an optimization problem with objective function f(w) and equality constraints $g_i(w)=0$, the Lagrangian is

$$
L(w,\alpha)=f(w)-\sum_{i=1}^{n}\alpha_i g_i(w)
$$

$$
L(w,b,\alpha)=\frac{1}{2}\|w\|^2-\sum_{i=1}^{N}\alpha_i\bigl(y_i(w\cdot x_i+b)-1\bigr)
$$
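The second Lagrangian corresponds to the hard-margin SVM primal problem; writing it out (the standard formulation, added here only for context since the notes jump straight to the Lagrangian):

$$
\min_{w,b}\;\frac{1}{2}\|w\|^2
\quad\text{s.t.}\quad y_i(w\cdot x_i+b)\ge 1,\quad i=1,\dots,N
$$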

  2. Karush-Kuhn-Tucker (KKT) conditions
    For a solution of a nonlinear programming problem (minimization or maximization) to be optimal, the following conditions must hold:

$$
\begin{array}{l}
\dfrac{\partial L(w,\alpha)}{\partial w}=0\\[4pt]
g_i(w^\ast)\geq 0\\[2pt]
\alpha_i\geq 0\\[2pt]
\alpha_i\, g_i(w^\ast)=0
\end{array}
$$
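Applying the first (stationarity) condition to the SVM Lagrangian above gives the intermediate step that leads to the dual form used below (a standard derivation, filled in here for completeness):

$$
\frac{\partial L}{\partial w}=0\;\Rightarrow\;w=\sum_{i=1}^{N}\alpha_i y_i x_i,
\qquad
\frac{\partial L}{\partial b}=0\;\Rightarrow\;\sum_{i=1}^{N}\alpha_i y_i=0
$$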

  3. Support Vector

Points that are not on the margin boundary do not contribute to the solution, so their $\alpha_i$ will be 0; the points with $\alpha_i>0$ are the support vectors. Substituting back gives the dual objective

$$
L(w,b,\alpha)=\sum_{i=1}^{n}\alpha_i-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_i\alpha_j\, y_i y_j\, x_i\cdot x_j
$$
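A minimal sketch of this idea, assuming scikit-learn and NumPy are available: after fitting a linear SVM on toy data, only a small subset of points end up with nonzero dual coefficients, and those are exactly the support vectors.

```python
# Sketch: fit a linear SVM and inspect which points become support vectors,
# i.e. the points whose dual coefficients alpha_i are nonzero.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters as toy data.
X = np.vstack([rng.normal(loc=-2, size=(20, 2)), rng.normal(loc=+2, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

clf = SVC(kernel="linear", C=1e3).fit(X, y)

print("number of support vectors:", len(clf.support_))
print("support vector indices:", clf.support_)
# dual_coef_ holds alpha_i * y_i for the support vectors only;
# all other training points have alpha_i = 0 and do not affect the decision function.
print("alpha_i * y_i:", clf.dual_coef_)
```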

  4. Kernel Trick
  • Problem: computing in a high-dimensional space is expensive. Idea: project the data into a much higher-dimensional space so that a separating hyper-plane can be found, while the kernel function eliminates the computationally expensive operations in that higher dimension (only inner products in the original space are needed).
  • For example, a degree-2 polynomial kernel on 2-D inputs is equivalent to projecting into a six-dimensional feature space (see the numerical check below).
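A small numerical check of that equivalence (a sketch; the explicit 6-D feature map below is the standard one for the kernel $(x\cdot z + 1)^2$ and is an assumption, since the notes do not spell it out):

```python
# Sketch: verify that the degree-2 polynomial kernel (x.z + 1)^2 on 2-D inputs
# equals an ordinary dot product after an explicit 6-dimensional feature map.
import numpy as np

def phi(v):
    """Explicit 6-D feature map for the kernel (x.z + 1)^2 with 2-D inputs."""
    x1, x2 = v
    return np.array([1.0,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

kernel_value = (x @ z + 1) ** 2        # cheap: stays in 2-D
explicit_value = phi(x) @ phi(z)       # expensive route: explicit 6-D projection

print(kernel_value, explicit_value)    # both print 4.0
```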

2. Random forest

  1. Bagging - Bootstrap Aggregation
  • The training datasets are different while the algorithm is the same.
  • The random forest takes advantage of this by allowing each individual tree to randomly sample from the dataset with replacement, resulting in different trees. This process is known as bagging.
  2. Random Forest
  • The training dataset is the same while the learners are different.
  • Each tree in a random forest can pick only from a random subset of features. This forces even more variation amongst the trees in the model and ultimately results in lower correlation across trees and more diversification (see the sketch after this list).
  3. The basic theory of random forest
  • The reason for this wonderful effect is that the trees protect each other from their individual errors.
  • Two prerequisites: there is actual signal in the features, and the predictions made by the individual trees have low correlation with each other.
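A compact sketch of the two sources of randomness above, bootstrap sampling of rows plus a random subset of features per tree (names such as `n_trees` and `max_features` are illustrative, not from the original notes; binary 0/1 labels are assumed):

```python
# Sketch of random-forest training: each tree gets a bootstrap sample of rows
# (bagging) and is restricted to a random subset of feature columns.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, n_trees=25, max_features=2, seed=0):
    rng = np.random.default_rng(seed)
    forest = []
    n_samples, n_features = X.shape
    for _ in range(n_trees):
        rows = rng.integers(0, n_samples, size=n_samples)                 # bootstrap: sample rows with replacement
        cols = rng.choice(n_features, size=max_features, replace=False)   # random feature subset
        tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        forest.append((tree, cols))
    return forest

def predict_forest(forest, X):
    # Majority vote across trees, each tree seeing only its own feature subset
    # (assumes binary 0/1 labels).
    votes = np.stack([tree.predict(X[:, cols]) for tree, cols in forest])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

`fit_forest(X, y)` returns a list of (tree, feature-subset) pairs; scikit-learn's `RandomForestClassifier` implements the same idea with many more options.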

3. Tips

  • Conduct validation on a held-out set.
  • Conduct cross-validation when the training set is small.
  • Conduct leave-one-out validation: train with all the data except one sample, and repeat the process for every sample (see the sketch below).
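A minimal sketch of the last two tips, assuming scikit-learn; the dataset and model here are placeholders:

```python
# Sketch: k-fold cross-validation and leave-one-out on a small toy dataset.
import numpy as np
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                 # 30 samples, 4 features (toy data)
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # simple synthetic labels

model = SVC(kernel="linear")

# 5-fold cross-validation: useful when the training set is small.
cv_scores = cross_val_score(model, X, y, cv=5)
print("5-fold accuracy:", cv_scores.mean())

# Leave-one-out: train on all samples except one, repeat for every sample.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", loo_scores.mean())
```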