machine_learning
skmygdrs
Machine Learning 2: Gradient Descent
The gradient descent rule:
$$\theta_i := \theta_i - \alpha \frac{\partial}{\partial \theta_i} J(\theta)$$
For the least-squares cost on a single example:
$$\frac{\partial}{\partial \theta_i} J(\theta) = \frac{\partial}{\partial \theta_i} \frac{1}{2}\left(h_\theta(x) - y\right)^2 = \left(h_\theta(x) - y\right)\frac{\partial}{\partial \theta_i}\left(\theta_0 x_0 + \dots + \theta_i x_i + \dots + \theta_n x_n - y\right) = \left(h_\theta(x) - y\right)x_i$$
so the update becomes
$$\theta_i := \theta_i - \alpha\left(h_\theta(x) - y\right)x_i$$
Original · 2016-07-19 18:23:35 · 291 views · 0 comments
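The per-example update above can be sketched in Python; the toy data, learning rate, and epoch count are illustrative assumptions, not from the post:

```python
import numpy as np

def sgd_step(theta, x, y, alpha):
    """One stochastic gradient-descent update for least squares:
    theta_i := theta_i - alpha * (h_theta(x) - y) * x_i."""
    error = x @ theta - y          # h_theta(x) - y
    return theta - alpha * error * x

# Toy data (illustrative): y = 2 * x1, with a bias feature x0 = 1.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

theta = np.zeros(2)
for _ in range(500):               # repeated passes over the data
    for xi, yi in zip(X, y):
        theta = sgd_step(theta, xi, yi, alpha=0.05)

print(theta)  # approaches [0, 2]
```

Because the update is applied one example at a time, this is the stochastic (LMS) variant; summing the gradient over all examples before updating gives batch gradient descent.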
Machine Learning 3: Logistic Regression
Locally weighted fitting.
Original · 2016-07-20 21:29:11 · 363 views · 0 comments
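The excerpt's "locally weighted fitting" refers to locally weighted linear regression; a minimal sketch assuming the conventional Gaussian weighting with bandwidth `tau` (both are standard choices, not specified by the post):

```python
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    """Locally weighted linear regression: weight example i by
    exp(-(x_i - x_query)^2 / (2 tau^2)), fit a weighted
    least-squares line, and evaluate it at x_query."""
    w = np.exp(-((X[:, 1] - x_query) ** 2) / (2 * tau ** 2))
    W = np.diag(w)
    # Weighted normal equations: (X^T W X) theta = X^T W y
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

# Toy data on the exact line y = 3x (bias column of ones first).
xs = np.linspace(0.0, 2.0, 9)
X = np.column_stack([np.ones_like(xs), xs])
y = 3.0 * xs

pred = lwr_predict(X, y, 1.0)
print(pred)  # 3.0 (data on an exact line is recovered exactly)
```

Note the fit is recomputed for every query point, which is what makes the method "local" and also non-parametric.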
Machine Learning 4: Newton's Method for Optimization
Update formula:
$$\theta^{(t+1)} = \theta^{(t)} - H^{-1}\nabla_\theta \ell$$
Hessian matrix:
$$H_{ij} = \frac{\partial^2 \ell}{\partial \theta_i \partial \theta_j}$$
Derivation from the Taylor expansion $f(x) = f(x_0) + \dots$ (excerpt truncated)
Original · 2016-07-28 14:19:40 · 575 views · 0 comments
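In one dimension the update $\theta^{(t+1)} = \theta^{(t)} - H^{-1}\nabla_\theta \ell$ reduces to $x \leftarrow x - f'(x)/f''(x)$; a minimal sketch (the target function is illustrative):

```python
def newton_minimize(grad, hess, x0, iters=20):
    """Newton's method for optimization: x <- x - f'(x)/f''(x),
    the one-dimensional case of theta <- theta - H^{-1} grad."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / hess(x)
    return x

# Minimize f(x) = (x - 3)^2 + 1: f'(x) = 2(x - 3), f''(x) = 2.
x_min = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
print(x_min)  # 3.0 (a single step is exact for a quadratic)
```

The one-step convergence here reflects the derivation: Newton's method minimizes the second-order Taylor approximation exactly, so a quadratic objective is solved in one update.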
Machine Learning 5: The Multivariate Normal (Gaussian) Distribution
Probability density (with mean $\alpha$ and covariance matrix $\beta$):
$$p(x) = \frac{1}{(2\pi)^{n/2}\left|\beta\right|^{1/2}} \exp\left(-\frac{1}{2}(x - \alpha)^T \beta^{-1} (x - \alpha)\right)$$
Original · 2016-07-30 23:45:14 · 6813 views · 0 comments
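The density can be evaluated term by term with NumPy; here the post's $\alpha$ is written `mean` and $\beta$ is `cov`, and the 2-D example values are illustrative:

```python
import numpy as np

def mvn_density(x, mean, cov):
    """Multivariate normal density:
    (2 pi)^(-n/2) |cov|^(-1/2) exp(-1/2 (x-mean)^T cov^{-1} (x-mean))."""
    n = len(mean)
    diff = x - mean
    norm = (2 * np.pi) ** (n / 2) * np.linalg.det(cov) ** 0.5
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

mean = np.zeros(2)
cov = np.eye(2)
p = mvn_density(np.zeros(2), mean, cov)
print(p)  # 1/(2*pi), the standard 2-D normal evaluated at its mean
```

Using `np.linalg.solve` instead of explicitly inverting `cov` is the numerically preferred way to compute $\beta^{-1}(x-\alpha)$.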
Machine Learning 5: Gaussian Discriminant Analysis
If $y \sim \mathrm{Bernoulli}(\phi)$, $x \mid y{=}1 \sim N(\mu_0, \Sigma)$, and $x \mid y{=}0 \sim N(\mu_1, \Sigma)$, then
$$p(y) = \phi^y (1 - \phi)^{1 - y}$$
$$p(x \mid y{=}1) = \frac{1}{(2\pi)^{n/2}\left|\Sigma\right|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_0)^T \Sigma^{-1} (x - \mu_0)\right)$$
(excerpt truncated)
Original · 2016-08-01 10:34:47 · 1045 views · 0 comments
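The model's parameters have closed-form maximum-likelihood estimates: $\phi$ is the fraction of positive labels, the class means follow the post's labeling ($\mu_0$ for $y=1$, $\mu_1$ for $y=0$), and $\Sigma$ is pooled over both classes. A sketch with illustrative synthetic data:

```python
import numpy as np

def fit_gda(X, y):
    """Maximum-likelihood estimates for Gaussian discriminant analysis,
    following the post's labeling: mu0 for class y=1, mu1 for y=0;
    Sigma is the covariance pooled over both classes."""
    phi = y.mean()                   # y ~ Bernoulli(phi)
    mu0 = X[y == 1].mean(axis=0)     # x | y=1 ~ N(mu0, Sigma)
    mu1 = X[y == 0].mean(axis=0)     # x | y=0 ~ N(mu1, Sigma)
    centered = np.where(y[:, None] == 1, X - mu0, X - mu1)
    sigma = centered.T @ centered / len(y)
    return phi, mu0, mu1, sigma

# Illustrative data: two well-separated 2-D clusters.
rng = np.random.default_rng(0)
X1 = rng.normal([3, 3], 1.0, size=(50, 2))   # class y=1
X0 = rng.normal([0, 0], 1.0, size=(50, 2))   # class y=0
X = np.vstack([X1, X0])
y = np.array([1] * 50 + [0] * 50)

phi, mu0, mu1, sigma = fit_gda(X, y)
print(phi)        # 0.5
print(mu0, mu1)   # near [3, 3] and [0, 0]
```

Classification then follows Bayes' rule: predict the class maximizing $p(x \mid y)\,p(y)$ with the fitted parameters.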
Machine Learning 6: Functional and Geometric Margins
Functional margin:
$$\gamma^{(i)} = y^{(i)}\left(w^T x^{(i)} + b\right)$$
Geometric margin: (excerpt truncated)
Original · 2016-08-02 20:57:02 · 458 views · 0 comments
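A quick numeric illustration; the geometric-margin formula the excerpt cuts off is conventionally the functional margin divided by $\lVert w \rVert$ (a standard definition, assumed here rather than quoted from the post):

```python
import numpy as np

def functional_margin(w, b, x, y):
    """gamma^(i) = y^(i) * (w^T x^(i) + b)."""
    return y * (w @ x + b)

def geometric_margin(w, b, x, y):
    """Functional margin divided by ||w||, making it invariant to
    rescaling (w, b) -> (c*w, c*b)."""
    return functional_margin(w, b, x, y) / np.linalg.norm(w)

w, b = np.array([3.0, 4.0]), 1.0
x, y = np.array([1.0, 1.0]), 1

fm = functional_margin(w, b, x, y)
gm = geometric_margin(w, b, x, y)
gm_scaled = geometric_margin(2 * w, 2 * b, x, y)
print(fm, gm, gm_scaled)  # 8.0 1.6 1.6
```

The last line shows why the geometric margin is the meaningful quantity for SVMs: doubling $(w, b)$ doubles the functional margin but leaves the geometric margin unchanged.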
Machine Learning 5: Exponential Family Distributions and the Logistic Distribution
$$P(x \mid y = 1, \eta_1) = b_1(x)\, e^{\eta_1^T T_1(x) - a_1(\eta_1)}$$
$$p(x \mid y = 0, \eta_0) = b_0(x)\, e^{\eta_0^T T_0(x) - a_0(\eta_0)}$$
(excerpt truncated)
Original · 2016-08-02 20:53:34 · 2632 views · 0 comments
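The logistic connection the title points to can be made explicit with the standard Bernoulli example (a textbook fact, not quoted from the post): writing $\mathrm{Bernoulli}(\phi)$ in the exponential-family form above recovers the sigmoid.

$$p(y;\phi) = \phi^y (1-\phi)^{1-y} = \exp\!\left( y \log\frac{\phi}{1-\phi} + \log(1-\phi) \right)$$

so $b(y) = 1$, $T(y) = y$, $\eta = \log\frac{\phi}{1-\phi}$, and $a(\eta) = -\log(1-\phi) = \log(1+e^{\eta})$; inverting the natural parameter gives $\phi = \frac{1}{1+e^{-\eta}}$, the logistic (sigmoid) function.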
Machine Learning 7: Lagrange Multipliers
To minimize $z = f(x,y)$ subject to $h(x,y) = 0$: the constraint implicitly defines $y = \varphi(x)$, so
$$\frac{\partial h(x,y)}{\partial x} + \frac{\partial h(x,y)}{\partial y}\frac{d\varphi(x)}{dx} = 0$$
and since $f(x,y) = f(x,\varphi(x))$, at an extremum
$$\frac{\partial f(x,y)}{\partial x} + \frac{\partial f(x,y)}{\partial y}\frac{d\varphi(x)}{dx} = 0$$
which together are equivalent to the system
$$\begin{cases} h(x,y) = 0 \\[4pt] \dfrac{\partial h(x,y)}{\partial x} + \dfrac{\partial h(x,y)}{\partial y}\dfrac{d\varphi(x)}{dx} = 0 \\[4pt] \dfrac{\partial f(x,y)}{\partial x} + \dfrac{\partial f(x,y)}{\partial y}\dfrac{d\varphi(x)}{dx} = 0 \end{cases}$$
(excerpt truncated)
Original · 2016-08-03 10:30:21 · 431 views · 0 comments
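A standard worked instance of the system above (the example is mine, not from the post): minimize $f(x,y) = x + y$ subject to $h(x,y) = x^2 + y^2 - 1 = 0$. Eliminating $\frac{d\varphi}{dx}$ from the last two equations gives the familiar multiplier form $\nabla f = \lambda \nabla h$:

$$1 = 2\lambda x, \qquad 1 = 2\lambda y \;\Rightarrow\; x = y,$$

and with $x^2 + y^2 = 1$ this yields $x = y = \pm\frac{1}{\sqrt{2}}$; the minimum is $f = -\sqrt{2}$ at $x = y = -\frac{1}{\sqrt{2}}$.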
Machine Learning 7: Lagrangian KKT Conditions
$$\min_w f(w) \quad \text{s.t.} \quad \begin{cases} g_i(w) \le 0, & i = 1, \dots, k \\ h_i(w) = 0, & i = 1, \dots, l \end{cases} \quad \text{[eq.0]}$$
$$L(w,\alpha,\beta) = f(w) + \sum_{i=1}^{k} \alpha_i g_i(w) + \sum_{i=1}^{l} \beta_i h_i(w) \quad \text{[eq.1]}$$
$$\theta_P(w) = \max_{\alpha,\beta:\ \alpha_i \ge 0} L(w,\alpha,\beta) \quad \text{[eq.2]}$$
$$\Rightarrow \theta_P(w) = \begin{cases} f(w) & \text{if } w \text{ satisfies eq.0} \\ \infty & \text{otherwise} \end{cases}$$
$$\min_w \theta_P(w) = \min_w \dots$$
(excerpt truncated)
Original · 2016-08-03 10:32:19 · 803 views · 0 comments
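A one-variable illustration of eq.0 through eq.2 (the example is mine): minimize $f(w) = w^2$ subject to the single inequality constraint $g(w) = 1 - w \le 0$.

$$L(w,\alpha) = w^2 + \alpha(1 - w)$$
$$\theta_P(w) = \max_{\alpha \ge 0} L(w,\alpha) = \begin{cases} w^2 & w \ge 1 \\ \infty & w < 1 \end{cases}$$

so $\min_w \theta_P(w) = 1$ at $w = 1$, recovering the constrained optimum. There, $\frac{\partial L}{\partial w} = 2w - \alpha = 0$ gives $\alpha = 2 \ge 0$ with $\alpha\, g(w) = 0$, exactly as the KKT conditions require.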