I'm about halfway through the course, so I'm going back and organizing my thoughts from the beginning; otherwise there is simply too much material and I'll have forgotten it all by the end. My grasp of the content is not yet at the point where I could reproduce it myself, and I have no hands-on experience, so this first draft just lists the key points; I'll keep folding in new insights as I go.
1. Introduction
Machine learning is an interdisciplinary field, touching neuroscience, computational biology, robotics, natural language processing, and more. It gives computers a new capability: making machines accomplish tasks without being explicitly programmed, e.g. forecasting trends in data, pattern recognition, and autonomous driving. This course covers the following topics:
Supervised Learning Algorithms
Here the training samples include the correct answers, i.e. S = {(x^{(i)}, y^{(i)})}. Based on the values y^{(i)} can take, we distinguish:
- the continuous case, called regression, e.g. the familiar linear regression;
- the discrete case, called classification; with k possible classes, y^{(i)} ∈ {1, 2, ..., k}.
In fact regression and classification can be converted into one another: a continuous y can be discretized by partitioning its range into intervals, and classification with a large enough k can approximate regression.
Learning Theory
Extremely important: it covers sample complexity, its relation to generalization error, model/algorithm selection, and so on.
Unsupervised Learning Algorithms
Here no correct answers are given; the machine tries to discover the latent structure of the data on its own, e.g. clustering.
Reinforcement Learning
Supervised Learning
Our goal is, given a training set, to learn a function h mapping from X to Y so that h(x) is a "good" predictor for the corresponding value of y. The process is:
Clearly, how the learning algorithm produces h is the crux. Of the main algorithms I've seen so far, some (e.g. Linear Regression, SVM) fix a cost function measuring the error and then find the parameters minimizing it, while others (Logistic Regression, Naive Bayes, Gaussian Discriminant Analysis, Bayesian Logistic Regression) take a maximum-likelihood-style approach. At heart the two views are connected, though I can't yet articulate exactly how.
Conventional Notations:
- m = # training examples
- n = # features
- x = features
- y = target
- θ = parameters
- h = hypothesis, i.e. h_θ(x), the hypothesis h parametrized by θ
- J(θ) = cost function, a hand-chosen measure of the error
2. Linear Regression
Linear regression is the simplest supervised learning algorithm, and it is a regression algorithm. The idea: explicitly take the hypothesis class H = {h_θ(x) = ∑_{i=0}^{n} θ_i x_i} (with x_0 := 1 for the intercept), choose the cost function J(θ) = (1/2)(h_θ(x) − y)², and then find the parameters θ by batch gradient descent / stochastic gradient descent / a closed-form formula / Newton's method, etc. Intuitively, this fits a "line" (a hyperplane) to m points in (n+1)-dimensional space.
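As a minimal sketch of these definitions (the helper names h and J and the toy data are my own, not from the lecture):

```python
import numpy as np

def h(theta, x):
    """Hypothesis h_theta(x) = sum_{i=0}^{n} theta_i * x_i (x_0 = 1 is the intercept)."""
    return theta @ x

def J(theta, X, y):
    """Squared-error cost summed over the training set; the 1/2 simplifies the gradient."""
    residual = X @ theta - y
    return 0.5 * residual @ residual

# Toy training set generated from y = 1 + 2x, with an x_0 = 1 intercept column
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(J(np.array([1.0, 2.0]), X, y))  # 0.0 — the true parameters fit perfectly
```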
Batch Gradient Descent
The idea: starting from some point on a mountain (the surface J = J(θ)), how do we reach a valley bottom (a local minimum)? Look around, find the direction of steepest descent, and take a small step. Repeat. Note this is an extremum-finding / optimization algorithm, not itself a learning algorithm.
- Goal
  min_θ J(θ)
- Algorithm
  Repeat till convergence:
    θ := θ − α ∇J(θ)
α controls the step size (the learning rate), but as we approach an extremum ∇J(θ) shrinks, so the steps naturally tend toward 0. Too large a step overshoots the extremum; too small a step makes learning slow.
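The loop above can be sketched directly (a toy example of mine, not from the lecture): minimize J(θ) = (θ − 3)², whose gradient is 2(θ − 3).

```python
import numpy as np

def gradient_descent(grad, theta0, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Repeat theta := theta - alpha * grad(theta) until the step is tiny."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = alpha * grad(theta)
        theta = theta - step
        if np.linalg.norm(step) < tol:  # near an extremum the gradient (and step) shrinks
            break
    return theta

# J(theta) = (theta - 3)^2 has its minimum at theta = 3; grad J(theta) = 2*(theta - 3)
print(gradient_descent(lambda t: 2 * (t - 3), theta0=[0.0]))  # ≈ [3.]
```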
Choice of Cost Function
In two dimensions, this means drawing a line through the m points that fits them as well as possible. A natural error measure is the distance between the sample value and the predicted value, so we can try J(θ) = (1/2) ∑_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)})² (the 1/2 is just there to make later results seem nice).
Probabilistic Interpretation
There is another way to understand the choice J(θ) = (1/2)(h_θ(x) − y)²: assume the error follows a Gaussian distribution, then derive θ by maximum likelihood estimation.
Let y^{(i)} = θ^T x^{(i)} + ε^{(i)}, where ε^{(i)} is the error.
Assume ε^{(i)} ∼ N(0, σ²), which to some extent makes sense: by the Central Limit Theorem, the sum of many independent random variables (which in this case are the unmodeled effects) tends toward a Gaussian distribution.
This implies that

p(y^{(i)} | x^{(i)}; θ) = (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})² / (2σ²))

where "; θ" simply means "parametrized by θ", since for now we adopt the frequentist view of probability, which regards θ as a fixed value that merely happens to be unknown to us at the moment.
Now we define the likelihood function:

L(θ) = L(θ; X, y⃗) = p(y⃗ | X; θ)

where X is the design matrix containing all the training examples.
Our goal of getting the best h using the training set could then be interpreted as maximizing L(θ).
Now we also assume that the ε^{(i)}'s are i.i.d., which is again fairly convincing unless you have a bad training set. Then

L(θ) = ∏_{i=1}^{m} p(y^{(i)} | x^{(i)}; θ) = ∏_{i=1}^{m} (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})² / (2σ²))
For convenience, define the log-likelihood function ℓ(θ) = ln L(θ). Then

ℓ(θ) = m ln(1/(√(2π) σ)) − (1/σ²) · (1/2) ∑_{i=1}^{m} (y^{(i)} − θ^T x^{(i)})²

Therefore maximizing the log-likelihood function is exactly the same thing as minimizing the chosen cost function (1/2) ∑_{i=1}^{m} [h_θ(x^{(i)}) − y^{(i)}]².
Once J(θ) is fixed, applying Batch Gradient Descent gives the update

θ_j := θ_j + α ∑_{i=1}^{m} (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}

The sum runs over all sample points, i.e. every descent step tries to shrink the total error. When the training set is large, each step requires summing over all of it, which is too expensive; hence Stochastic Gradient Descent.
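The batch update can be vectorized as follows (a sketch on made-up data; X is assumed to already carry the x_0 = 1 intercept column):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, n_iters=5000):
    """Each iteration uses ALL m examples:
    theta_j := theta_j + alpha * sum_i (y^(i) - h_theta(x^(i))) * x_j^(i)
    """
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        residual = y - X @ theta                  # y^(i) - h_theta(x^(i)) for every i
        theta = theta + alpha * (X.T @ residual)  # sums over all m examples at once
    return theta

# Toy data from y = 1 + 2x with an intercept column x_0 = 1
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 1.0 + 2.0 * np.arange(5.0)
print(batch_gradient_descent(X, y))  # ≈ [1., 2.]
```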
Stochastic Gradient Descent
Instead of using all training examples in every iteration, we use only one at a time.

Repeat until convergence {
  for i = 1 to m:
    θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}   (simultaneously for every j)
}
I'll add a comparison of the two GD variants once I have more hands-on experience.
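For contrast, here is a stochastic version of the same fit (same made-up data as before; shuffling the example order each epoch is my own choice, not from the notes):

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.05, n_epochs=500, seed=0):
    """Update theta with ONE example at a time: O(n) work per step instead of O(m*n)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):  # visit examples in a fresh random order
            err = y[i] - X[i] @ theta      # scalar residual for this one example
            theta = theta + alpha * err * X[i]
    return theta

X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 1.0 + 2.0 * np.arange(5.0)
print(stochastic_gradient_descent(X, y))  # ≈ [1., 2.]
```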
Formula Method
Linear Regression is simple enough that θ can even be written down in closed form.
Introduce the matrix derivative: for f : ℝ^{m×n} → ℝ, define (∇_A f(A))_{ij} := ∂f/∂A_{ij}.
Properties (the standard trace identities used below):
- ∇_A tr(AB) = B^T
- ∇_{A^T} f(A) = (∇_A f(A))^T
- ∇_A tr(ABA^T C) = CAB + C^T A B^T

Then, writing the cost in matrix form,

J(θ) = (1/2) (Xθ − y⃗)^T (Xθ − y⃗)

setting ∇_θ J(θ) = 0 shall give us θ. Using the properties to eliminate θ^T,

∇_θ J(θ) = X^T X θ − X^T y⃗

Plug in ∇_θ J(θ) = 0 and we have

X^T X θ = X^T y⃗   ⟹   θ = (X^T X)^{−1} X^T y⃗

and this result is called the "Normal Equation".
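Numerically, one would solve the linear system rather than form the inverse of X^T X explicitly (the toy data here is a made-up illustration):

```python
import numpy as np

# Normal equation: X^T X theta = X^T y  =>  theta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones(5), np.arange(5.0)])  # design matrix with x_0 = 1 column
y = 1.0 + 2.0 * np.arange(5.0)                     # targets from y = 1 + 2x
theta = np.linalg.solve(X.T @ X, X.T @ y)          # solving beats forming an inverse
print(theta)  # [1. 2.]
```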