# Andrew Ng Machine Learning: Logistic Regression

### Hypothesis Function

$h_{\theta}(x)=\theta_{0}+\theta_{1}x_{1}+\theta_{2}x_{2}+\cdots+\theta_{n}x_{n}=\theta^{T}x$

$h_{\theta}(x)=g\left(\theta^{T}x\right)$
$g(z)=\frac{1}{1+e^{-z}}$

$h_{\theta}(x)=P\left(y=1\mid x;\theta\right)$
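The sigmoid hypothesis above can be sketched in NumPy (function names are my own; `X` is assumed to already contain a leading column of ones for the bias term):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, X):
    """h_theta(x) = g(theta^T x), applied row-wise to the design matrix X."""
    return sigmoid(X @ theta)
```

Each output is the modeled probability $P(y=1\mid x;\theta)$; with $\theta=0$ every prediction is exactly $0.5$.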

### Cost Function

$J(\theta)=-\frac{1}{m}\left(y^{T}\log\left(g(X\theta)\right)+(1-y)^{T}\log\left(1-g(X\theta)\right)\right)$
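A minimal vectorized sketch of this cross-entropy cost (my own helper names, NumPy instead of the course's Octave):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """J(theta) = -(1/m) * (y^T log(h) + (1-y)^T log(1-h)), h = g(X theta)."""
    m = len(y)
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1.0 - y) @ np.log(1.0 - h)) / m
```

As a sanity check, at $\theta=0$ every $h=0.5$, so the cost is $\log 2\approx 0.693$ regardless of the data.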

### Gradient Descent

$\nabla J(\theta)=\frac{1}{m}X^{T}\left(g(X\theta)-y\right)$
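Plugging this gradient into plain batch gradient descent might look like the following sketch (learning rate and iteration count are illustrative, not from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(theta, X, y):
    """grad J(theta) = (1/m) X^T (g(X theta) - y)."""
    m = len(y)
    return X.T @ (sigmoid(X @ theta) - y) / m

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Repeatedly step against the gradient from theta = 0."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= alpha * gradient(theta, X, y)
    return theta
```

On a trivially separable dataset the fitted parameters push the sigmoid toward the correct labels.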

### Regularization

With a high-order polynomial feature mapping, the hypothesis can overfit the training set:

$g\left(\theta_{0}+\theta_{1}x_{1}+\theta_{2}x_{2}+\cdots+\theta_{26}x_{1}x_{2}^{5}+\theta_{27}x_{2}^{6}\right)$

$J(\theta)=-\frac{1}{m}\left(y^{T}\log\left(g(X\theta)\right)+(1-y)^{T}\log\left(1-g(X\theta)\right)\right)+\frac{\lambda}{2m}\theta^{T}\theta$

$J(\theta)=\frac{1}{2m}(X\theta-y)^{T}(X\theta-y)+\frac{\lambda}{2m}\theta^{T}\theta$

The regularized normal equation follows; the first diagonal entry of the matrix is $0$ because, by convention, the bias term $\theta_0$ is not regularized:

$\theta=\left(X^{T}X+\lambda\begin{bmatrix}0& & & \\ & 1& & \\ & & \ddots & \\ & & & 1\end{bmatrix}\right)^{-1}X^{T}y$
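This closed-form solution is a one-liner with `numpy.linalg.solve` (a sketch with my own function name; solving the linear system is preferred over forming the inverse explicitly):

```python
import numpy as np

def regularized_normal_equation(X, y, lam):
    """theta = (X^T X + lam * L)^{-1} X^T y, where L = diag(0, 1, ..., 1)
    so the bias term theta_0 is not penalized."""
    n = X.shape[1]
    L = np.eye(n)
    L[0, 0] = 0.0  # do not regularize the bias term
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)
```

With $\lambda=0$ this reduces to the ordinary normal equation, recovering an exact fit on exactly linear data.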

So~, that's all for week two. Thanks for reading patiently.