# [深度之眼 Machine Learning Bootcamp, Session 4] Linear Regression

### Basic Concepts

The hypothesis is a linear function of the input features (shown here with two features):

$h_\theta(x) = \theta_0 + \theta_{1}x_1 + \theta_2 x_2$

### Loss Function

The cost is the mean squared error over the $n$ training examples (the factor $\frac{1}{2}$ is there to simplify the derivative):

$J(\theta) = \frac {1}{2n} \sum_{i=1}^n \left( h_\theta (x^{(i)}) - y^{(i)} \right)^2$
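As a quick numerical sketch, the loss above can be computed with NumPy; the data and parameter values below are made up purely for illustration:

```python
import numpy as np

# Toy data: n = 3 examples, 2 features, with a leading column of ones for theta_0.
X = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 0.0],
              [1.0, 3.0, 1.0]])
y = np.array([3.0, 2.0, 4.0])
theta = np.array([0.5, 1.0, 0.5])

def cost(theta, X, y):
    """J(theta) = (1/2n) * sum of squared residuals."""
    n = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * n)

print(cost(theta, X, y))
```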

### Parameter Learning

#### Gradient Descent

$\theta_j \coloneqq \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)$

For a single training example $(x, y)$, with the convention $x_0 = 1$, the partial derivative of the squared-error term is:

\begin{aligned} \frac{\partial}{\partial \theta_j} J(\theta) &= \frac{\partial}{\partial \theta_j} \frac{1}{2} (h_\theta(x) - y)^2 \\ &=2 \cdot \frac{1}{2} (h_\theta(x) - y) \cdot \frac{\partial}{\partial \theta_j} (h_\theta(x) - y) \\ &=(h_\theta(x) - y)\cdot\frac{\partial}{\partial \theta_j}\left(\sum_{i=0}^{d}\theta_i x_i - y\right)\\ &=(h_\theta(x) - y)x_j \end{aligned}

So the per-example update is:

$\theta_j \coloneqq \theta_j - \alpha\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)},\forall j\in\{0,1,\cdots,d\}$

Equivalently, in vector form:

$\theta \coloneqq \theta - \alpha\left(h_\theta(x^{(i)})-y^{(i)}\right)x^{(i)}$
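A minimal batch-gradient-descent sketch in NumPy, applying this update averaged over all examples; the synthetic data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data generated from known parameters (an assumption for the demo).
n, d = 200, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])  # prepend x_0 = 1
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta + 0.01 * rng.normal(size=n)

theta = np.zeros(d + 1)
alpha = 0.1
for _ in range(1000):
    # Batch gradient of J(theta) = (1/2n) * ||X theta - y||^2
    grad = X.T @ (X @ theta - y) / n
    theta -= alpha * grad

print(theta)  # should be close to true_theta
```

Replacing the batch gradient with the gradient of a single randomly chosen example gives the stochastic update derived above.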

#### Normal Equation

Let $X$ be the design matrix whose rows are the training examples:

$X=\begin{bmatrix} —(x^{(1)})^T — \\ —(x^{(2)})^T —\\ \vdots \\ —(x^{(n)})^T— \end{bmatrix}$

Let $\vec{y}$ be the $n$-dimensional vector of all target values:

$\vec{y} = \begin{bmatrix} y^{(1)}\\ y^{(2)}\\ \vdots\\ y^{(n)}\\ \end{bmatrix}$

\begin{aligned} X\theta -\vec{y} &=\begin{bmatrix} (x^{(1)})^T\theta\\ \vdots\\ (x^{(n)})^T\theta\\ \end{bmatrix}-\begin{bmatrix} y^{(1)}\\ \vdots\\ y^{(n)}\\ \end{bmatrix}\\ &=\begin{bmatrix} h_\theta(x^{(1)})-y^{(1)}\\ \vdots\\ h_\theta(x^{(n)})-y^{(n)}\\ \end{bmatrix} \end{aligned}

Dropping the constant factor $\frac{1}{n}$ (it does not change the minimizer), the cost can be written in matrix form:

\begin{aligned} J(\theta)&= \frac{1}{2}\sum_{i=1}^{n}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2\\ &=\frac{1}{2} (X\theta-\vec{y})^T(X\theta-\vec{y}) \end{aligned}
Taking the gradient of $J(\theta)$ gives:
\begin{aligned} \nabla_\theta J(\theta) &= \nabla_\theta \frac{1}{2}(X\theta-\vec{y})^T(X\theta-\vec{y})\\ &= \frac{1}{2}\nabla_\theta\left((X\theta)^TX\theta-(X\theta)^T\vec{y}-\vec{y}^T(X\theta)+\vec{y}^T\vec{y}\right)\\ &= \frac{1}{2}\nabla_\theta\left(\theta^T(X^TX)\theta-\vec{y}^T(X\theta)-\vec{y}^T(X\theta)\right)\\ &=\frac{1}{2}\nabla_\theta\left(\theta^T(X^TX)\theta- 2(\vec{y}^TX)\theta\right)\\ &= \frac{1}{2}\nabla_\theta\left(\theta^T(X^TX)\theta-2(X^T\vec{y})^T\theta\right)\\ &= \frac{1}{2}(2X^TX\theta-2X^T\vec{y})\\ &= X^TX\theta-X^T\vec{y} \end{aligned}

Setting the gradient to zero gives the normal equation:

$X^TX\theta = X^T\vec{y}$

whose solution is:

$\theta = (X^TX)^{-1}X^T\vec{y}$
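The closed-form solution can be sketched with NumPy on synthetic, noise-free data (all values illustrative); `np.linalg.solve` is used rather than forming the explicit inverse, which is more numerically stable:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
true_theta = np.array([0.5, -1.0, 2.0])
y = X @ true_theta  # noise-free targets, so the fit should be exact

# Normal equation: solve (X^T X) theta = X^T y instead of inverting X^T X.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)
```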

When $X^TX$ is not invertible, examine the training-set features carefully and remove strongly correlated, redundant ones, or apply regularization. Alternatively, use the pseudoinverse of $X^TX$.
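A sketch of the pseudoinverse fallback: with a perfectly correlated (duplicated) feature, $X^TX$ is singular and has no ordinary inverse, but `np.linalg.pinv` still returns a least-squares solution (the minimum-norm one among the infinitely many minimizers). The data here is contrived for illustration:

```python
import numpy as np

# Design matrix with a perfectly correlated feature:
# the third column is exactly 2x the second, so X^T X is singular.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(4), x1, 2 * x1])
y = np.array([3.0, 5.0, 7.0, 9.0])  # y = 1 + 2 * x1

# Pseudoinverse solution: theta = (X^T X)^+ X^T y.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y

# The predictions still fit the data exactly, even though theta is not unique.
print(X @ theta)
```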
