Normal Equation
So far, we have been using the gradient descent algorithm to minimize the cost function J(θ). For some linear regression problems, however, we can instead use the normal equation to solve directly for the value of θ that minimizes J(θ).
The normal equation finds the minimizing θ by solving, for each parameter $\theta_j$, the equation obtained by setting the corresponding partial derivative of J(θ) to zero:

$$\frac{\partial}{\partial \theta_j} J(\theta) = 0$$
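For reference, here is the standard matrix-form derivation (assuming the usual least-squares cost from the earlier lessons, written in vectorized form):

$$J(\theta) = \frac{1}{2m}(X\theta - y)^T(X\theta - y), \qquad \nabla_\theta J(\theta) = \frac{1}{m}X^T(X\theta - y)$$

Setting the gradient to zero gives $X^TX\theta = X^Ty$, and therefore $\theta = (X^TX)^{-1}X^Ty$ whenever $X^TX$ is invertible.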
Suppose we use the following data set as our training set. From it we can build the table below:
$x_0$ | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $y$ |
---|---|---|---|---|---|
1 | 2104 | 5 | 1 | 45 | 460 |
1 | 1416 | 3 | 2 | 40 | 232 |
1 | 1534 | 3 | 2 | 30 | 315 |
1 | 852 | 2 | 1 | 36 | 178 |
Here $x_0$ is an extra feature variable we add, equal to 1 for every training example. From $x_0$ through $x_4$ we can build the training set's feature matrix $X$, and from $y$ the target vector $y$. The normal equation then solves for the parameters directly: $\theta = (X^TX)^{-1}X^Ty$.
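As a minimal Octave sketch of this computation (the variable names are illustrative), using the four training examples from the table:

```octave
% Feature matrix: a column of ones (x0) followed by the four features
X = [1 2104 5 1 45;
     1 1416 3 2 40;
     1 1534 3 2 30;
     1  852 2 1 36];

% Target vector: the y column of the table
y = [460; 232; 315; 178];

% Normal equation: theta = (X'X)^(-1) X'y, computed with pinv
theta = pinv(X' * X) * X' * y
```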
In Octave, the normal equation is thus written as: pinv(X' * X) * X' * y.
Note: the normal-equation method cannot be used when $X^TX$ is non-invertible, which typically happens when some features are linearly dependent, or when there are too many features (i.e. the number of features exceeds the number of training examples).
Comparison of the gradient descent algorithm and the normal equation:
Gradient Descent | Normal Equation |
---|---|
A learning rate α must be chosen | No learning rate needed |
Needs many iterations | Solved in a single computation |
Works well even when the number of features n is large | Typically suitable when n ≤ 10,000 |
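To make the contrast concrete, here is a small Octave sketch (the variable names, alpha = 0.1, and 1000 iterations are illustrative choices, not from the course) fitting the same data both ways:

```octave
% Same training data as in the table above
X = [1 2104 5 1 45; 1 1416 3 2 40; 1 1534 3 2 30; 1 852 2 1 36];
y = [460; 232; 315; 178];

% Normal equation: a single computation, no alpha, no feature scaling
theta_ne = pinv(X' * X) * X' * y;

% Gradient descent: scale the features first, then iterate
mu    = mean(X(:, 2:end));            % per-feature means
sigma = std(X(:, 2:end));             % per-feature standard deviations
Xs    = [ones(rows(X), 1), (X(:, 2:end) - mu) ./ sigma];

alpha = 0.1;                          % learning rate: must be chosen
theta = zeros(columns(Xs), 1);
m     = length(y);
for iter = 1:1000                     % many iterations needed
  theta = theta - (alpha / m) * Xs' * (Xs * theta - y);
end

% theta is expressed in the scaled feature space, so compare predictions:
disp([X * theta_ne, Xs * theta])      % the two columns should agree
```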
Supplementary Notes
Normal Equation
Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize J by explicitly taking its derivatives with respect to the $\theta_j$'s and setting them to zero. This allows us to find the optimal θ without iteration. The normal equation formula is given below:
$\theta = (X^TX)^{-1}X^Ty$
There is no need to do feature scaling with the normal equation.
The following is a comparison of gradient descent and the normal equation:
Gradient Descent | Normal Equation |
---|---|
Need to choose α | No need to choose α |
Needs many iterations | No need to iterate |
$O(kn^2)$ | $O(n^3)$, need to calculate inverse of $X^TX$ |
Works well when n is large | Slow if n is very large |
With the normal equation, computing the inverse has complexity $O(n^3)$. So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from the normal equation to an iterative process.
Normal Equation Noninvertibility
When implementing the normal equation in Octave we want to use the 'pinv' function rather than 'inv'. The 'pinv' function will give you a value of θ even if $X^TX$ is not invertible.
If $X^TX$ is noninvertible, the common causes are:
- Redundant features, where two features are very closely related (i.e. they are linearly dependent)
- Too many features (e.g. m ≤ n). In this case, delete some features or use "regularization" (to be explained in a later lesson).
Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features.
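As a small, hypothetical illustration of the redundant-feature case: below, the third column of X is just the second column converted from square feet to square meters, so the columns are linearly dependent and $X^TX$ is singular, yet 'pinv' still returns a usable θ:

```octave
% x2 is x1 expressed in different units, so the features are linearly dependent
x1 = [2104; 1416; 1534; 852];
X  = [ones(4, 1), x1, x1 * 0.0929];   % 1 ft^2 is about 0.0929 m^2
y  = [460; 232; 315; 178];

rank(X' * X)                    % prints 2, not 3: X'X is singular
% inv(X' * X)                   % would warn: matrix singular to machine precision
theta = pinv(X' * X) * X' * y   % pinv still returns a (minimum-norm) solution
```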