Normal Equation
Note: [8:00 to 8:44 - The design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m, because for all m training examples there are only two features, x0 and x1. 12:56 - The X matrix is m by (n+1) and NOT n by n.]
Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize J by explicitly taking its derivatives with respect to the θj's and setting them to zero. This allows us to find the optimal θ without iteration. The normal equation formula is given below:
$$\theta = (X^TX)^{-1}X^Ty$$
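As a brief sketch of where this comes from (not spelled out in these notes): writing the course's cost function $J(\theta) = \frac{1}{2m}\sum_i (h_\theta(x^{(i)}) - y^{(i)})^2$ in matrix form and setting its gradient to zero gives

$$
\begin{aligned}
J(\theta) &= \frac{1}{2m}(X\theta - y)^T(X\theta - y) \\
\nabla_\theta J(\theta) &= \frac{1}{m}X^T(X\theta - y) = 0 \\
X^TX\,\theta &= X^Ty \quad\Longrightarrow\quad \theta = (X^TX)^{-1}X^Ty
\end{aligned}
$$

assuming $X^TX$ is invertible.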
There is no need to do feature scaling with the normal equation.
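As a minimal sketch of the formula in code (assuming NumPy and a made-up housing-price example; the notes themselves give no code or data):

```python
import numpy as np

# Illustrative data (not from the course): m = 4 training examples,
# one feature x1 (house size), targets y (price).
sizes = np.array([2104.0, 1416.0, 1534.0, 852.0])
y = np.array([460.0, 232.0, 315.0, 178.0])

# Design matrix X is m x (n+1): a column of ones (x0) plus the feature.
X = np.column_stack([np.ones_like(sizes), sizes])

# theta = (X^T X)^{-1} X^T y. Solving the linear system is numerically
# safer than forming the inverse explicitly, but it is the same formula.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # [theta0 (intercept), theta1 (slope)]
```

Note that there is no feature-scaling step: the solve works on the raw sizes directly.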
The following is a comparison of gradient descent and the normal equation:
| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose α | No need to choose α |
| Needs many iterations | No need to iterate |
| O(kn²), for k iterations | O(n³), need to calculate inverse of XᵀX |
| Works well when n is large | Slow if n is very large |
With the normal equation, computing the inversion has complexity O(n³). So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from the normal equation to an iterative process.
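For contrast, the iterative alternative is the gradient descent update from earlier in the course. A minimal sketch, assuming X already includes the x0 = 1 column and (unlike with the normal equation) scaled features; α and the iteration count here are illustrative choices, not values from the course:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    """Batch gradient descent for linear regression.

    Each pass applies the simultaneous update
        theta := theta - (alpha/m) * X^T (X theta - y),
    i.e. the vectorized form of the per-theta_j update rule.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        theta -= (alpha / m) * (X.T @ (X @ theta - y))
    return theta
```

Each iteration needs only matrix-vector products, which is why this scales to large n where the O(n³) solve does not.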