Just as in the last blog post, input the Octave commands shown in the figure below:
One thing differs from the previous single-variable example: we add a step called feature normalization. Normalizing the features speeds up gradient descent.
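The Octave commands are in the figure, so here is a rough sketch of the same idea in Python/NumPy. The function name `feature_normalize` and the example matrix are my own illustration, not from the original post; the technique is standard mean normalization (subtract each feature's mean, divide by its standard deviation).

```python
import numpy as np

def feature_normalize(X):
    """Mean-normalize each feature column of X.

    Returns the normalized matrix plus the per-feature mean and
    standard deviation, which are needed later to normalize new inputs.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    X_norm = (X - mu) / sigma
    return X_norm, mu, sigma

# Hypothetical example: two features on very different scales
# (house size in square feet, number of bedrooms)
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 4.0]])
X_norm, mu, sigma = feature_normalize(X)
```

After normalization every column has mean 0 and standard deviation 1, so gradient descent takes similarly sized steps in every direction.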
To apply it, input the commands shown in the picture below:
J(θ) can also be written like this:
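The formula itself is in the figure; for reference, the standard squared-error cost for linear regression is:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
```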
so we can use the vectorized version of J(θ), like this:
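The vectorized cost is usually written J(θ) = (1/2m)·(Xθ − y)ᵀ(Xθ − y). Since the Octave version is only in the figure, here is a hedged Python/NumPy sketch; `compute_cost` and the toy data are my own names for illustration.

```python
import numpy as np

def compute_cost(X, y, theta):
    # Vectorized cost: J(theta) = (1 / 2m) * (X@theta - y)' @ (X@theta - y)
    m = len(y)
    err = X @ theta - y
    return (err @ err) / (2 * m)

# Tiny example; X already includes the intercept column of ones
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
J = compute_cost(X, y, np.zeros(2))  # cost with theta = [0, 0]
```

No explicit loop over training examples is needed: the matrix product sums the squared errors in one shot.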
The same goes for the gradient descent function:
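Again the Octave code is in the figure, so this is a minimal Python/NumPy sketch of vectorized gradient descent, assuming the standard simultaneous update θ := θ − (α/m)·Xᵀ(Xθ − y); the function name and hyperparameters are my own choices.

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Run num_iters steps of batch gradient descent with learning rate alpha."""
    m = len(y)
    for _ in range(num_iters):
        # Vectorized simultaneous update of all theta components
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
    return theta

# Same toy data as above; the exact fit is theta = [0, 1]
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=2000)
```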
Then we get theta!