# Coursera Machine Learning Notes (3) - Multivariate Linear Regression


## 1. Hypothesis Function and Gradient Descent

##### 1.1 Hypothesis Function

With a single feature, the hypothesis is:

$h_\theta(x)=\theta_0+\theta_1x$

With $n$ features, it generalizes to:

$h_\theta(x)=\theta_0+\theta_1x_1+\theta_2x_2+\dots+\theta_nx_n$

Defining $x_0=1$ and stacking the parameters and features into vectors:

$\qquad\qquad\theta=\begin{bmatrix}\theta_0\\ \theta_1\\ \theta_2\\ \vdots\\ \theta_n \end{bmatrix}\in \mathbb{R}^{n+1}\quad,\qquad\qquad x=\begin{bmatrix}x_0\\ x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}\in \mathbb{R}^{n+1}$

the hypothesis becomes an inner product:

$h_\theta(x)=\theta_0x_0+\theta_1x_1+\theta_2x_2+\dots+\theta_nx_n=\theta^Tx$
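The vectorized form above can be sketched in Python with NumPy; the parameter and feature values here are made-up examples, not from the course:

```python
import numpy as np

# One training example x with x_0 = 1 prepended, and a parameter vector theta
# (values are illustrative only).
theta = np.array([1.0, 2.0, 3.0])   # theta_0, theta_1, theta_2
x = np.array([1.0, 4.0, 5.0])       # x_0 = 1, x_1, x_2

# h_theta(x) = theta^T x, computed as a single dot product.
h = theta @ x
print(h)  # 1*1 + 2*4 + 3*5 = 24.0
```

The dot product replaces the explicit sum over features, which is both shorter and faster than a Python loop.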

## 2. Feature Processing

##### 2.2 Mean Normalization

Subtract the feature's mean and divide by its range, or alternatively by its standard deviation:

$x_i=\dfrac{x_i-\mu_i}{\max-\min}\qquad\text{or}\qquad x_i=\dfrac{x_i-\mu_i}{\sigma_i}$
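Both scalings can be sketched in a few lines of NumPy; the feature values are invented for illustration:

```python
import numpy as np

# One feature column (illustrative values).
x = np.array([89.0, 72.0, 94.0, 69.0])

# Mean normalization with the range (max - min) as the scale.
x_range_scaled = (x - x.mean()) / (x.max() - x.min())

# Alternative: scale by the standard deviation instead.
x_std_scaled = (x - x.mean()) / x.std()

print(x_range_scaled)  # [ 0.32 -0.36  0.52 -0.48]
```

Either choice centers the feature around zero so that all features occupy comparable ranges, which speeds up gradient descent.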

## 4. Feature Selection and Polynomial Regression

Predicting a house price from two features, frontage and depth:

$h_\theta(x)=\theta_0+\theta_1\times frontage+\theta_2\times depth$

Since the two features are really proxies for area, a single derived feature $x = frontage\times depth$ often works better:

$h_\theta(x)=\theta_0+\theta_1x$
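Building the derived feature, and polynomial features more generally, can be sketched as follows; the lot dimensions are made up:

```python
import numpy as np

frontage = np.array([10.0, 15.0, 20.0])
depth = np.array([30.0, 40.0, 25.0])

# Replace two correlated features with one derived feature: area.
area = frontage * depth

# Polynomial regression reuses the same linear machinery on powers of a
# feature, e.g. columns x, x^2, x^3 (feature scaling then matters a lot,
# since the ranges of the powers differ enormously).
X_poly = np.column_stack([area, area**2, area**3])
print(X_poly.shape)  # (3, 3)
```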

The parameters can also be computed in closed form with the normal equation, $\theta=(X^TX)^{-1}X^Ty$. In Octave:

```matlab
% x is the design matrix, y the vector of targets
theta = pinv(x'*x)*x'*y;
```
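The same closed-form solution can be sketched with NumPy, where `np.linalg.pinv` plays the role of Octave's `pinv`; the tiny dataset is invented so the answer is known exactly:

```python
import numpy as np

# Design matrix with a column of ones for the intercept (illustrative data
# satisfying y = 1 + x exactly).
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

# Normal equation: theta = (X^T X)^{-1} X^T y, via the pseudoinverse.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)  # approximately [1.0, 1.0]
```

Using the pseudoinverse rather than a plain inverse keeps the computation well-behaved even when $X^TX$ is singular, e.g. when features are linearly dependent.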

