ML Lecture 1: Regression - Case Study
These notes come with a companion Jupyter Notebook walkthrough covering single-variable and multivariate linear regression implemented with basic TensorFlow APIs, improvements to the gradient-descent training process, and linear-regression implementations using higher-level libraries such as sklearn and Keras. Feel free to try it out after reading the notes!
If you find this series helpful, please consider giving the corresponding GitHub project a star. Thank you very much! Your support keeps me writing!
- Application Examples of Regression
In reality: stock market forecasting, self-driving cars, and recommendation systems.
In Pokémon: estimating a Pokémon's Combat Power (CP) after evolution.
Steps to do Regression
Step 1: Model
Define a model (a set of functions) that maps x to y. Different parameter values give different functions, and we need training data to tell us which parameter values are right.
- Linear Model
As long as the relationship between y and x can be written as

$$y = b + \sum_i w_i x_i$$

we call the model a linear model. Here $x_i$ is an attribute of the input $x$ (also known as a feature), $w_i$ is called a weight, and $b$ is called the bias.
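The linear model above can be sketched in a few lines of NumPy. The weights, bias, and feature values here are made-up numbers purely for illustration:

```python
import numpy as np

def linear_model(x, w, b):
    """A linear model: y = b + sum_i(w_i * x_i).

    x: feature vector, w: weight vector, b: scalar bias.
    """
    return b + np.dot(w, x)

# Hypothetical parameters and one input with two features
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([3.0, 4.0])

print(linear_model(x, w, b))  # 0.5 + 2*3 + (-1)*4 = 2.5
```

Each choice of `(w, b)` picks one concrete function out of the set; training is about finding the best such choice.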
Step 2: Goodness of Function
In the task of predicting Pokémon CP, we need to collect lots of data consisting of the CP before and after a Pokémon evolves.
$x^1$ stands for the CP before evolution, i.e. the original CP (the superscript "1" marks this as training example number 1). $\hat{y}^1$ stands for the CP after evolution (the hat "^" means this is the true value for that input).
Assume we have 10 training examples:
Every blue point in the plot represents one example.
Loss Function L
The loss function is a "function of functions": its input is a function, and its output is how bad the given function is.
$$L(f) = L(w, b) = \sum_{n=1}^{10}\left(\hat{y}^n - \left(b + w \cdot x^n_{cp}\right)\right)^2$$
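The sum-of-squared-errors loss can be computed directly. The ten $(x_{cp}, \hat{y})$ pairs below are invented for illustration and are not the actual Pokémon data from the lecture:

```python
import numpy as np

# Hypothetical training data: CP before evolution and true CP after
x_cp  = np.array([10., 20., 30., 40., 50., 60., 70., 80., 90., 100.])
y_hat = np.array([25., 48., 70., 95., 118., 140., 165., 188., 210., 235.])

def loss(w, b):
    """Sum of squared errors of the model y = b + w * x_cp over all examples."""
    return np.sum((y_hat - (b + w * x_cp)) ** 2)

print(loss(2.0, 5.0))   # loss for one candidate (w, b)
print(loss(2.35, 2.0))  # a better-fitting candidate gives a smaller loss
```

Plugging in different `(w, b)` pairs shows how the loss ranks candidate functions: the smaller the loss, the better that function fits the training data.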