Hung-yi Lee Machine Learning Lecture 1: Regression - Case Study

This post covers the first lecture of Hung-yi Lee's machine learning course, which dives into regression problems, particularly linear regression. Through a worked example, it walks through the steps from defining a model and measuring the goodness of a function to finding the best function, including the application of gradient descent. It also discusses model selection, adding more data, and regularization, emphasizing how to avoid overfitting and choose a suitable model.

ML Lecture 1: Regression - Case Study

These notes come with a companion Jupyter Notebook walkthrough, which implements single-variable and multivariate linear regression with the basic TensorFlow API, explains improvements to the gradient descent training process, and also includes linear regression implementations using higher-level libraries such as sklearn and Keras. Feel free to try it out after reading the notes!

If you find this series of posts helpful, please consider starring the corresponding GitHub project. Thank you very much! Your support is what keeps me writing!

  • Application Examples of Regression

In Reality: stock market forecasting, self-driving cars, and recommendation systems.

In Pokemon: estimating the Combat Power (CP) of a Pokemon after evolution.

Steps to do Regression

Step 1: Model

Define a model (a set of functions) that maps x to y. Different parameter values give different functions, and we need training data to tell us which parameter values are right.

  • Linear Model

As long as the relationship between y and x can be written as:
$$y = b + \sum_i w_i x_i$$
then we call this kind of model a linear model. Here, $x_i$ is an attribute of the input x (also known as a feature), $w_i$ is called a weight, and $b$ is called the bias.
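A minimal sketch of such a linear model in NumPy (the weights, bias, and feature values below are made-up for illustration):

```python
import numpy as np

def linear_model(x, w, b):
    """Linear model: y = b + sum_i w_i * x_i."""
    return b + np.dot(w, x)

# Hypothetical parameters and a single input with two features
w = np.array([2.0, 0.5])
b = 1.0
x = np.array([3.0, 4.0])

y = linear_model(x, w, b)  # 1.0 + 2.0*3.0 + 0.5*4.0 = 9.0
```

Each choice of (w, b) picks out one concrete function from the set; training is the search for the best such choice.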

Step 2: Goodness of Function

In the task of predicting Pokemon CP, we need to collect lots of data consisting of CP before and after a Pokemon evolves.

$x^1$ stands for the CP before evolution / original CP (the superscript '1' marks a single training example, example 1). $\hat{y}^1$ stands for the CP after evolution (the hat '^' means this is the true value for that input).

Assume we have 10 training examples:

Every blue point in the plot represents one training example.

Loss Function L

The loss function is a "function of functions". Its input is a function, and its output is how bad the given function is.

$$L(f) = L(w, b) = \sum_{n=1}^{10} \left(\hat{y}^n - (b + w \cdot x^n_{cp})\right)^2$$
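This sum-of-squared-errors loss is straightforward to compute directly. A sketch in NumPy, using made-up training data and a hypothetical candidate (w, b):

```python
import numpy as np

def loss(w, b, x_cp, y_hat):
    """Sum-of-squared-errors loss L(w, b) over all training examples."""
    predictions = b + w * x_cp          # model output for every example
    return np.sum((y_hat - predictions) ** 2)

# Made-up training data: original CP and true CP after evolution
x_cp  = np.array([10.0, 20.0, 30.0])
y_hat = np.array([25.0, 45.0, 65.0])

# The candidate function with w=2, b=5 fits these points exactly,
# so its loss is 0; worse candidates get a larger loss.
print(loss(2.0, 5.0, x_cp, y_hat))  # 0.0
print(loss(1.0, 0.0, x_cp, y_hat) > 0)  # True
```

A smaller loss means a better function, which is exactly what the next step (finding the best function) will minimize.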
