Learning: Regularization methods

1. Underfitting and overfitting

Using machine learning to fit curves can lead to underfitting and overfitting.

Both underfitting and overfitting can occur in regression as well as classification problems.
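As a concrete illustration (a minimal sketch with made-up data, not from the original post), fitting the same noisy points with a degree-1 and a degree-15 polynomial shows the two failure modes: the low-degree model underfits, while the high-degree model fits the training points almost perfectly but generalizes poorly.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Synthetic 1-D data: a sine curve plus noise (assumed example data).
rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 1, 20)).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.1, size=20)

x_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * x_test).ravel()

for degree in (1, 15):  # degree 1 underfits, degree 15 overfits
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x, y)
    train_err = mean_squared_error(y, model.predict(x))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")
```

A small training error paired with a much larger test error is the typical signature of overfitting; large errors on both sets indicate underfitting.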

2. How to solve this problem?

First: collect more training examples

Second: select features to include/exclude

Third: regularization

3. Regularization method

Keep the redundant parameters as small as possible.

This is done by adding the sum of squares of these parameters, scaled by a regularization coefficient λ, to the end of the cost function expression.
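For linear regression with a squared-error cost, one common textbook form of the regularized cost (the bias term θ₀ is usually left unpenalized) is:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
          + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2
```

The coefficient λ controls how strongly large parameter values are penalized, which is what the next two observations refer to.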

If λ is 0, i.e., no regularization is used, the fitted curve can still fluctuate wildly (overfit).

If λ is very large, the regularization term dominates and all parameters are driven toward 0; the model then reduces to roughly the bias term alone, a horizontal straight line, and it underfits.
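The sketch below illustrates both extremes using scikit-learn's Ridge estimator, whose alpha parameter plays the role of λ here; the synthetic data and the specific alpha values are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Same kind of synthetic data as in the earlier sketch (assumed example data).
rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 1, 20)).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.1, size=20)
x_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * x_test).ravel()

# alpha plays the role of lambda:
#   tiny alpha -> essentially no regularization, the degree-15 fit still fluctuates (overfits)
#   huge alpha -> all weights are pushed toward 0, the fit is nearly flat (underfits)
for alpha in (1e-6, 0.01, 1e6):
    model = make_pipeline(PolynomialFeatures(degree=15, include_bias=False),
                          StandardScaler(),
                          Ridge(alpha=alpha))
    model.fit(x, y)
    print(f"alpha={alpha:g}  "
          f"train MSE={mean_squared_error(y, model.predict(x)):.4f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(x_test)):.4f}")
```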

4. L1 and L2 regularization methods

L1 regularization is very similar to L2 regularization, so we discuss them together. Both methods add a penalty term to the loss function to prevent overfitting.

When a regularization term is added, the values of the model's parameters become smaller. Generally speaking, the smaller the parameter values, the simpler the model tends to be, which is why regularization helps prevent overfitting.

L2 regularization:

L2 regularization adds an L2 penalty term to the model's original loss function to obtain the objective that is minimized during training. The penalty term is the sum of squares of each element of the model's weight vector, multiplied by a positive constant.
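In symbols, with weight vector w, original loss L(w), and a positive constant λ (one common convention; some texts write the constant as λ/2 or divide by the sample size), the L2-regularized objective is:

```latex
J(w) = L(w) + \lambda \sum_{i} w_i^2 = L(w) + \lambda \lVert w \rVert_2^2
```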

L1 regularization:

L1 regularization follows the same pattern, but its penalty term is the sum of the absolute values of each element of the weight vector, multiplied by a positive constant.
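Correspondingly, the L1-regularized objective replaces the squares with absolute values:

```latex
J(w) = L(w) + \lambda \sum_{i} \lvert w_i \rvert = L(w) + \lambda \lVert w \rVert_1
```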
