Normalization vs. Regularization in Machine Learning and Deep Learning

Normalization adjusts the data; regularization adjusts the prediction function.

It is well known that normalizing the input data makes training faster. If your features sit on very different scales (especially when some range over orders of magnitude more than others), you likely want to normalize the data: transform each column so that all columns share the same (or comparable) basic statistics, such as mean and standard deviation. This keeps the fitting parameters on a scale the computer can handle without a damaging loss of numerical accuracy.
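
As a minimal sketch of one common approach, z-score normalization in NumPy (the two features and their values here are made up for illustration):

```python
import numpy as np

# Two features on very different scales, e.g. age (tens) and income (tens of thousands).
X = np.array([
    [25.0,  38_000.0],
    [47.0, 112_000.0],
    [31.0,  54_000.0],
    [59.0, 260_000.0],
])

# Z-score normalization: shift each column to mean 0, rescale to standard deviation 1.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_norm = (X - mean) / std

print(X_norm.mean(axis=0))  # ~[0, 0]
print(X_norm.std(axis=0))   # ~[1, 1]
```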

One goal of model training is to identify the signal (the important features) and ignore the noise (random variation unrelated to the quantity being predicted). If you give your model free rein to minimize the error on the training data, it can overfit: the model insists on reproducing the data set exactly, random variations included.
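
To make this concrete, here is a small sketch (the sine-plus-noise data and the polynomial degrees are illustrative assumptions, not from the original article): a high-degree polynomial nearly memorizes the noisy training points, while its error on held-out points from the true curve blows up.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Smooth underlying signal plus random noise.
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=x_train.shape)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free ground truth

for degree in (3, 15):
    p = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((p(x_train) - y_train) ** 2)
    test_mse = np.mean((p(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-15 fit drives the training error toward zero by chasing the noise; the degree-3 fit tolerates a larger training error but generalizes better.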

Regularization imposes some control on this by rewarding simpler fitting functions over complex ones. For instance, it can encode that a simple log function with an RMS error of x is preferable to a 15th-degree polynomial with an error of x/2. In practice this is usually done by adding a penalty on model complexity, such as an L2 penalty on the weights, to the training loss, so the optimizer trades goodness of fit against simplicity. Tuning the trade-off is up to the model developer: if you know your data are reasonably smooth in reality, you can inspect the fitted functions and their errors and choose your own balance.
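
A minimal sketch of that L2 (ridge) penalty, assuming the same illustrative setup as above (a degree-15 polynomial fit to noisy sine data, with the penalty weight lambda swept by hand): as lambda grows, the weights shrink toward a simpler function at the cost of a slightly larger training error.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)

# Degree-15 polynomial design matrix: columns are x^0 .. x^15.
X = np.vander(x, N=16, increasing=True)

def ridge_fit(X, y, lam):
    """Minimize ||X w - y||^2 + lam * ||w||^2 via an augmented least-squares system."""
    n = X.shape[1]
    A = np.vstack([X, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([y, np.zeros(n)])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

for lam in (0.0, 1e-4, 1e-1):
    w = ridge_fit(X, y, lam)
    train_mse = np.mean((X @ w - y) ** 2)
    print(f"lambda={lam:g}: train MSE {train_mse:.4f}, weight norm {np.linalg.norm(w):.1f}")
```

The weight norm is a crude proxy for complexity here: lambda = 0 recovers the unpenalized fit with large, wiggly coefficients, while larger lambda buys a smoother function for a modest increase in training error.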
