Adding L1/L2 regularization in TensorFlow 2

L1/L2 regularization

From the docstring of class Regularizer in tensorflow_core.python.keras.regularizers:

Regularizers allow you to apply penalties on layer parameters or
layer activity during optimization. These penalties are summed into the
loss function that the network optimizes.

Regularization is often applied in hidden layers as a penalty term on the loss function. In other words, to constrain the feasible values of the weights w and thereby prevent overfitting, we add a constraint to the optimization problem: the L1 or L2 norm of w may not exceed a given value.

L2 regularization is used to prevent the model from overfitting; L1 regularization is used to produce sparse weight matrices.

tf.keras.regularizers ships several built-ins: the classes L1, L1L2, L2 and the base class Regularizer, plus the serialize helper function. Layers, in turn, expose three keyword arguments for the objects that can be regularized:

Regularization penalties are applied on a per-layer basis. The exact
API will depend on the layer, but many layers (e.g. Dense, Conv1D,
Conv2D and Conv3D) have a unified API.
These layers expose 3 keyword arguments:

  • kernel_regularizer: Regularizer to apply a penalty on the layer’s kernel
  • bias_regularizer: Regularizer to apply a penalty on the layer’s bias
  • activity_regularizer: Regularizer to apply a penalty on the layer’s output

The regularization penalty is applied on a per-layer basis; the exact API depends on the layer, but Dense, Conv1D, Conv2D and Conv3D share a unified API with these three arguments (a minimal sketch follows the list):

  • kernel_regularizer: regularizes the layer's weight matrix (layer.kernel)
  • bias_regularizer: regularizes the layer's bias vector (layer.bias)
  • activity_regularizer: regularizes the layer's output (its activations)
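A minimal sketch of all three arguments on a single Dense layer (the layer size and regularization factors here are arbitrary choices, not values from the post):

import tensorflow as tf

# Hypothetical example layer: units and factors are arbitrary.
layer = tf.keras.layers.Dense(
    units=64,
    kernel_regularizer=tf.keras.regularizers.l2(0.01),    # penalty on layer.kernel
    bias_regularizer=tf.keras.regularizers.l2(0.01),      # penalty on layer.bias
    activity_regularizer=tf.keras.regularizers.l1(0.01),  # penalty on the layer's output
)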

Setting the L1/L2 regularizers

The L1/L2 regularization formulas:

tf.keras.regularizers.l1(0.01)

$\ell_1\,\,penalty = \ell_1 \sum_{i=0}^{n} |x_i|$

tf.keras.regularizers.l2(0.01)

$\ell_2\,\,penalty = \ell_2 \sum_{i=0}^{n} x_i^2$

tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01)

$losses = L_1 + L_2 = l_1 \sum(|w_{ij}|) + l_2 \sum(w_{ij}^2)$
Arguments:
l1: Float; L1 regularization factor.
l2: Float; L2 regularization factor.
Returns:
An L1L2 Regularizer with the given regularization factors.
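Since a regularizer instance is callable, the formulas are easy to verify by hand; a quick check with arbitrarily chosen values:

import tensorflow as tf

reg = tf.keras.regularizers.l1(0.01)
w = tf.constant([[1.0, -2.0], [3.0, -4.0]])
# l1 penalty = 0.01 * (|1| + |-2| + |3| + |-4|) = 0.01 * 10 = 0.1
print(reg(w).numpy())  # ≈ 0.1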

Before:

conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

After:

conv1 = Conv2D(32, (3, 3), activation='relu', padding='same', kernel_regularizer=tf.keras.regularizers.l1(0.01))(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same', kernel_regularizer=tf.keras.regularizers.l1(0.01))(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

When a layer is constructed with its kernel_regularizer argument set to an L2 regularization function, TensorFlow adds the L2 penalty on that weight variable (the convolution kernel) to the tf.GraphKeys.REGULARIZATION_LOSSES collection (the TF1-style graph-mode mechanism).
When computing the loss, fetch that collection with tf.get_collection() and add its contents to the loss:

# Under TensorFlow 2 the collection API lives in tf.compat.v1:
l2_loss = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
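Under the TF2/Keras eager workflow, the same penalties are instead tracked on model.losses (and layer.losses), and model.fit() adds them to the training loss automatically. A minimal sketch of summing them in a custom training computation, assuming a model and loss_fn defined elsewhere:

import tensorflow as tf

def compute_total_loss(model, loss_fn, x, y):
    # Hypothetical helper: model and loss_fn are assumed to exist.
    y_pred = model(x, training=True)
    data_loss = loss_fn(y, y_pred)
    # model.losses holds one scalar tensor per regularization penalty.
    reg_loss = tf.add_n(model.losses) if model.losses else 0.0
    return data_loss + reg_loss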

