Deep Learning 2.0 - 26. Regularization to Reduce Overfitting

This post covers regularization techniques used to reduce overfitting in deep learning, including one-by-one regularization and flexible regularization, with a hands-on application on the MNIST dataset.

Regularization

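Intuitively, a network that overfits often does so by growing very large weights to fit noise in the training set. Regularization counteracts this by adding a penalty on the weight magnitudes to the training objective. In the usual textbook formulation, the L2 (weight decay) and L1 variants are:

J(\theta) = \text{CE}(y, \hat{y}) + \lambda \sum_i \lVert \theta_i \rVert_2^2   (L2)

J(\theta) = \text{CE}(y, \hat{y}) + \lambda \sum_i \lVert \theta_i \rVert_1   (L1)

where \lambda controls the strength of the penalty. The MNIST example below uses the L2 form with \lambda = 0.0001.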

1. One-by-one regularization

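A minimal sketch of the one-by-one style, assuming the standard Keras kernel_regularizer API: the penalty is attached to each layer individually, and Keras folds the resulting terms into the loss during compile()/fit(). The layer widths and the 0.001 coefficient here are illustrative choices, not values from this post.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # each layer carries its own L2 penalty ("one-by-one")
    layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                 activation='relu', input_shape=(784,)),
    layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                 activation='relu'),
    layers.Dense(10)
])
# model.losses now holds the per-layer regularization terms;
# compile()/fit() add them to the objective automatically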

2. Flexible regularization

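The flexible style computes the penalty by hand inside the training loop, so you choose which variables to penalize and how strongly. Isolated from the full script below (same \lambda = 0.0001):

with tf.GradientTape() as tape:
    out = network(x)
    loss = tf.reduce_mean(
        tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))
    # tf.nn.l2_loss(w) = sum(w ** 2) / 2; accumulate over all trainable variables
    loss_reg = tf.add_n([tf.nn.l2_loss(w) for w in network.trainable_variables])
    loss = loss + 0.0001 * loss_reg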

Hands-on with the MNIST dataset

The full script: a five-layer MLP trained with Adam, with the L2 penalty (coefficient 0.0001) computed inside the GradientTape and added to the cross-entropy loss.

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics


def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)

    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

optimizer = optimizers.Adam(learning_rate=0.01)  # 'lr' is deprecated; use learning_rate

for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # [b] per-sample cross-entropy, averaged to a scalar
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))

        # regularization
        loss_regularization = []
        for p in network.trainable_variables:
            # tf.nn.l2_loss(p) computes sum(p ** 2) / 2, the L2 penalty for one parameter tensor
            loss_regularization.append(tf.nn.l2_loss(p))
        loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))

        loss = loss + 0.0001 * loss_regularization

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:
        print(step, 'loss:', float(loss), 'loss_regularization:', float(loss_regularization))

    # evaluate
    if step % 500 == 0:
        total, total_correct = 0., 0

        # use fresh loop variables so the training-loop step/x/y are not shadowed
        for x_val_b, y_val_b in ds_val:
            # [b, 28, 28] => [b, 784]
            x_val_b = tf.reshape(x_val_b, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x_val_b)
            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool tensor
            correct = tf.equal(pred, y_val_b)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x_val_b.shape[0]

        print(step, 'Evaluate Acc:', total_correct / total)

Output:

datasets: (60000, 28, 28) (60000,) 0 255
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                multiple                  200960    
_________________________________________________________________
dense_1 (Dense)              multiple                  32896     
_________________________________________________________________
dense_2 (Dense)              multiple                  8256      
_________________________________________________________________
dense_3 (Dense)              multiple                  2080      
_________________________________________________________________
dense_4 (Dense)              multiple                  330       
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 loss: 2.3507652282714844 loss_regularization: 350.3747863769531
78 Evaluate Acc: 0.2855
100 loss: 0.36240220069885254 loss_regularization: 589.3790893554688
200 loss: 0.5004981756210327 loss_regularization: 682.1327514648438
300 loss: 0.21314147114753723 loss_regularization: 775.0255737304688
400 loss: 0.27983567118644714 loss_regularization: 828.80859375
500 loss: 0.2916567325592041 loss_regularization: 878.8491821289062
78 Evaluate Acc: 0.9542
600 loss: 0.32055604457855225 loss_regularization: 904.9992065429688
700 loss: 0.1594236195087433 loss_regularization: 942.77978515625
800 loss: 0.221163809299469 loss_regularization: 960.769287109375
900 loss: 0.22233238816261292 loss_regularization: 963.0105590820312
1000 loss: 0.14087724685668945 loss_regularization: 990.5718994140625
78 Evaluate Acc: 0.9661
1100 loss: 0.24112433195114136 loss_regularization: 1028.871826171875
1200 loss: 0.3315228819847107 loss_regularization: 1044.771728515625
1300 loss: 0.25928544998168945 loss_regularization: 1086.576904296875
1400 loss: 0.22301778197288513 loss_regularization: 1110.1475830078125
1500 loss: 0.3217935562133789 loss_regularization: 1083.4852294921875
78 Evaluate Acc: 0.9599
1600 loss: 0.19745567440986633 loss_regularization: 1097.128662109375
1700 loss: 0.3913511633872986 loss_regularization: 1100.0284423828125
1800 loss: 0.24843886494636536 loss_regularization: 1143.0257568359375
1900 loss: 0.28508129715919495 loss_regularization: 1125.083251953125
2000 loss: 0.2747941315174103 loss_regularization: 1099.05908203125
78 Evaluate Acc: 0.9645
2100 loss: 0.18370205163955688 loss_regularization: 1083.613037109375
2200 loss: 0.24575147032737732 loss_regularization: 1096.5216064453125
2300 loss: 0.2671639323234558 loss_regularization: 1119.0755615234375
2400 loss: 0.17508020997047424 loss_regularization: 1075.147216796875
2500 loss: 0.20603394508361816 loss_regularization: 1099.5045166015625
78 Evaluate Acc: 0.9666
2600 loss: 0.20938491821289062 loss_regularization: 1063.99755859375
2700 loss: 0.33030807971954346 loss_regularization: 1058.947265625
2800 loss: 0.2951526343822479 loss_regularization: 1092.590087890625
2900 loss: 0.37690189480781555 loss_regularization: 1113.8955078125
3000 loss: 0.39653170108795166 loss_regularization: 1164.5491943359375
78 Evaluate Acc: 0.9688
3100 loss: 0.3081352710723877 loss_regularization: 1098.758056640625
3200 loss: 0.31366774439811707 loss_regularization: 1121.0487060546875
3300 loss: 0.3593229353427887 loss_regularization: 1124.6781005859375
3400 loss: 0.1733701378107071 loss_regularization: 1143.44140625
3500 loss: 0.2331463098526001 loss_regularization: 1137.5433349609375
78 Evaluate Acc: 0.967
3600 loss: 0.23532089591026306 loss_regularization: 1074.432861328125
3700 loss: 0.19450485706329346 loss_regularization: 1079.4312744140625
3800 loss: 0.15056748688220978 loss_regularization: 1108.001953125
3900 loss: 0.28273844718933105 loss_regularization: 1071.671142578125
4000 loss: 0.15014755725860596 loss_regularization: 1081.7843017578125
78 Evaluate Acc: 0.971
4100 loss: 0.1769871711730957 loss_regularization: 1120.951904296875
4200 loss: 0.21285438537597656 loss_regularization: 1044.5946044921875
4300 loss: 0.2390756756067276 loss_regularization: 1046.773681640625
4400 loss: 0.20340555906295776 loss_regularization: 1036.9803466796875
4500 loss: 0.1344645917415619 loss_regularization: 1021.6719970703125
78 Evaluate Acc: 0.9652
4600 loss: 0.23330935835838318 loss_regularization: 1026.7904052734375