TensorFlow BN in Keras



About the Keras modifications:

https://www.dlology.com/blog/one-simple-trick-to-train-keras-model-faster-with-batch-normalization/

1. Benefits of BN

1.1 Faster convergence

1.2 Allows a larger learning rate

1.3 Less strict initialization requirements

1.4 May give better results
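The benefits above all stem from the same mechanism, which a minimal sketch can illustrate (the input data here is synthetic, for illustration only): BatchNormalization standardizes each feature across the batch, so activations keep roughly zero mean and unit variance regardless of the scale the previous layer or initializer produced.

```python
import numpy as np
import tensorflow as tf

# Synthetic activations with a deliberately bad scale (mean 5, std 3),
# as a poor initialization might produce.
x = np.random.normal(loc=5.0, scale=3.0, size=(256, 10)).astype("float32")

bn = tf.keras.layers.BatchNormalization()
y = bn(x, training=True)  # training=True -> normalize with batch statistics

m = float(tf.reduce_mean(y))
s = float(tf.math.reduce_std(y))
print(m, s)  # mean close to 0, std close to 1
```

Because the layer restores a well-scaled distribution at every step, gradients stay in a reasonable range, which is what permits the larger learning rates and looser initialization mentioned above.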

2. BN modification points in Keras

2.1 Fully connected layers
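For fully connected layers, the change described in the linked post is to insert BatchNormalization between the Dense layer's affine transform and its activation. A minimal sketch, with illustrative layer sizes (not taken from the original post):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(784,)),
    # use_bias=False: BN's learned beta offset makes the bias redundant
    layers.Dense(256, use_bias=False),
    layers.BatchNormalization(),
    layers.Activation("relu"),
    layers.Dense(10, activation="softmax"),
])
```

Splitting `Dense(..., activation="relu")` into `Dense` + `Activation` is what creates the slot for the BN layer in between.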

2.2 Convolutional layers
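For convolutional layers the pattern is the same: Conv2D without bias, then BatchNormalization (which keeps one set of statistics per channel, computed over the batch and both spatial dimensions), then the activation. A minimal sketch with illustrative shapes:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), padding="same", use_bias=False),
    layers.BatchNormalization(),   # per-channel statistics over batch + H + W
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
```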


3. Full code

4. Difference in training results

train without BN (figure below)

train with BN (figure below)

5. Full code:

Below is a code example of BN-Inception, for reference:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, AveragePooling2D,
                                     Dense, Flatten, Input, concatenate,
                                     Dropout, BatchNormalization)


def inception_module(x, filters):
    # An Inception block: parallel 1x1, 3x3, and 5x5 convolution branches
    # plus a pooled 1x1 branch, concatenated along the channel axis.
    f1, f3r, f3, f5r, f5, fpp = filters
    conv1 = Conv2D(f1, (1, 1), padding='same', activation='relu')(x)
    conv3r = Conv2D(f3r, (1, 1), padding='same', activation='relu')(x)
    conv3 = Conv2D(f3, (3, 3), padding='same', activation='relu')(conv3r)
    conv5r = Conv2D(f5r, (1, 1), padding='same', activation='relu')(x)
    conv5 = Conv2D(f5, (5, 5), padding='same', activation='relu')(conv5r)
    pool = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x)
    convpp = Conv2D(fpp, (1, 1), padding='same', activation='relu')(pool)
    output = concatenate([conv1, conv3, conv5, convpp], axis=-1)
    return output


def BN_Inception():
    input_layer = Input(shape=(224, 224, 3))
    # Stem: strided convolution and pooling, with BatchNormalization inserted
    x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(input_layer)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = BatchNormalization()(x)
    x = Conv2D(64, (1, 1), padding='same', activation='relu')(x)
    x = Conv2D(192, (3, 3), padding='same', activation='relu')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    # Stacked Inception modules with pooling between stages
    x = inception_module(x, [64, 96, 128, 16, 32, 32])
    x = inception_module(x, [128, 128, 192, 32, 96, 64])
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = inception_module(x, [192, 96, 208, 16, 48, 64])
    x = inception_module(x, [160, 112, 224, 24, 64, 64])
    x = inception_module(x, [128, 128, 256, 24, 64, 64])
    x = inception_module(x, [112, 144, 288, 32, 64, 64])
    x = inception_module(x, [256, 160, 320, 32, 128, 128])
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = inception_module(x, [256, 160, 320, 32, 128, 128])
    x = inception_module(x, [384, 192, 384, 48, 128, 128])
    # Classifier head
    x = AveragePooling2D((7, 7))(x)
    x = Flatten()(x)
    x = Dropout(0.4)(x)
    output_layer = Dense(1000, activation='softmax')(x)
    model = tf.keras.Model(inputs=input_layer, outputs=output_layer)
    return model
```

The code above implements the BN-Inception network structure, containing multiple Inception modules and batch-normalization layers. The inception_module() function builds a single Inception module, while BN_Inception() defines the whole network, including the input layer, the stacked Inception modules, and the output layer.
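As a sanity check on the first inception_module call above: with the filter list [64, 96, 128, 16, 32, 32], the four branch outputs concatenate to 64 + 128 + 32 + 32 = 256 channels, while 'same' padding preserves the spatial size. The standalone sketch below spells out those four branches on an illustrative 28x28x192 input:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, concatenate

x = tf.zeros((1, 28, 28, 192))  # illustrative input tensor

# The four branches of inception_module(x, [64, 96, 128, 16, 32, 32]):
conv1 = Conv2D(64, (1, 1), padding='same')(x)
conv3 = Conv2D(128, (3, 3), padding='same')(Conv2D(96, (1, 1), padding='same')(x))
conv5 = Conv2D(32, (5, 5), padding='same')(Conv2D(16, (1, 1), padding='same')(x))
pooled = Conv2D(32, (1, 1), padding='same')(
    MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x))

out = concatenate([conv1, conv3, conv5, pooled], axis=-1)
print(out.shape)  # (1, 28, 28, 256): channels are 64 + 128 + 32 + 32
```

Note that only the f3 and f5 entries (the 3x3 and 5x5 branch widths) reach the output; the f3r and f5r entries are internal 1x1 "reduction" widths that shrink the channel count before the expensive convolutions.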