Simple CNN Optimization, Using VGG16 as an Example

Common Empirical Optimization Methods

1. Lower the learning rate (or decay it over iterations)
2. Adjust the parameter initialization method
3. Adjust the normalization of the input data
4. Modify the loss function
5. Add regularization
6. Use BN/GN layers (normalization of intermediate-layer activations)
7. Use dropout
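For item 1, learning-rate decay can be sketched in plain Python. The schedule form and constants below are illustrative (not from the post); this is the exponential-decay rule where the rate shrinks by a fixed factor every fixed number of steps:

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    """Exponential learning-rate decay: the rate is multiplied by
    `decay_rate` once every `decay_steps` training steps."""
    return initial_lr * decay_rate ** (step / decay_steps)

# At step 0 the rate is unchanged; after one full decay period it
# has shrunk by exactly one factor of decay_rate.
lr_start = exponential_decay(0.1, 0.96, 1000, 0)       # 0.1
lr_later = exponential_decay(0.1, 0.96, 1000, 1000)    # 0.1 * 0.96
```

In Keras the same schedule is available as `keras.optimizers.schedules.ExponentialDecay`, which can be passed directly as the optimizer's learning rate.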

VGG16 Network Structure

The following code adjusts only the first Conv2D layer, adding L2 regularization, a BN layer, and a Dropout layer; the subsequent Conv2D layers can be adjusted in the same way.

```python
from tensorflow import keras

num_classes = 10
weight_decay = 5e-4  # 0.000 would disable the L2 penalty; use a small nonzero value

model = keras.models.Sequential()

# Optimization: add L2 regularization
# (input_shape assumes 32x32 RGB images, e.g. CIFAR-10, to match num_classes = 10)
model.add(keras.layers.Conv2D(64, (3, 3), padding='same', input_shape=(32, 32, 3),
                              kernel_regularizer=keras.regularizers.l2(weight_decay)))
model.add(keras.layers.Activation('relu'))
# Optimization: add a BN layer and Dropout
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dropout(0.3))

model.add(keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(keras.layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256, activation='relu'))  # 4096 in the original VGG16
model.add(keras.layers.Dense(128, activation='relu'))  # 4096 in the original VGG16
model.add(keras.layers.Dense(num_classes, activation='softmax'))  # 1000 in the original VGG16
```
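The `kernel_regularizer` adds an extra penalty term to the training loss. A minimal pure-Python sketch of what `keras.regularizers.l2` computes (the factor times the sum of squared weights; note that with a factor of 0 the penalty vanishes, so a small nonzero value such as 5e-4 is needed for the regularization to take effect):

```python
def l2_penalty(weights, weight_decay):
    """L2 penalty in the form used by keras.regularizers.l2:
    weight_decay * sum of squared weights."""
    return weight_decay * sum(w * w for w in weights)

# The total loss is the data loss plus the per-layer penalties;
# larger weights are penalized quadratically.
penalty = l2_penalty([1.0, -2.0, 0.5], 0.0005)  # 0.0005 * 5.25
```

During training this pushes weights toward zero, which is why a larger `weight_decay` gives stronger regularization.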
Here is a reference implementation of an anchor refinement module built on a VGG16-style backbone (rewritten here with the Keras functional API, matching the rest of this post, in place of the deprecated `tf.layers` calls):

```python
from tensorflow import keras

def anchor_refinement_module(input_tensor, num_anchors=9):
    """
    Anchor Refinement Module
    :param input_tensor: input feature tensor
    :param num_anchors: number of anchors per spatial location
    :return: (classification tensor, regression tensor)
    """
    # Convolutional Layer 1
    x = keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu')(input_tensor)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    # Convolutional Layer 2
    x = keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    # Convolutional Layer 3
    x = keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    # Convolutional Layer 4
    x = keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    # 1x1 convolution for classification
    cls = keras.layers.Conv2D(num_anchors * 2, (1, 1), padding='same')(x)
    # 1x1 convolution for regression
    reg = keras.layers.Conv2D(num_anchors * 4, (1, 1), padding='same')(x)
    return cls, reg
```

The `anchor_refinement_module` function above takes an input tensor and an anchor count, and returns two output tensors: one for classification and one for regression. It builds on the VGG16 structure, adding four convolutional layers plus two 1x1 convolutional layers to produce the outputs. The classification output has `num_anchors * 2` channels, predicting a positive/negative label for each anchor; the regression output has `num_anchors * 4` channels, predicting coordinate offsets for each anchor.
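The post does not specify how the four regression channels per anchor are decoded into a box. A pure-Python sketch of the common Faster R-CNN-style interpretation (an assumption here, not stated above): the first two offsets shift the anchor center relative to its size, and the last two scale width and height exponentially.

```python
import math

def decode_box(anchor, offsets):
    """Decode (tx, ty, tw, th) offsets against an anchor given as
    (cx, cy, w, h): shift the center, scale the size exponentially."""
    cx, cy, w, h = anchor
    tx, ty, tw, th = offsets
    return (cx + tx * w, cy + ty * h, w * math.exp(tw), h * math.exp(th))

# Zero offsets return the anchor unchanged.
box = decode_box((10.0, 10.0, 4.0, 4.0), (0.0, 0.0, 0.0, 0.0))
```

The exponential on `tw`/`th` keeps predicted widths and heights positive regardless of the raw network output.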