VGG in Practice

Convolutional layer parameters:
conv3 denotes a convolution with a $3\times3$ kernel, and conv3-64 means the convolution outputs 64 channels. All convolutions use stride $=1$ and padding $=$ 'same' (the output size equals the input size), so a convolution never changes the spatial size of the feature map, only the number of channels.
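As a sanity check: with 'same' padding the output size is $\lceil H/\text{stride} \rceil$ regardless of kernel size, so stride 1 leaves the spatial size unchanged. A minimal sketch of the formula in plain Python (not the Keras implementation itself):

```python
import math

def conv_out_size(size, stride):
    # 'same' padding: output = ceil(input / stride), independent of kernel size
    return math.ceil(size / stride)

# 32x32 CIFAR-10 input, 3x3 kernel, stride 1: spatial size is preserved
print(conv_out_size(32, 1))  # 32
```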

Pooling layer parameters:
Every max-pooling layer uses a $2\times2$ window with stride $=2$, which halves the spatial size.
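Since each 2×2 / stride-2 max-pool halves the feature map, the five pooling stages of the VGG-16 layout take a 32×32 CIFAR-10 image down to 1×1 before the Flatten layer. A quick trace in plain Python:

```python
size = 32
sizes = [size]
for _ in range(5):   # five max-pool stages in the VGG-16 layout
    size //= 2       # each 2x2 pool with stride 2 halves the spatial size
    sizes.append(size)
print(sizes)  # [32, 16, 8, 4, 2, 1]
```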

import tensorflow as tf
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import datasets, layers, optimizers, models, regularizers
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

def normalize(x_train, x_test):
    x_train = x_train / 255.
    x_test = x_test / 255.

    mean = np.mean(x_train, axis=(0, 1, 2, 3))
    std = np.std(x_train, axis=(0, 1, 2, 3))
    print('mean:', mean, 'std', std)
    x_train = (x_train -  mean) / (std + 1e-7)
    x_test = (x_test- mean) / (std + 1e-7)
    return x_train, x_test
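Note that the mean and standard deviation are computed on the training set only and then reused for the test set, so the test data never leaks its own statistics into preprocessing. A quick check of this normalization logic on toy data (assumes only NumPy; the array shapes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 255, size=(100, 4, 4, 3))
x_test = rng.uniform(0, 255, size=(20, 4, 4, 3))

x_train, x_test = x_train / 255., x_test / 255.
mean = np.mean(x_train)              # reducing over all axes gives a scalar
std = np.std(x_train)
x_train = (x_train - mean) / (std + 1e-7)
x_test = (x_test - mean) / (std + 1e-7)  # test set uses *training* statistics

print(round(float(x_train.mean()), 6))  # ~0.0
print(round(float(x_train.std()), 6))   # ~1.0
```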

def preprocess(x, y):
    x = tf.cast(x, tf.float32)
    y = tf.cast(y, tf.int32)
    y = tf.squeeze(y, axis=1)
    y = tf.one_hot(y, depth=10)
    return x, y
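`tf.one_hot` with `depth=10` turns each integer label into a 10-dimensional indicator vector, which is the target format `CategoricalCrossentropy` expects. The equivalent in plain Python (a hypothetical helper, not the TF call):

```python
def one_hot(label, depth=10):
    # indicator vector: 1.0 at the label index, 0.0 elsewhere
    vec = [0.0] * depth
    vec[label] = 1.0
    return vec

print(one_hot(3))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```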

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = normalize(x_train, x_test)

train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_db = train_db.shuffle(50000).batch(128).map(preprocess)

test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_db = test_db.shuffle(10000).batch(128).map(preprocess)

num_classes = 10 # CIFAR-10; for the ImageNet dataset this would be 1000

model = models.Sequential()

model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(512, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu')) # 4096 in the original VGG-16
model.add(layers.Dense(128, activation='relu')) # 4096 in the original VGG-16
model.add(layers.Dense(num_classes, activation='softmax')) # 1000 in the original VGG-16

model.build(input_shape=(None, 32, 32, 3))
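A conv layer's parameter count is kernel_h · kernel_w · in_channels · out_channels + out_channels (weights plus one bias per output channel); e.g. the first 3×3 conv from 3 to 64 channels has 1792 parameters, which you can verify against `model.summary()`. A quick sketch in plain Python:

```python
def conv_params(k, c_in, c_out):
    # k*k kernel per (input, output) channel pair, plus one bias per output channel
    return k * k * c_in * c_out + c_out

print(conv_params(3, 3, 64))   # 1792  (first conv layer)
print(conv_params(3, 64, 64))  # 36928 (second conv layer)
```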
model.compile(optimizer=keras.optimizers.Adam(0.0001),
              # the last layer already applies softmax, so the loss must
              # receive probabilities, not logits (from_logits must be False)
              loss=keras.losses.CategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
history = model.fit(train_db, epochs=50)

plt.plot(history.history['loss'])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.show()

model.evaluate(test_db)

Trained this way, the final accuracy is only about 0.13, which is clearly unacceptable. Adding batch normalization and regularization to every layer, as in the code below, brings the accuracy to about 0.8, which is acceptable, and there is still room for further tuning.
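The `kernel_regularizer=regularizers.l2(weight_decay)` used below adds weight_decay · Σw² to the loss for each layer; note that with weight_decay = 0.000 this penalty vanishes, so only BatchNormalization and Dropout are actually active. The penalty itself is simple to state in plain Python (illustrative only, with made-up weights):

```python
def l2_penalty(weights, weight_decay):
    # Keras' l2 regularizer adds weight_decay * sum(w**2) to the loss
    return weight_decay * sum(w * w for w in weights)

w = [0.5, -0.3, 0.2]
print(round(l2_penalty(w, 5e-4), 6))  # 0.00019
print(l2_penalty(w, 0.0))             # 0.0 -> weight_decay = 0.000 disables it
```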

weight_decay = 0.000  # l2 coefficient; 0 disables the weight-decay penalty
model.add(layers.Conv2D(64, (3, 3),
          padding='same',
          kernel_regularizer=regularizers.l2(weight_decay)))
model.add(layers.Activation('relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.3)) # add dropout after the first convolution of each block