41. ResNet18

This post shows how to implement a ResNet18 network in TensorFlow: the CIFAR10 dataset is preprocessed (including normalization), and the construction of the ResnetBlock and ResNet18 models is walked through in detail. After training, the model reaches a good accuracy on the validation set, demonstrating the effectiveness and practicality of ResNet for image classification.

1. Data Preprocessing

(1) Data Preview

The training and test sets together contain 60,000 samples. Each sample is a 32×32×3 RGB image with pixel values in 0-255, and the labels fall into 10 classes.

from tensorflow.keras.datasets import cifar10
from collections import Counter
import numpy as np
(x_train,y_train),(x_test,y_test)=cifar10.load_data()
print(x_train.shape,x_test.shape)
print(y_train.shape)
print('Max:%d,Min:%d'%(x_train.max(),x_train.min()))
print(Counter(np.squeeze(y_train)))
Output:
(50000, 32, 32, 3) (10000, 32, 32, 3)
(50000, 1)
Max:255,Min:0
Counter({6: 5000, 9: 5000, 4: 5000, 1: 5000, 2: 5000, 7: 5000, 8: 5000, 3: 5000, 5: 5000, 0: 5000})
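Incidentally, the `np.squeeze(y_train)` call above matters: `y_train` has shape `(50000, 1)`, and iterating a 2-D array yields row arrays, which are unhashable and cannot be tallied by `Counter`. A minimal sketch with made-up toy labels:

```python
import numpy as np
from collections import Counter

# Hypothetical toy labels shaped like y_train: a column vector (N, 1)
y = np.array([[3], [1], [3], [0]])

# Squeezing to 1-D gives scalar labels that Counter can hash and count
counts = Counter(np.squeeze(y))
print(counts[3])  # 2
```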

(2) Normalization

x_train,x_test=x_train/255.,x_test/255.
print('Max:%f,Min:%f'%(x_train.max(),x_train.min()))
Output:
Max:1.000000,Min:0.000000
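To double-check the scaling, here is a tiny standalone sketch (the three pixel values are made up for illustration) showing that dividing by `255.` maps uint8 pixels onto `[0.0, 1.0]`:

```python
import numpy as np

# Made-up pixel values standing in for x_train entries (uint8, 0-255)
pixels = np.array([0, 128, 255], dtype=np.uint8)

# Dividing by a float both casts to float64 and rescales to [0, 1]
scaled = pixels / 255.

print(scaled.min(), scaled.max())  # 0.0 1.0
```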

2. Building and Training the Model

(1) Building the Residual Block

from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D,BatchNormalization,Activation
class ResnetBlock(Model):
    def __init__(self,filters,strides=1,residual_path=False):
        super(ResnetBlock,self).__init__()
        self.filters = filters
        self.strides = strides
        self.residual_path = residual_path
        
        self.c1 = Conv2D(filters,kernel_size=(3,3),strides=strides,padding='same',use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        
        self.c2 = Conv2D(filters,kernel_size=(3,3),strides=1,padding='same',use_bias=False)
        self.b2 = BatchNormalization()
        
        #if the shapes differ, downsample the input on the shortcut path so it matches the main path
        if self.residual_path:
            self.down_c1 = Conv2D(filters,kernel_size=(1,1),strides=strides,padding='same',use_bias=False)
            self.down_b1 = BatchNormalization()
        
        self.a2 = Activation('relu')
    def call(self,inputs):
        residual = inputs#keep the original input for the shortcut
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        
        x = self.c2(x)
        y = self.b2(x)
        if self.residual_path:
            residual = self.down_c1(inputs)#1x1 convolution first, then BatchNormalization
            residual = self.down_b1(residual)
        out = self.a2(y+residual)
        return out
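The core idea of the block is `out = ReLU(F(x) + x)`: the identity shortcut requires `F(x)` to keep the input's shape (otherwise the 1×1 `down_c1` path adjusts it). A framework-free sketch with a stand-in linear `F` (the random weights are placeholders for illustration, not the block's real convolutions or BatchNorm):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.)

def residual_unit(x, w1, w2):
    f = relu(x @ w1) @ w2   # main path F(x): two stand-in linear layers
    return relu(f + x)      # identity shortcut, then the final activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # a batch of 4 feature vectors
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))

y = residual_unit(x, w1, w2)
print(y.shape)  # (4, 8): the addition forces F(x) to match x's shape
```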

(2) Building the ResNet18 Network

import tensorflow as tf
from tensorflow.keras.layers import Dense,GlobalAveragePooling2D
class ResNet18(Model):
    def __init__(self,block_list,initial_filters=64):# block_list gives the number of residual blocks in each stage
        super(ResNet18,self).__init__()
        self.num_block = len(block_list)#the length of block_list is the number of stages
        self.block_list = block_list
        self.out_filters = initial_filters#initial number of filters, 64 by default
        self.c1 = Conv2D(filters=self.out_filters,kernel_size=(3,3),strides=1,padding='same',use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.blocks = tf.keras.models.Sequential()
        #build the residual blocks
        for block_id in range(len(block_list)):#stage index
            for layer_id in range(block_list[block_id]):#block index within the stage
                if block_id!=0 and layer_id==0:#every stage except the first downsamples at its first block
                    block = ResnetBlock(self.out_filters,strides=2,residual_path=True)
                else:
                    block = ResnetBlock(self.out_filters,residual_path=False)
                self.blocks.add(block)# add the finished block to the network
            self.out_filters*=2# the next stage uses twice as many filters as this one
        self.p1 = GlobalAveragePooling2D()#GAP layer: average over all spatial positions, keeping only the channel dimension
        self.f1 = Dense(10,activation='softmax',kernel_regularizer=tf.keras.regularizers.l2())
    def call(self,inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y
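A quick sanity check on the name: with `block_list=[2,2,2,2]` the network has 18 weight layers. Counting them (each `ResnetBlock` contributes two 3×3 convolutions; the 1×1 shortcut convolutions are conventionally not counted):

```python
block_list = [2, 2, 2, 2]               # residual blocks per stage
convs_in_blocks = 2 * sum(block_list)   # two 3x3 convs per ResnetBlock -> 16
total_layers = 1 + convs_in_blocks + 1  # initial conv + block convs + final Dense
print(total_layers)  # 18
```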

(3) Training

import tensorflow as tf
import os
model = ResNet18([2,2,2,2])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])
checkpoint_save_path = "./ResnetCheckpoint/ResNet18.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)
history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])

Output:

Epoch 1/5
1563/1563 [==============================] - 149s 95ms/step - loss: 1.3361 - sparse_categorical_accuracy: 0.5597 - val_loss: 0.9588 - val_sparse_categorical_accuracy: 0.6894
Epoch 2/5
1563/1563 [==============================] - 148s 95ms/step - loss: 0.7896 - sparse_categorical_accuracy: 0.7382 - val_loss: 0.7894 - val_sparse_categorical_accuracy: 0.7345
Epoch 3/5
1563/1563 [==============================] - 147s 94ms/step - loss: 0.6104 - sparse_categorical_accuracy: 0.8006 - val_loss: 0.6853 - val_sparse_categorical_accuracy: 0.7775
Epoch 4/5
1563/1563 [==============================] - 147s 94ms/step - loss: 0.4816 - sparse_categorical_accuracy: 0.8453 - val_loss: 0.7749 - val_sparse_categorical_accuracy: 0.7463
Epoch 5/5
1563/1563 [==============================] - 147s 94ms/step - loss: 0.3743 - sparse_categorical_accuracy: 0.8837 - val_loss: 0.6378 - val_sparse_categorical_accuracy: 0.8065
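As the log shows, `save_best_only=True` keeps a checkpoint only when the monitored metric (validation loss by default) improves. A minimal sketch of that selection logic, using the five `val_loss` values above:

```python
# val_loss per epoch, copied from the training log above
val_losses = [0.9588, 0.7894, 0.6853, 0.7749, 0.6378]

best = float('inf')
saved_epochs = []
for epoch, val_loss in enumerate(val_losses, start=1):
    if val_loss < best:          # improvement: the real callback writes .ckpt here
        best = val_loss
        saved_epochs.append(epoch)

print(saved_epochs)  # [1, 2, 3, 5] -- epoch 4 regressed and is skipped
```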