Week T9: Cat vs. Dog Classification 2

>- **🍨 This article is a study-log post for the [🔗365天深度学习训练营] (365-Day Deep Learning Training Camp)**
>- **🍖 Original author: [K同学啊]**


📌Week 9: Cat vs. Dog Classification - 2📌

  • Difficulty: consolidating the basics ⭐⭐
  • Language: Python 3, TensorFlow 2

🍺 Requirements:

  1. Find and fix the issues in the Week 8 program (this article provides the answer)

🍻 Stretch goals (optional):

  1. Try adding data augmentation to improve accuracy
  2. Which techniques can be used for data augmentation? (answered next week)

🔎 Exploration (fairly difficult)

  1. The code in this article is quite redundant; try to streamline it

🚀 My environment:

  • Language: Python 3.11.7
  • Editor: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.13.0

I. Preliminary Work

1. Set up the GPU

If you are running on a CPU, you can comment out this part of the code.

import tensorflow as tf

gpus=tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0],True)
    tf.config.set_visible_devices([gpus[0]],"GPU")
    
# Print GPU info to confirm the GPU is available
print(gpus)

2. 导入数据

import warnings
warnings.filterwarnings('ignore')

import pathlib
data_dir=r"D:\THE MNIST DATABASE\T8"   # raw string so the backslashes are kept literally
data_dir=pathlib.Path(data_dir)

image_count=len(list(data_dir.glob('*/*')))

print("Total number of images:",image_count)

Output:

Total number of images: 3400

II. Data Preprocessing

1. Load the data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

Load the training set:

train_ds=tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(224,224),
    batch_size=8
)

Output:

Found 3400 files belonging to 2 classes.
Using 2720 files for training.

Load the validation set:

val_ds=tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(224,224),
    batch_size=8
)

Output:

Found 3400 files belonging to 2 classes.
Using 680 files for validation.
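The 2720/680 split follows directly from applying validation_split=0.2 to 3400 files. A quick sanity check in plain Python (assuming, as in this run, the validation count is the floored fraction):

```python
total_images = 3400
validation_split = 0.2

val_count = int(total_images * validation_split)   # 680 files for validation
train_count = total_images - val_count             # 2720 files for training

print(train_count, val_count)
```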

Use the class_names attribute to print the dataset's labels; the labels correspond to the directory names in alphabetical order.

class_names=train_ds.class_names
print(class_names)

Output:

['cat', 'dog']

2. Check the data again

for image_batch,labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(8, 224, 224, 3)
(8,)
  • image_batch is a tensor of shape (8, 224, 224, 3): a batch of 8 images, each 224x224x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (8,); these labels correspond to the 8 images.

3. Configure the dataset

  • prefetch(): prefetches data to speed things up; see my previous two posts for detailed explanations.
  • cache(): caches the dataset in memory to speed things up.

AUTOTUNE=tf.data.AUTOTUNE

def preprocess_image(image,label):
    return (image/255.0,label)

# Normalize pixel values to [0, 1]
train_ds=train_ds.map(preprocess_image,num_parallel_calls=AUTOTUNE)
val_ds=val_ds.map(preprocess_image,num_parallel_calls=AUTOTUNE)

train_ds=train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds=val_ds.cache().prefetch(buffer_size=AUTOTUNE)

If you get AttributeError: module 'tensorflow._api.v2.data' has no attribute 'AUTOTUNE', replace AUTOTUNE = tf.data.AUTOTUNE with AUTOTUNE = tf.data.experimental.AUTOTUNE. The error is caused by a TensorFlow version difference.
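A version-robust way to handle this is a try/except fallback. A minimal sketch, using stand-in objects to simulate the two tf.data API layouts (no TensorFlow required; real code would pass tf.data itself):

```python
from types import SimpleNamespace

def resolve_autotune(tf_data):
    """Prefer tf.data.AUTOTUNE; fall back to tf.data.experimental.AUTOTUNE."""
    try:
        return tf_data.AUTOTUNE
    except AttributeError:
        return tf_data.experimental.AUTOTUNE

# Hypothetical stand-ins for a newer and an older tf.data module:
new_api = SimpleNamespace(AUTOTUNE="new", experimental=SimpleNamespace(AUTOTUNE="old"))
old_api = SimpleNamespace(experimental=SimpleNamespace(AUTOTUNE="old"))

print(resolve_autotune(new_api))  # "new"
print(resolve_autotune(old_api))  # "old"
```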

4. Visualize the data

import matplotlib.pyplot as plt

plt.figure(figsize=(15,10))

for images,labels in train_ds.take(1):
    for i in range(8):
        
        ax=plt.subplot(5,8,i+1)
        plt.imshow(images[i])
        plt.title(class_names[labels[i]])
        
        plt.axis("off")

Output: (a grid of 8 sample images with their class labels)

III. Building the VGG-16 Network

Pros and cons of VGG:

  • Pros

The VGG architecture is very clean: the entire network uses the same convolution kernel size (3x3) and the same max-pooling size (2x2).

  • Cons

1) Training takes a long time and hyperparameter tuning is difficult. 2) It needs a lot of storage, which hinders deployment; for example, the VGG-16 weight file alone is over 500 MB, making it impractical for embedded systems.

Architecture overview:

● 13 convolutional layers, named blockX_convX
● 3 fully connected layers, named fcX and predictions
● 5 pooling layers, named blockX_pool

VGG-16 contains 16 weight layers (13 convolutional and 3 fully connected), hence the name VGG-16.
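The layer inventory above fully determines the parameter count. As a sketch in plain Python (no TensorFlow needed), describing the 13 conv layers as a per-block configuration reproduces the total reported by model.summary(); the same configuration-driven idea also hints at how the repetitive Conv2D calls could be condensed for the exploration task:

```python
# (number of conv layers, filters) per block, mirroring VGG-16
BLOCKS = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

def vgg16_param_count(nb_classes=1000, in_channels=3, fc_units=(4096, 4096)):
    params, ch, spatial = 0, in_channels, 224
    for n_convs, filters in BLOCKS:
        for _ in range(n_convs):
            params += (3 * 3 * ch + 1) * filters   # 3x3 kernel weights + bias
            ch = filters
        spatial //= 2                               # 2x2 max pooling halves H and W
    flat = spatial * spatial * ch                   # 7 * 7 * 512 = 25088
    for units in fc_units:
        params += (flat + 1) * units
        flat = units
    params += (flat + 1) * nb_classes               # softmax output layer
    return params

print(vgg16_param_count())  # 138357544, matching model.summary()
```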

from tensorflow.keras import layers,models,Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Dense,Flatten,Dropout

def vgg16(nb_classes,input_shape):
    input_tensor=Input(shape=input_shape)
    #1st block
    x=Conv2D(64,(3,3),activation='relu',padding='same')(input_tensor)
    x=Conv2D(64,(3,3),activation='relu',padding='same')(x)
    x=MaxPooling2D((2,2),strides=(2,2))(x)
    #2nd block
    x=Conv2D(128,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(128,(3,3),activation='relu',padding='same')(x)
    x=MaxPooling2D((2,2),strides=(2,2))(x)
    #3rd block
    x=Conv2D(256,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(256,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(256,(3,3),activation='relu',padding='same')(x)
    x=MaxPooling2D((2,2),strides=(2,2))(x)
    #4th block
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=MaxPooling2D((2,2),strides=(2,2))(x)
    #5th block
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=Conv2D(512,(3,3),activation='relu',padding='same')(x)
    x=MaxPooling2D((2,2),strides=(2,2))(x)
    #full connection
    x=Flatten()(x)
    x=Dense(4096,activation='relu')(x)
    x=Dense(4096,activation='relu')(x)
    output_tensor=Dense(nb_classes,activation='softmax')(x)
    
    model=Model(input_tensor,output_tensor)
    return model

model=vgg16(1000,(224,224,3))
model.summary()

First, run the model as-is, without any light-weighting, and keep the result for comparison later.

Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 224, 224, 3)]     0         
                                                                 
 conv2d (Conv2D)             (None, 224, 224, 64)      1792      
                                                                 
 conv2d_1 (Conv2D)           (None, 224, 224, 64)      36928     
                                                                 
 max_pooling2d (MaxPooling2  (None, 112, 112, 64)      0         
 D)                                                              
                                                                 
 conv2d_2 (Conv2D)           (None, 112, 112, 128)     73856     
                                                                 
 conv2d_3 (Conv2D)           (None, 112, 112, 128)     147584    
                                                                 
 max_pooling2d_1 (MaxPoolin  (None, 56, 56, 128)       0         
 g2D)                                                            
                                                                 
 conv2d_4 (Conv2D)           (None, 56, 56, 256)       295168    
                                                                 
 conv2d_5 (Conv2D)           (None, 56, 56, 256)       590080    
                                                                 
 conv2d_6 (Conv2D)           (None, 56, 56, 256)       590080    
                                                                 
 max_pooling2d_2 (MaxPoolin  (None, 28, 28, 256)       0         
 g2D)                                                            
                                                                 
 conv2d_7 (Conv2D)           (None, 28, 28, 512)       1180160   
                                                                 
 conv2d_8 (Conv2D)           (None, 28, 28, 512)       2359808   
                                                                 
 conv2d_9 (Conv2D)           (None, 28, 28, 512)       2359808   
                                                                 
 max_pooling2d_3 (MaxPoolin  (None, 14, 14, 512)       0         
 g2D)                                                            
                                                                 
 conv2d_10 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 conv2d_11 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 conv2d_12 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 max_pooling2d_4 (MaxPoolin  (None, 7, 7, 512)         0         
 g2D)                                                            
                                                                 
 flatten (Flatten)           (None, 25088)             0         
                                                                 
 dense (Dense)               (None, 4096)              102764544 
                                                                 
 dense_1 (Dense)             (None, 4096)              16781312  
                                                                 
 dense_2 (Dense)             (None, 1000)              4097000   
                                                                 
=================================================================
Total params: 138357544 (527.79 MB)
Trainable params: 138357544 (527.79 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

IV. Compiling the Model

Before training, the model needs a few more settings, which are added in the compile step:

  • Loss function (loss): measures how accurate the model is during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.

model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy']
)

V. Training the Model

This fixes the problem from T8. There, after each training epoch, the value of history was assigned directly to loss and accuracy, so every recorded result was identical. In this version, each batch's loss and accuracy are appended to a list (i.e. loss.append(history)), preserving the full history, and the value reported for each epoch is the mean over all batches (i.e. np.mean()).
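The difference between the two approaches can be illustrated without TensorFlow, using made-up per-batch losses (stdlib statistics.mean plays the role of np.mean here):

```python
from statistics import mean

batch_losses = [0.9, 0.6, 0.3]   # hypothetical per-batch values from train_on_batch

# T8 (buggy): each assignment overwrites the previous one,
# so only the last batch survives the epoch.
last_only = None
for loss in batch_losses:
    last_only = loss

# T9 (fixed): append every batch, then average over the epoch.
accumulated = []
for loss in batch_losses:
    accumulated.append(loss)
epoch_loss = mean(accumulated)

print(last_only)   # 0.3 — reflects only the final batch
print(epoch_loss)  # 0.6 — mean over the whole epoch
```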

from tqdm import tqdm
import tensorflow.keras.backend as K
import numpy as np

epochs=10
lr=1e-4

# Record training metrics for later analysis
history_train_loss=[]
history_train_accuracy=[]
history_val_loss=[]
history_val_accuracy=[]

for epoch in range(epochs):
    train_total=len(train_ds)
    val_total=len(val_ds)
    
    """
    total: expected number of iterations
    ncols: width of the progress bar
    mininterval: minimum update interval of the bar, in seconds (default: 0.1)
    """
    with tqdm(total=train_total,desc=f'Epoch {epoch+1}/{epochs}',mininterval=1,ncols=100) as pbar:
        
        lr=lr*0.92
        K.set_value(model.optimizer.lr,lr)
        
        # Note: different from T8
        train_loss=[]
        train_accuracy=[]
        
        for image,label in train_ds:
            """
            Train on one batch. Roughly speaking, train_on_batch is a more
            fine-grained alternative to model.fit(), operating on a single batch.
            """
            # This produces the loss and acc for each individual batch
            history=model.train_on_batch(image,label)
            
            # Note: different from T8
            train_loss.append(history[0])
            train_accuracy.append(history[1])
            
            pbar.set_postfix({"train_loss":"%.4f"%history[0],
                              "train_accuracy":"%.4f"%history[1],
                              "lr":K.get_value(model.optimizer.lr)})
            pbar.update(1)
            
        history_train_loss.append(np.mean(train_loss))
        history_train_accuracy.append(np.mean(train_accuracy))
        
    print('Starting validation!')
    
    with tqdm(total=val_total,desc=f'Epoch {epoch+1}/{epochs}',mininterval=0.3,ncols=100) as pbar:
        
        # Note: different from T8
        val_loss=[]
        val_accuracy=[]
        for image,label in val_ds:
            
            # This produces the loss and acc for each individual batch
            history=model.test_on_batch(image,label)
            
            # Note: different from T8
            val_loss.append(history[0])
            val_accuracy.append(history[1])
            
            pbar.set_postfix({"val_loss":"%.4f"%history[0],
                              "val_accuracy":"%.4f"%history[1]})
            pbar.update(1)
        history_val_loss.append(np.mean(val_loss))
        history_val_accuracy.append(np.mean(val_accuracy))
    
    print('Validation finished!')
    print("Validation loss: %.4f"%np.mean(val_loss))
    print("Validation accuracy: %.4f"%np.mean(val_accuracy))

Output:

Epoch 1/10: 100%|█| 340/340 [23:53<00:00,  4.22s/it, train_loss=0.4389, train_accuracy=0.8750, lr=9.
Starting validation!
Epoch 1/10: 100%|█████████████| 85/85 [00:52<00:00,  1.63it/s, val_loss=0.6702, val_accuracy=0.7500]
Validation finished!
Validation loss: 0.5361
Validation accuracy: 0.7265
Epoch 2/10: 100%|█| 340/340 [23:41<00:00,  4.18s/it, train_loss=0.1310, train_accuracy=1.0000, lr=8.
Starting validation!
Epoch 2/10: 100%|█████████████| 85/85 [00:50<00:00,  1.69it/s, val_loss=0.1669, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.2707
Validation accuracy: 0.8838
Epoch 3/10: 100%|█| 340/340 [23:43<00:00,  4.19s/it, train_loss=0.0046, train_accuracy=1.0000, lr=7.
Starting validation!
Epoch 3/10: 100%|█████████████| 85/85 [00:54<00:00,  1.56it/s, val_loss=0.0030, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0445
Validation accuracy: 0.9809
Epoch 4/10: 100%|█| 340/340 [26:36<00:00,  4.69s/it, train_loss=0.0817, train_accuracy=1.0000, lr=7.
Starting validation!
Epoch 4/10: 100%|█████████████| 85/85 [00:56<00:00,  1.49it/s, val_loss=0.0072, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0314
Validation accuracy: 0.9897
Epoch 5/10: 100%|█| 340/340 [26:48<00:00,  4.73s/it, train_loss=0.1259, train_accuracy=0.8750, lr=6.
Starting validation!
Epoch 5/10: 100%|█████████████| 85/85 [00:55<00:00,  1.52it/s, val_loss=0.0593, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0458
Validation accuracy: 0.9824
Epoch 6/10: 100%|█| 340/340 [26:47<00:00,  4.73s/it, train_loss=0.0073, train_accuracy=1.0000, lr=6.
Starting validation!
Epoch 6/10: 100%|█████████████| 85/85 [00:54<00:00,  1.57it/s, val_loss=0.1767, val_accuracy=0.8750]
Validation finished!
Validation loss: 0.0585
Validation accuracy: 0.9838
Epoch 7/10: 100%|█| 340/340 [24:45<00:00,  4.37s/it, train_loss=0.0024, train_accuracy=1.0000, lr=5.
Starting validation!
Epoch 7/10: 100%|█████████████| 85/85 [00:52<00:00,  1.62it/s, val_loss=0.0038, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0235
Validation accuracy: 0.9897
Epoch 8/10: 100%|█| 340/340 [24:12<00:00,  4.27s/it, train_loss=0.0000, train_accuracy=1.0000, lr=5.
Starting validation!
Epoch 8/10: 100%|█████████████| 85/85 [00:51<00:00,  1.66it/s, val_loss=0.0840, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0835
Validation accuracy: 0.9706
Epoch 9/10: 100%|█| 340/340 [23:52<00:00,  4.21s/it, train_loss=0.0021, train_accuracy=1.0000, lr=4.
Starting validation!
Epoch 9/10: 100%|█████████████| 85/85 [00:50<00:00,  1.68it/s, val_loss=0.0367, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0238
Validation accuracy: 0.9912
Epoch 10/10: 100%|█| 340/340 [23:44<00:00,  4.19s/it, train_loss=0.0121, train_accuracy=1.0000, lr=4
Starting validation!
Epoch 10/10: 100%|████████████| 85/85 [00:50<00:00,  1.68it/s, val_loss=0.0000, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0096
Validation accuracy: 0.9971
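The shrinking lr values visible in the progress bars above (9.2e-05 in epoch 1 down to roughly 4.3e-05 by epoch 10) come from the exponential decay applied at the top of each epoch, lr = lr * 0.92. A quick sketch of the schedule:

```python
lr = 1e-4
schedule = []
for epoch in range(10):
    lr *= 0.92                # decay applied before training, as in the loop above
    schedule.append(lr)

print(f"{schedule[0]:.2e}")   # 9.20e-05 for epoch 1
print(f"{schedule[-1]:.2e}")  # 4.34e-05 by epoch 10
```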

VI. Model Evaluation

epochs_range = range(epochs)

plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, history_train_accuracy, label='Training Accuracy')
plt.plot(epochs_range, history_val_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, history_train_loss, label='Training Loss')
plt.plot(epochs_range, history_val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output: (training/validation accuracy and loss curves)

VII. Making the Model Lighter

Because of network problems I simply could not install the GPU build of TensorFlow, so ever since T6 my models have been crawling along like snails. This time I built the model strictly following the original VGG-16, so the parameter count is large and the model weighs in at about 500 MB. Here I try to make it lighter: fewer parameters, a smaller file, and a shorter run time. The changes are confined to the fully connected layers, and whether the results will still be satisfactory remains to be seen. The modification:

    #full connection (replaces the block of the same name in vgg16())
    x=Flatten()(x)
    x=Dense(1024,activation='relu')(x)
    x=Dense(108,activation='relu')(x)
    output_tensor=Dense(nb_classes,activation='softmax')(x)

The first fully connected layer shrinks from 4096 to 1024 units, and the second one all the way down to 108. The resulting model size:

Total params: 40625524 (154.97 MB)
Trainable params: 40625524 (154.97 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

As shown, the parameter count drops by roughly two thirds and the model size falls to about 155 MB. Training the model again gives:
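The saving comes almost entirely from the first Dense layer, whose input is the 25088-dimensional flattened feature map. Quick arithmetic in plain Python (the totals imply the output layer was still built with nb_classes=1000, as in the vgg16(1000, ...) call above):

```python
conv_params = 14_714_688            # the unchanged VGG-16 convolutional stack
flat = 7 * 7 * 512                  # 25088 features after Flatten()

original_fc = (flat + 1) * 4096 + (4096 + 1) * 4096 + (4096 + 1) * 1000
slim_fc     = (flat + 1) * 1024 + (1024 + 1) * 108  + (108 + 1) * 1000

print(conv_params + original_fc)            # 138357544
print(conv_params + slim_fc)                # 40625524
print(1 - 40_625_524 / 138_357_544)         # ≈ 0.71, i.e. roughly a 2/3 reduction
```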

Epoch 1/10: 100%|█| 340/340 [18:11<00:00,  3.21s/it, train_loss=0.3996, train_accuracy=1.0000, lr=9.
Starting validation!
Epoch 1/10: 100%|█████████████| 85/85 [00:50<00:00,  1.67it/s, val_loss=0.5278, val_accuracy=0.7500]
Validation finished!
Validation loss: 0.4936
Validation accuracy: 0.8000
Epoch 2/10: 100%|█| 340/340 [18:04<00:00,  3.19s/it, train_loss=0.2368, train_accuracy=0.8750, lr=8.
Starting validation!
Epoch 2/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0124, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.1080
Validation accuracy: 0.9632
Epoch 3/10: 100%|█| 340/340 [18:02<00:00,  3.18s/it, train_loss=0.0012, train_accuracy=1.0000, lr=7.
Starting validation!
Epoch 3/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0022, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0615
Validation accuracy: 0.9779
Epoch 4/10: 100%|█| 340/340 [18:02<00:00,  3.18s/it, train_loss=0.0033, train_accuracy=1.0000, lr=7.
Starting validation!
Epoch 4/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.1984, val_accuracy=0.8750]
Validation finished!
Validation loss: 0.0519
Validation accuracy: 0.9809
Epoch 5/10: 100%|█| 340/340 [18:01<00:00,  3.18s/it, train_loss=0.0092, train_accuracy=1.0000, lr=6.
Starting validation!
Epoch 5/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0003, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0551
Validation accuracy: 0.9853
Epoch 6/10: 100%|█| 340/340 [18:02<00:00,  3.18s/it, train_loss=0.0098, train_accuracy=1.0000, lr=6.
Starting validation!
Epoch 6/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0223, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0498
Validation accuracy: 0.9779
Epoch 7/10: 100%|█| 340/340 [18:01<00:00,  3.18s/it, train_loss=0.0164, train_accuracy=1.0000, lr=5.
Starting validation!
Epoch 7/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0005, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0324
Validation accuracy: 0.9868
Epoch 8/10: 100%|█| 340/340 [18:01<00:00,  3.18s/it, train_loss=0.0188, train_accuracy=1.0000, lr=5.
Starting validation!
Epoch 8/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0020, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0730
Validation accuracy: 0.9706
Epoch 9/10: 100%|█| 340/340 [18:03<00:00,  3.19s/it, train_loss=0.0212, train_accuracy=1.0000, lr=4.
Starting validation!
Epoch 9/10: 100%|█████████████| 85/85 [00:48<00:00,  1.75it/s, val_loss=0.0001, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0463
Validation accuracy: 0.9868
Epoch 10/10: 100%|█| 340/340 [18:04<00:00,  3.19s/it, train_loss=0.0001, train_accuracy=1.0000, lr=4
Starting validation!
Epoch 10/10: 100%|████████████| 85/85 [00:50<00:00,  1.68it/s, val_loss=0.0000, val_accuracy=1.0000]
Validation finished!
Validation loss: 0.0492
Validation accuracy: 0.9868

The resulting plots: (accuracy and loss curves for the lighter model)

VIII. Reflections

In this project I built the VGG-16 model myself, and the results were quite satisfying, reaching 99.71% in epoch 10. However, since I have had to run everything on a CPU, training is painfully slow. After the light-weighting changes, the accuracy dips slightly but still reaches 98.68%, while the model size shrinks dramatically, so in terms of value for money the slimmer model wins. Of course, that trade-off only holds for my CPU-bound situation; with a GPU one would naturally chase the higher accuracy.
