Bird Species Recognition with a Convolutional Neural Network (AlexNet)

I. Preface

My environment:

  • Language: Python 3.6.5
  • Editor: Jupyter Notebook
  • Deep learning framework: TensorFlow 2.4.1


II. Preliminary Work

1. Set up the GPU (skip this step if you are running on a CPU)

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpus[0]],"GPU")

2. Import the data

import matplotlib.pyplot as plt
# Support Chinese characters in figures
plt.rcParams['font.sans-serif'] = ['SimHei']  # Display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False  # Display the minus sign correctly

import os, PIL

# Set random seeds so the results are as reproducible as possible
import numpy as np
np.random.seed(1)

import tensorflow as tf
tf.random.set_seed(1)

import pathlib
data_dir = "bird_photos"

data_dir = pathlib.Path(data_dir)

3. Inspect the data

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 565

III. Data Preprocessing

Folder                     Number of images
Bananaquit                 166
Black Throated Bushtiti    111
Black Skimmer              122
Cockatoo                   166
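
These per-class counts can be reproduced with a short pathlib loop over the class sub-directories; a minimal sketch, assuming the bird_photos folder layout described above:

import pathlib

data_dir = pathlib.Path("bird_photos")

# Count the images inside each class sub-directory
for class_dir in sorted(data_dir.iterdir()):
    if class_dir.is_dir():
        print(f"{class_dir.name}: {len(list(class_dir.glob('*')))} images")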

1. Load the data

Use the image_dataset_from_directory method to load the images from disk into a tf.data.Dataset.

batch_size = 8
img_height = 227
img_width = 227
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 565 files belonging to 4 classes.
Using 452 files for training.
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 565 files belonging to 4 classes.
Using 113 files for validation.

We can output the dataset's labels through the class_names attribute. The labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']

2. Visualize the data

plt.figure(figsize=(10, 5))  # Figure width 10, height 5

for images, labels in train_ds.take(1):
    for i in range(8):

        ax = plt.subplot(2, 4, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])

        plt.axis("off")

3. Check the data again

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(8, 227, 227, 3)
(8,)
  • image_batch is a tensor of shape (8, 227, 227, 3): a batch of 8 images of 227x227x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (8,); these are the labels corresponding to the 8 images.

4. Configure the dataset

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
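
Here cache() keeps the decoded images in memory after the first epoch, shuffle(1000) draws samples from a 1000-element shuffle buffer so the training order changes every epoch, and prefetch(AUTOTUNE) lets tf.data prepare the next batch while the current one is still being trained on.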

IV. Introduction to AlexNet (8 Layers)

AlexNet uses the ReLU activation to speed up training and Dropout to prevent overfitting.

AlexNet (8 layers) was the first model to bring convolutional neural networks into computer vision with a breakthrough result. It won ILSVRC 2012 with a top-5 error rate of only 15.3%, a major improvement over the 26.2% of the runner-up, which used traditional methods. Compared with the earlier LeNet, AlexNet stacks more convolutional layers, making the model both deeper and wider.

V. Building the AlexNet (8-Layer) Network Model

from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout,BatchNormalization,Activation

import numpy as np
seed = 7
np.random.seed(seed)

def AlexNet(nb_classes, input_shape):
    input_tensor = Input(shape=input_shape)
    # 1st block
    x = Conv2D(96, (11,11), strides=4, name='block1_conv1')(input_tensor)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3,3), strides=2, name = 'block1_pool')(x)
    
    # 2nd block
    x = Conv2D(256, (5,5), padding='same', name='block2_conv1')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3,3), strides=2, name='block2_pool')(x)
    
    # 3rd block
    x = Conv2D(384, (3,3), activation='relu', padding='same',name='block3_conv1')(x)
    # 4th block
    x = Conv2D(384, (3,3), activation='relu', padding='same',name='block4_conv1')(x)
    
    # 5th block
    x = Conv2D(256, (3,3), activation='relu', padding='same',name='block5_conv1')(x)
    x = MaxPooling2D((3,3), strides=2, name = 'block5_pool')(x)
    
    # full connection
    x = Flatten()(x)
    x = Dense(4096, activation='relu',  name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dropout(0.5)(x)
    output_tensor = Dense(nb_classes, activation='softmax', name='predictions')(x)

    model = Model(input_tensor, output_tensor)
    return model

# The dataset has 4 classes, so nb_classes=len(class_names) would be the natural choice here;
# the original run used 1000 output classes, which is what the summary below reflects.
model = AlexNet(1000, (img_width, img_height, 3))
model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 227, 227, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 55, 55, 96)        34944     
_________________________________________________________________
batch_normalization (BatchNo (None, 55, 55, 96)        384       
_________________________________________________________________
activation (Activation)      (None, 55, 55, 96)        0         
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 27, 27, 96)        0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 27, 27, 256)       614656    
_________________________________________________________________
batch_normalization_1 (Batch (None, 27, 27, 256)       1024      
_________________________________________________________________
activation_1 (Activation)    (None, 27, 27, 256)       0         
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 13, 13, 256)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 13, 13, 384)       885120    
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 13, 13, 384)       1327488   
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 13, 13, 256)       884992    
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 6, 6, 256)         0         
_________________________________________________________________
flatten (Flatten)            (None, 9216)              0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              37752832  
_________________________________________________________________
dropout (Dropout)            (None, 4096)              0         
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 62,379,752
Trainable params: 62,379,048
Non-trainable params: 704
_________________________________________________________________

VI. Compiling the Model

Before the model is ready for training, a few more settings are needed. They are added in the model's compile step:

  • Loss function (loss): measures how well the model is doing during training; we want to minimize it to steer the model in the right direction.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The following example uses accuracy, the fraction of images that are classified correctly.

# Set the optimizer; the learning rate could be changed here.
# opt = tf.keras.optimizers.Adam(learning_rate=1e-7)

model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
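
If training stalls or the validation loss diverges, one option is to pass an explicitly configured optimizer instead of the string "adam". A minimal sketch; the 1e-4 learning rate is only an illustrative value, not the one used in the run below:

# Compile with an explicitly configured Adam optimizer (illustrative learning rate)
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])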

VII. Training the Model

epochs = 20

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/20
57/57 [==============================] - 5s 30ms/step - loss: 9.2789 - accuracy: 0.2166 - val_loss: 3.2340 - val_accuracy: 0.3363
Epoch 2/20
57/57 [==============================] - 1s 14ms/step - loss: 0.9329 - accuracy: 0.6224 - val_loss: 1.1778 - val_accuracy: 0.5310
Epoch 3/20
57/57 [==============================] - 1s 14ms/step - loss: 0.7438 - accuracy: 0.6747 - val_loss: 1.9651 - val_accuracy: 0.5133
Epoch 4/20
57/57 [==============================] - 1s 14ms/step - loss: 0.8875 - accuracy: 0.7025 - val_loss: 1.5589 - val_accuracy: 0.4602
Epoch 5/20
57/57 [==============================] - 1s 14ms/step - loss: 0.6116 - accuracy: 0.7424 - val_loss: 0.9914 - val_accuracy: 0.4956
Epoch 6/20
57/57 [==============================] - 1s 15ms/step - loss: 0.6258 - accuracy: 0.7520 - val_loss: 1.1103 - val_accuracy: 0.5221
Epoch 7/20
57/57 [==============================] - 1s 13ms/step - loss: 0.5138 - accuracy: 0.8034 - val_loss: 0.7832 - val_accuracy: 0.6726
Epoch 8/20
57/57 [==============================] - 1s 14ms/step - loss: 0.5343 - accuracy: 0.7940 - val_loss: 6.1064 - val_accuracy: 0.4602
Epoch 9/20
57/57 [==============================] - 1s 14ms/step - loss: 0.8667 - accuracy: 0.7606 - val_loss: 0.6869 - val_accuracy: 0.7965
Epoch 10/20
57/57 [==============================] - 1s 16ms/step - loss: 0.5785 - accuracy: 0.8141 - val_loss: 1.3631 - val_accuracy: 0.5310
Epoch 11/20
57/57 [==============================] - 1s 15ms/step - loss: 0.4929 - accuracy: 0.8109 - val_loss: 0.7191 - val_accuracy: 0.7345
Epoch 12/20
57/57 [==============================] - 1s 15ms/step - loss: 0.4141 - accuracy: 0.8507 - val_loss: 0.4962 - val_accuracy: 0.8496
Epoch 13/20
57/57 [==============================] - 1s 15ms/step - loss: 0.2591 - accuracy: 0.9148 - val_loss: 0.8015 - val_accuracy: 0.8053
Epoch 14/20
57/57 [==============================] - 1s 15ms/step - loss: 0.2683 - accuracy: 0.9079 - val_loss: 0.5451 - val_accuracy: 0.8142
Epoch 15/20
57/57 [==============================] - 1s 14ms/step - loss: 0.2925 - accuracy: 0.9096 - val_loss: 0.6668 - val_accuracy: 0.8584
Epoch 16/20
57/57 [==============================] - 1s 14ms/step - loss: 0.4009 - accuracy: 0.8804 - val_loss: 1.1609 - val_accuracy: 0.6372
Epoch 17/20
57/57 [==============================] - 1s 14ms/step - loss: 0.4375 - accuracy: 0.8446 - val_loss: 0.9854 - val_accuracy: 0.7965
Epoch 18/20
57/57 [==============================] - 1s 14ms/step - loss: 0.3085 - accuracy: 0.8926 - val_loss: 0.6477 - val_accuracy: 0.8761
Epoch 19/20
57/57 [==============================] - 1s 15ms/step - loss: 0.1200 - accuracy: 0.9538 - val_loss: 1.8996 - val_accuracy: 0.5398
Epoch 20/20
57/57 [==============================] - 1s 15ms/step - loss: 0.3378 - accuracy: 0.9095 - val_loss: 0.9337 - val_accuracy: 0.8053
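
The log above shows the validation accuracy fluctuating considerably between epochs. A common way to keep the best weights rather than the last ones is a ModelCheckpoint callback; the sketch below is illustrative (the file name and monitored metric are my choices, not part of the original run):

# Keep the weights from the epoch with the highest validation accuracy
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model.h5',
    monitor='val_accuracy',
    save_best_only=True)

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    callbacks=[checkpoint])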

VIII. Model Evaluation

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
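
Besides plotting the curves, the final validation metrics can be read off directly with evaluate(); a minimal sketch:

# Evaluate the trained model on the validation set
val_loss_final, val_acc_final = model.evaluate(val_ds, verbose=0)
print(f"Validation loss: {val_loss_final:.4f}, validation accuracy: {val_acc_final:.4f}")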

IX. Saving and Loading the Model

# Save the model (make sure the target directory exists first)
os.makedirs('model', exist_ok=True)
model.save('model/my_model.h5')
# Load the model
new_model = tf.keras.models.load_model('model/my_model.h5')

X. Prediction

# Use the reloaded model (new_model) to check its predictions

plt.figure(figsize=(10, 5))  # Figure width 10, height 5

for images, labels in val_ds.take(1):
    for i in range(8):
        ax = plt.subplot(2, 4, i + 1)

        # Show the image
        plt.imshow(images[i].numpy().astype("uint8"))

        # Add a batch dimension to the image
        img_array = tf.expand_dims(images[i], 0)

        # Use the model to predict the bird in the image
        predictions = new_model.predict(img_array)
        plt.title(class_names[np.argmax(predictions)])

        plt.axis("off")
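
To classify a single photo outside the tf.data pipeline, the image must first be resized to the model's 227x227 input and given a batch dimension. A minimal sketch; the file path below is only a placeholder:

# Placeholder path to a single bird photo
img_path = "bird_photos/Cockatoo/001.jpg"

# Load and resize the image to the model's input size
img = tf.keras.preprocessing.image.load_img(img_path, target_size=(img_height, img_width))
img_array = tf.keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # add the batch dimension

# Predict and map the highest-probability index back to a class name
predictions = new_model.predict(img_array)
print("Predicted class:", class_names[np.argmax(predictions)])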
