Deep Learning in Practice, Week 2 -- Color Image Recognition

>- **🍨 This article is a learning-record blog for the [🔗365天深度学习训练营](https://mp.weixin.qq.com/s/xLjALoOD8HPZcH563En8bQ)**
>- **Reference: 365天深度学习训练营, Week 5: Sneaker Brand Recognition (readable by training-camp members)**
>- **🍖 Original author: [K同学啊](https://mp.weixin.qq.com/s/xLjALoOD8HPZcH563En8bQ)**

This week's task is recognizing color images with a convolutional neural network built from scratch. The dataset (CIFAR-10) is moderately sized, but the final accuracy leaves clear room for improvement.

1. Set up the device. This post uses the GPU in my own machine, a GTX 1080 Ti.

# Set up the GPU
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")
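To confirm TensorFlow actually sees the card, a quick check like the following helps (a minimal sketch; the printed device list will vary by machine):

# Sanity check: on a machine with a single 1080 Ti this should list one GPU
print(tf.config.list_logical_devices("GPU"))
print("Built with CUDA:", tf.test.is_built_with_cuda())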

2. Import the dataset

from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Download (on first use) and load CIFAR-10
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
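CIFAR-10 contains 60,000 32x32 RGB images in 10 classes, split into 50,000 training and 10,000 test images. A quick sanity check of what load_data() returns also motivates the normalization step that follows:

# Raw pixels are uint8 values in [0, 255]; labels are integer class ids 0-9
print(train_images.dtype, train_images.min(), train_images.max())  # uint8 0 255
print(train_labels.shape)  # (50000, 1) -- note the extra axis on the labels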

3. Normalize the data

Since the pixel values range from 0 to 255, simply dividing by 255.0 rescales them to [0, 1].

# Normalize the pixel values
train_images, test_images = train_images / 255.0, test_images / 255.0
train_images.shape, test_images.shape, train_labels.shape, test_labels.shape
# ((50000, 32, 32, 3), (10000, 32, 32, 3), (50000, 1), (10000, 1))

4. Visualize the data

class_name = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']

# Plot the first 20 training images with their class names
plt.figure(figsize=(20, 20))
for i in range(20):
    plt.subplot(5, 10, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_name[train_labels[i][0]])  # labels are (N, 1), so index twice

5. Build the model

# Build the model: three convolutional layers interleaved with two
# max-pooling layers, followed by a small dense classification head
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)  # 10 classes; no softmax, so the outputs are raw logits
])
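Calling model.summary() is a quick way to verify how each layer transforms its input; the expected output shapes for this architecture are annotated below (roughly 122K trainable parameters in total):

model.summary()
# conv2d          (None, 30, 30, 32)  -- 3x3 valid convolution shrinks 32 -> 30
# max_pooling2d   (None, 15, 15, 32)
# conv2d_1        (None, 13, 13, 64)
# max_pooling2d_1 (None, 6, 6, 64)
# conv2d_2        (None, 4, 4, 64)
# flatten         (None, 1024)
# dense           (None, 64)
# dense_1         (None, 10)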

6. Compile the model

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
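SparseCategoricalCrossentropy fits here because the labels are integer class ids (0-9) rather than one-hot vectors, and from_logits=True matches the final Dense(10) layer having no softmax activation. Had the last layer been Dense(10, activation='softmax'), the loss would instead be constructed with from_logits=False.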

7. Train the model

history = model.fit(train_images, train_labels, epochs=20,
                    validation_data=(test_images, test_labels))

Epoch 1/20
1563/1563 [==============================] - 29s 17ms/step - loss: 1.5689 - accuracy: 0.4253 - val_loss: 1.2995 - val_accuracy: 0.5338
Epoch 2/20
1563/1563 [==============================] - 28s 18ms/step - loss: 1.2116 - accuracy: 0.5707 - val_loss: 1.1985 - val_accuracy: 0.5798
Epoch 3/20
1563/1563 [==============================] - 27s 17ms/step - loss: 1.0499 - accuracy: 0.6286 - val_loss: 1.0068 - val_accuracy: 0.6500
Epoch 4/20
1563/1563 [==============================] - 26s 17ms/step - loss: 0.9501 - accuracy: 0.6669 - val_loss: 0.9589 - val_accuracy: 0.6664
Epoch 5/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.8733 - accuracy: 0.6961 - val_loss: 0.9424 - val_accuracy: 0.6728
Epoch 6/20
1563/1563 [==============================] - 26s 17ms/step - loss: 0.8169 - accuracy: 0.7130 - val_loss: 0.9840 - val_accuracy: 0.6608
Epoch 7/20
1563/1563 [==============================] - 27s 18ms/step - loss: 0.7566 - accuracy: 0.7348 - val_loss: 0.8962 - val_accuracy: 0.6925
Epoch 8/20
1563/1563 [==============================] - 28s 18ms/step - loss: 0.7127 - accuracy: 0.7510 - val_loss: 0.9304 - val_accuracy: 0.6812
Epoch 9/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.6721 - accuracy: 0.7640 - val_loss: 0.8733 - val_accuracy: 0.7022
Epoch 10/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.6339 - accuracy: 0.7771 - val_loss: 0.9032 - val_accuracy: 0.6993
Epoch 11/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.6008 - accuracy: 0.7880 - val_loss: 0.9038 - val_accuracy: 0.7067
Epoch 12/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.5655 - accuracy: 0.7982 - val_loss: 0.8896 - val_accuracy: 0.7067
Epoch 13/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.5337 - accuracy: 0.8115 - val_loss: 0.9292 - val_accuracy: 0.6994
Epoch 14/20
1563/1563 [==============================] - 26s 17ms/step - loss: 0.5067 - accuracy: 0.8173 - val_loss: 0.9405 - val_accuracy: 0.7077
Epoch 15/20
1563/1563 [==============================] - 28s 18ms/step - loss: 0.4762 - accuracy: 0.8319 - val_loss: 0.9913 - val_accuracy: 0.6900
Epoch 16/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.4546 - accuracy: 0.8382 - val_loss: 1.0190 - val_accuracy: 0.7094
Epoch 17/20
1563/1563 [==============================] - 26s 17ms/step - loss: 0.4279 - accuracy: 0.8476 - val_loss: 1.0170 - val_accuracy: 0.6988
Epoch 18/20
1563/1563 [==============================] - 27s 17ms/step - loss: 0.4046 - accuracy: 0.8553 - val_loss: 1.0816 - val_accuracy: 0.6956
Epoch 19/20
1563/1563 [==============================] - 26s 17ms/step - loss: 0.3849 - accuracy: 0.8621 - val_loss: 1.1430 - val_accuracy: 0.6888
Epoch 20/20
1563/1563 [==============================] - 25s 16ms/step - loss: 0.3599 - accuracy: 0.8723 - val_loss: 1.1654 - val_accuracy: 0.6961
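Reading the log, training accuracy climbs to 87.2% while validation accuracy plateaus around 70%, and val_loss bottoms out near epoch 9 before rising again: the model overfits. One common mitigation, sketched below (the patience value is an illustrative assumption, not part of the original run), is an EarlyStopping callback that restores the best weights:

from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 3 consecutive epochs,
# and roll the model back to the weights from its best epoch
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)
history = model.fit(train_images, train_labels, epochs=20,
                    validation_data=(test_images, test_labels),
                    callbacks=[early_stop])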

8. Visualize the test set

plt.imshow(test_images[1])  # show one test image to compare with the prediction below

Model prediction:

import numpy as np

# Predict on the whole test set, then report the class with the largest score
pre = model.predict(test_images)
print(class_name[np.argmax(pre[1])])
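Because the model outputs raw logits, pre[1] is not a probability distribution. np.argmax still picks the predicted class, but if you want confidences, apply a softmax first (a minimal sketch):

# Convert logits to class probabilities before inspecting confidence
probs = tf.nn.softmax(pre[1]).numpy()
print(class_name[np.argmax(probs)], probs.max())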

Model evaluation:

# Evaluate the model: plot the accuracy curves, then compute test accuracy
import matplotlib.pyplot as plt

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc="lower right")
plt.show()

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
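Plotting the loss curves from the same history object makes the overfitting even easier to see:

# Training loss keeps falling while validation loss rises after ~epoch 9,
# confirming the gap visible in the accuracy plot
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()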

9. Summary

After this color-image recognition exercise, it is clear that the CNN built here is too simple: test accuracy sits around 70% and leaves plenty of room for improvement. I suggest building a deeper network or trying a more sophisticated, established architecture (one possible starting point is sketched below); experimenting with other networks is also a good way to discover new ideas.
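As a hedged sketch of that suggestion, the variant below adds Dropout layers and a wider dense head to fight the overfitting seen above (all hyperparameters are illustrative assumptions, not tuned values):

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', padding='same',
                  input_shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),   # regularize the early feature maps
    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),    # heavier dropout before the classifier
    layers.Dense(10)        # still raw logits, as in the original model
])

The same compile and fit calls from steps 6 and 7 apply unchanged.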
