Day 02 - Color Image Classification with Deep Learning


'''
>- **🍨 This article is a learning-record post for the [🔗365-day deep learning training camp](https://mp.weixin.qq.com/s/xLjALoOD8HPZcH563En8bQ)**
>- **Reference article: [🔗Deep Learning in 100 Examples - Convolutional Neural Network (CNN) Color Image Classification | Day 2](https://mtyjkh.blog.csdn.net/article/details/116978213)**
>- **🍖 Author: [K同学啊](https://mp.weixin.qq.com/s/k-vYaC8l7uxX51WoypLkTw)**

'''



1. Set up the GPU

import tensorflow as tf
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # If there are multiple GPUs, use only the first one (GPU 0)
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0],"GPU")
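
As a quick sanity check (my addition, not part of the original post), you can print the devices TensorFlow actually sees; on a CPU-only machine the GPU list is simply empty and the block above is skipped.

import tensorflow as tf

# Show which physical GPUs and logical devices TensorFlow can use.
print("Physical GPUs:", tf.config.list_physical_devices("GPU"))
print("Logical devices:", tf.config.list_logical_devices())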

2. Import the data
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
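
A small note I have added: the labels come back as an (N, 1) column vector of integer class ids 0-9, which is why the visualization code further below indexes them as train_labels[i][0].

# CIFAR-10 labels are integer class ids 0-9 stored as an (N, 1) column vector.
print(train_labels.shape)        # (50000, 1)
print(train_labels[:5].ravel())  # first few labels, e.g. [6 9 9 4 1]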



3. Normalize the data
# Scale pixel values into the range [0, 1].
train_images, test_images = train_images / 255.0, test_images / 255.0

train_images.shape,test_images.shape,train_labels.shape,test_labels.shape
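
A minimal check I have added to confirm the scaling: the raw images are uint8 in [0, 255], and dividing by 255.0 yields values in [0, 1] (NumPy promotes them to float64; Keras casts to float32 internally).

# Verify the value range after scaling.
print(train_images.dtype, train_images.min(), train_images.max())  # float64 0.0 1.0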


4. Visualize the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer','dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(20,10))
for i in range(20):
    plt.subplot(5,10,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
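
As an extra check (my addition), np.bincount confirms the class balance; CIFAR-10 has exactly 5,000 training images per class.

import numpy as np

# Count how many training images belong to each of the 10 classes.
counts = np.bincount(train_labels.ravel(), minlength=10)
for name, n in zip(class_names, counts):
    print(f"{name:<10s} {n}")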


5. Build the CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),  # Conv layer 1, 3x3 kernels
    layers.MaxPooling2D((2, 2)),                   # Pooling layer 1, 2x2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Conv layer 2, 3x3 kernels
    layers.MaxPooling2D((2, 2)),                   # Pooling layer 2, 2x2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Conv layer 3, 3x3 kernels

    layers.Flatten(),                      # Flatten layer, connects the conv layers to the dense layers
    layers.Dense(64, activation='relu'),   # Fully connected layer for further feature extraction
    layers.Dense(10)                       # Output layer, one logit per class
])

model.summary()  # Print the network architecture
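
To see where the 122,570 parameters reported by model.summary() come from, here is a quick hand calculation (my addition):

# Conv2D params = kernel_h * kernel_w * in_channels * filters + filters (biases)
conv1 = 3*3*3*32  + 32    # 896
conv2 = 3*3*32*64 + 64    # 18,496
conv3 = 3*3*64*64 + 64    # 36,928
# Dense params = inputs * units + units (biases); Flatten outputs 4*4*64 = 1024 values
dense1 = 1024*64 + 64     # 65,600
dense2 = 64*10 + 10       # 650
print(conv1 + conv2 + conv3 + dense1 + dense2)  # 122570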


6. Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
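
Because the final Dense(10) layer has no activation, the model outputs raw logits, which is why from_logits=True is needed here. A tiny check I have added shows that the loss on logits matches the loss on their softmax when from_logits=False:

# from_logits=True on raw scores == from_logits=False on softmax probabilities.
logits = tf.constant([[2.0, 0.5, -1.0]])
labels = tf.constant([0])
loss_fn_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn_probs  = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
print(float(loss_fn_logits(labels, logits)),
      float(loss_fn_probs(labels, tf.nn.softmax(logits))))  # the two values agree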


7. Train the model
history = model.fit(train_images, train_labels, epochs=20, 
                    validation_data=(test_images, test_labels))
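
The training log further down shows validation loss bottoming out around epoch 8 and then rising while training accuracy keeps climbing, a sign of overfitting. One optional mitigation (my addition, not part of the original run) is an alternative fit call with an EarlyStopping callback:

# Alternative fit call: stop once val_loss stops improving and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                              restore_best_weights=True)
history = model.fit(train_images, train_labels, epochs=20,
                    validation_data=(test_images, test_labels),
                    callbacks=[early_stop])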


8. Predict
import numpy as np

plt.imshow(test_images[1])
plt.show()

pre = model.predict(test_images)
print(class_names[np.argmax(pre[1])])
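
Since the model outputs logits, a softmax (my addition) converts them into probabilities if you also want a confidence value alongside the predicted class:

# Turn the logits for test image 1 into class probabilities.
probs = tf.nn.softmax(pre[1]).numpy()
top = np.argmax(probs)
print(class_names[top], f"{probs[top]:.2%}")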

9. Evaluate the model

import matplotlib.pyplot as plt

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.xlim([0, 20])
plt.ylim([0.4, 1.0])
plt.legend(loc='lower right')
plt.show()

test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)
print(test_loss)
print(test_acc)
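
The corresponding loss curves (a sketch I have added, mirroring the accuracy plot above) make the overfitting clearer: training loss keeps falling while validation loss rises after roughly epoch 8.

plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()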

'''
Weekly summary: I systematically studied NumPy and pandas, two commonly used Python packages, and further deepened my understanding of the CNN network model.
'''

Output (model summary, training log, prediction, and evaluation):
Model: "sequential_9"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_27 (Conv2D)           (None, 30, 30, 32)        896       
_________________________________________________________________
max_pooling2d_18 (MaxPooling (None, 15, 15, 32)        0         
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 13, 13, 64)        18496     
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 6, 6, 64)          0         
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 4, 4, 64)          36928     
_________________________________________________________________
flatten_9 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_18 (Dense)             (None, 64)                65600     
_________________________________________________________________
dense_19 (Dense)             (None, 10)                650       
=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
1563/1563 [==============================] - 4s 2ms/step - loss: 1.5139 - accuracy: 0.4447 - val_loss: 1.2358 - val_accuracy: 0.5594
Epoch 2/20
1563/1563 [==============================] - 4s 2ms/step - loss: 1.1406 - accuracy: 0.5949 - val_loss: 1.1123 - val_accuracy: 0.6068
Epoch 3/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.9933 - accuracy: 0.6517 - val_loss: 0.9888 - val_accuracy: 0.6535
Epoch 4/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.9027 - accuracy: 0.6844 - val_loss: 0.9425 - val_accuracy: 0.6750
Epoch 5/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.8337 - accuracy: 0.7084 - val_loss: 0.9264 - val_accuracy: 0.6761
Epoch 6/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.7778 - accuracy: 0.7256 - val_loss: 0.8938 - val_accuracy: 0.6907
Epoch 7/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.7331 - accuracy: 0.7438 - val_loss: 0.8992 - val_accuracy: 0.6943
Epoch 8/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.6886 - accuracy: 0.7566 - val_loss: 0.8590 - val_accuracy: 0.7110
Epoch 9/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.6501 - accuracy: 0.7709 - val_loss: 0.8859 - val_accuracy: 0.7042
Epoch 10/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.6169 - accuracy: 0.7828 - val_loss: 0.9051 - val_accuracy: 0.6995
Epoch 11/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.5813 - accuracy: 0.7955 - val_loss: 0.9340 - val_accuracy: 0.7049
Epoch 12/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.5424 - accuracy: 0.8054 - val_loss: 0.9041 - val_accuracy: 0.7076
Epoch 13/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.5181 - accuracy: 0.8174 - val_loss: 0.9279 - val_accuracy: 0.7073
Epoch 14/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.4881 - accuracy: 0.8260 - val_loss: 0.9906 - val_accuracy: 0.6906
Epoch 15/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.4626 - accuracy: 0.8351 - val_loss: 0.9769 - val_accuracy: 0.7097
Epoch 16/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.4346 - accuracy: 0.8453 - val_loss: 1.0018 - val_accuracy: 0.7071
Epoch 17/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.4097 - accuracy: 0.8523 - val_loss: 1.0650 - val_accuracy: 0.6973
Epoch 18/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.3893 - accuracy: 0.8595 - val_loss: 1.0354 - val_accuracy: 0.7049
Epoch 19/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.3684 - accuracy: 0.8674 - val_loss: 1.1426 - val_accuracy: 0.6952
Epoch 20/20
1563/1563 [==============================] - 4s 2ms/step - loss: 0.3480 - accuracy: 0.8746 - val_loss: 1.1545 - val_accuracy: 0.7086

ship

313/313 - 0s - loss: 1.1545 - accuracy: 0.7086
1.1545066833496094
0.7085999846458435

