Day 1 of Deep Learning

>- This post is a learning-log entry for the [🔗365天深度学习训练营](https://mp.weixin.qq.com/s/k-vYaC8l7uxX51WoypLkTw)
>- Reference article: [🔗深度学习100例-卷积神经网络(CNN)天气识别 | 第5天](https://mtyjkh.blog.csdn.net/article/details/117186183)

Setting up the environment
1. Create a new conda environment:

conda create -n sdxx python=3.6.5
conda activate sdxx

2. Install the required packages:

conda install -c conda-forge jupyter-offlinenotebook
conda install -c conda-forge tensorflow  # did not install this way; fall back to PyPI
pip install tensorflow
pip install matplotlib
pip install jupyter
pip list

Then start Jupyter Notebook:

jupyter notebook

The Jupyter home page opens in the browser. Click "New" and create a Python 3 notebook.

3. Import the libraries

import matplotlib.pyplot as plt
import os,PIL

# Set the NumPy random seed so results are as reproducible as possible
import numpy as np
np.random.seed(1)

# Set the TensorFlow random seed so results are as reproducible as possible
import tensorflow as tf
tf.random.set_seed(1)

from tensorflow import keras
from tensorflow.keras import layers,models

import pathlib

Running the cell should complete without errors.

data_dir = "/scratch/2022-08-06/med-zhouh/sdxx/5day/weather_photos/"

data_dir = pathlib.Path(data_dir)

4. Inspect the data
The dataset has four classes: cloudy, rain, shine, and sunrise. Each class is stored in a subfolder of weather_photos named after the class.

image_count = len(list(data_dir.glob('*/*.jpg')))
print("Total images:", image_count)

# Preview the first sunrise image
sunrise = list(data_dir.glob('sunrise/*.jpg'))
PIL.Image.open(str(sunrise[0]))

The cell prints the total image count and displays the first sunrise image.
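The glob pattern '*/*.jpg' matches exactly one directory level followed by a .jpg file. A self-contained sketch of the same counting logic against a temporary mock tree (the class names match the dataset; the file names and per-class counts are made up for illustration):

```python
import pathlib
import tempfile

# Build a throwaway tree that mimics weather_photos/<class>/<image>.jpg
root = pathlib.Path(tempfile.mkdtemp())
counts = {"cloudy": 3, "rain": 2, "shine": 4, "sunrise": 1}
for cls, n in counts.items():
    (root / cls).mkdir()
    for i in range(n):
        (root / cls / f"{cls}_{i}.jpg").touch()

# Same counting logic as in the post
image_count = len(list(root.glob("*/*.jpg")))
print(image_count)  # 3 + 2 + 4 + 1 = 10
```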

Part 2
1. Load the data

Use the image_dataset_from_directory method to load the images from disk into a tf.data.Dataset:

batch_size = 32
img_height = 180
img_width = 180
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

This prints:

Found 1125 files belonging to 4 classes.
Using 900 files for training.

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Output:

Found 1125 files belonging to 4 classes.
Using 225 files for validation.
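The printed numbers are consistent with an 80/20 split of the 1125 images, a quick sanity check of the arithmetic:

```python
total_images = 1125
validation_split = 0.2

# 20% of the files are held out for validation, the rest are used for training
n_val = round(total_images * validation_split)
n_train = total_images - n_val

print(n_train, n_val)  # 900 225
```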

The dataset's labels are available via class_names; labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)

Output:

['cloudy', 'rain', 'shine', 'sunrise']
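Because labels follow the directory names in alphabetical order, the label assignment can be reproduced with a plain sort:

```python
# Directory names in arbitrary (filesystem) order
dirs = ["sunrise", "rain", "cloudy", "shine"]

# image_dataset_from_directory assigns integer labels alphabetically
class_names = sorted(dirs)
print(class_names)  # ['cloudy', 'rain', 'shine', 'sunrise']

# So label 0 is cloudy and label 3 is sunrise
print(class_names.index("sunrise"))  # 3
```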

2. Visualize the data

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")

(The figure shows a grid of 20 sample images, each titled with its class.)
3. Re-check the data

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(32, 180, 180, 3)
(32,)

image_batch is a tensor of shape (32, 180, 180, 3): a batch of 32 images, each 180×180×3, where the last dimension is the RGB color channels.
labels_batch is a tensor of shape (32,), holding one label per image in the batch.
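The shape semantics can be checked without TensorFlow, e.g. with a NumPy placeholder batch (purely illustrative, all zeros):

```python
import numpy as np

batch_size, img_height, img_width, channels = 32, 180, 180, 3

# A fake batch with the same layout: (batch, height, width, RGB channels)
image_batch = np.zeros((batch_size, img_height, img_width, channels), dtype=np.uint8)
labels_batch = np.zeros((batch_size,), dtype=np.int64)

print(image_batch.shape)    # (32, 180, 180, 3)
print(labels_batch.shape)   # (32,)
print(image_batch[0].shape) # one image: (180, 180, 3)
```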

4. Configure the dataset
cache() keeps the decoded images in memory after the first epoch, shuffle(1000) shuffles with a buffer of 1000 elements, and prefetch() overlaps data preprocessing with model execution:
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

Part 3: Build the CNN network
If the convolution-kernel arithmetic is unfamiliar, see: https://blog.csdn.net/qq_38251616/article/details/114278995

The layers.Dropout layer (0.3 here) helps prevent overfitting and improves the model's ability to generalize. In the previous flower-recognition post, the large gap between training and validation accuracy was caused by exactly this kind of overfitting.

num_classes = 4
model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.Dropout(0.3),                           # randomly drop 30% of units to fight overfitting
    
    layers.Flatten(),                      # flatten: bridge from the conv layers to the dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer for further feature extraction
    layers.Dense(num_classes)              # output layer: raw scores (logits) for the 4 classes
])

model.summary()  # print the network architecture

Output:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
rescaling_1 (Rescaling)      (None, 180, 180, 3)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 178, 178, 16)      448       
_________________________________________________________________
average_pooling2d_2 (Average (None, 89, 89, 16)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 87, 87, 32)        4640      
_________________________________________________________________
average_pooling2d_3 (Average (None, 43, 43, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 41, 41, 64)        18496     
_________________________________________________________________
dropout_1 (Dropout)          (None, 41, 41, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 107584)            0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               13770880  
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 516       
=================================================================
Total params: 13,794,980
Trainable params: 13,794,980
Non-trainable params: 0
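The output shapes and parameter counts in the summary can be reproduced by hand. A sketch of the standard formulas, assuming 'valid' padding and stride 1 (the Keras defaults used above):

```python
def conv_out(size, kernel=3):
    # 'valid' padding, stride 1: output = input - kernel + 1
    return size - kernel + 1

def conv_params(c_in, c_out, kernel=3):
    # weights per filter plus one bias per output channel
    return (kernel * kernel * c_in + 1) * c_out

def pool_out(size, pool=2):
    # 2x2 pooling with stride 2 floors odd sizes
    return size // pool

s = 180
s = conv_out(s)   # 178
s = pool_out(s)   # 89
s = conv_out(s)   # 87
s = pool_out(s)   # 43
s = conv_out(s)   # 41

flat = s * s * 64           # 41 * 41 * 64 = 107584 flattened features
dense1 = (flat + 1) * 128   # 13,770,880 parameters in the first Dense layer

print(conv_params(3, 16), conv_params(16, 32), conv_params(32, 64))  # 448 4640 18496
print(flat, dense1)  # 107584 13770880
```

These match the 448, 4640, 18496, and 13,770,880 entries in the summary exactly.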

Part 4: Compile
Before training, the model needs a few more settings, supplied in the compile step:

Loss function (loss): measures how far the model's predictions are from the true labels during training; training tries to minimize it.
Optimizer (optimizer): decides how the model is updated based on the data it sees and its loss function.
Metrics (metrics): used to monitor the training and validation steps. Here we use accuracy, the fraction of images that are classified correctly.
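Since the final Dense layer has no softmax, from_logits=True tells the loss to apply softmax internally before taking the negative log-probability of the true class. A minimal numeric sketch of sparse categorical cross-entropy on one example (the logit values are hypothetical):

```python
import math

def sparse_categorical_crossentropy(logits, label):
    # softmax over the raw scores, then negative log-probability of the true class
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label])

logits = [2.0, 1.0, 0.1, 0.5]  # hypothetical raw outputs for the 4 weather classes
loss = sparse_categorical_crossentropy(logits, label=0)
print(loss)  # small, because the true class already has the highest logit
```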

opt = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Part 5: Train the model

epochs = 10

history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

Output:

Epoch 1/10
29/29 [==============================] - 11s 234ms/step - loss: 1.0135 - accuracy: 0.6278 - val_loss: 0.6895 - val_accuracy: 0.7244
Epoch 2/10
29/29 [==============================] - 6s 217ms/step - loss: 0.5197 - accuracy: 0.8300 - val_loss: 0.5395 - val_accuracy: 0.8000
Epoch 3/10
29/29 [==============================] - 6s 196ms/step - loss: 0.3535 - accuracy: 0.8700 - val_loss: 0.7650 - val_accuracy: 0.7378
Epoch 4/10
29/29 [==============================] - 6s 203ms/step - loss: 0.2933 - accuracy: 0.9000 - val_loss: 0.4826 - val_accuracy: 0.8133
Epoch 5/10
29/29 [==============================] - 6s 203ms/step - loss: 0.1891 - accuracy: 0.9311 - val_loss: 0.3608 - val_accuracy: 0.8711
Epoch 6/10
29/29 [==============================] - 6s 202ms/step - loss: 0.1985 - accuracy: 0.9189 - val_loss: 0.4878 - val_accuracy: 0.8311
Epoch 7/10
29/29 [==============================] - 6s 197ms/step - loss: 0.1221 - accuracy: 0.9556 - val_loss: 0.3838 - val_accuracy: 0.8800
Epoch 8/10
29/29 [==============================] - 6s 198ms/step - loss: 0.1085 - accuracy: 0.9589 - val_loss: 0.5011 - val_accuracy: 0.8444
Epoch 9/10
29/29 [==============================] - 6s 202ms/step - loss: 0.0704 - accuracy: 0.9789 - val_loss: 0.3517 - val_accuracy: 0.8978
Epoch 10/10
29/29 [==============================] - 6s 204ms/step - loss: 0.1002 - accuracy: 0.9700 - val_loss: 0.4030 - val_accuracy: 0.8800
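A small helper to pick the best epoch out of the log above (the list is copied from the val_accuracy values printed there):

```python
# Validation accuracy per epoch, copied from the training log above
val_accuracy = [0.7244, 0.8000, 0.7378, 0.8133, 0.8711,
                0.8311, 0.8800, 0.8444, 0.8978, 0.8800]

best_idx = max(range(len(val_accuracy)), key=val_accuracy.__getitem__)
print(f"best epoch: {best_idx + 1}, val_accuracy: {val_accuracy[best_idx]}")
# best epoch: 9, val_accuracy: 0.8978
```

In practice the same values come from history.history['val_accuracy'] after model.fit returns.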

Part 6: Evaluate the model

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

(The figure shows the training/validation accuracy curves on the left and the training/validation loss curves on the right.)