[PyTorch] A deep learning network example based on CIFAR10

This note records how to use torch to design a simple deep learning network experiment on the CIFAR10 dataset.

1. Import third-party libraries

import torch
import torchvision
from tensorboardX import SummaryWriter
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader

2. Load the data

train_data = torchvision.datasets.CIFAR10("./CIFAR10_data", train=True, download=True)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./CIFAR10_data\cifar-10-python.tar.gz
Failed download. Trying https -> http instead. Downloading http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./CIFAR10_data\cifar-10-python.tar.gz
100.0%
Extracting ./CIFAR10_data\cifar-10-python.tar.gz to ./CIFAR10_data
# Check the data type ----> PIL, but torch needs tensor data for training.
# Just set transform=torchvision.transforms.ToTensor() when loading the data.
train_data[1]
(<PIL.Image.Image image mode=RGB size=32x32>, 9)
train_data = torchvision.datasets.CIFAR10("./CIFAR10_data", train=True, download=True, transform=torchvision.transforms.ToTensor())
Files already downloaded and verified
# Check again
train_data[1]
(tensor([[[0.6039, 0.4941, 0.4118,  ..., 0.3569, 0.3412, 0.3098],
          [0.5490, 0.5686, 0.4902,  ..., 0.3765, 0.3020, 0.2784],
          [0.5490, 0.5451, 0.4510,  ..., 0.3098, 0.2667, 0.2627],
          ...,
          [0.6863, 0.6118, 0.6039,  ..., 0.1647, 0.2392, 0.3647],
          [0.6471, 0.6118, 0.6235,  ..., 0.4039, 0.4824, 0.5137],
          [0.6392, 0.6196, 0.6392,  ..., 0.5608, 0.5608, 0.5608]],
 
         [[0.6941, 0.5373, 0.4078,  ..., 0.3725, 0.3529, 0.3176],
          [0.6275, 0.6000, 0.4902,  ..., 0.3882, 0.3137, 0.2863],
          [0.6078, 0.5725, 0.4510,  ..., 0.3216, 0.2745, 0.2706],
          ...,
          [0.6549, 0.6039, 0.6275,  ..., 0.1333, 0.2078, 0.3255],
          [0.6039, 0.5961, 0.6314,  ..., 0.3647, 0.4471, 0.4745],
          [0.5804, 0.5804, 0.6118,  ..., 0.5216, 0.5255, 0.5216]],
 
         [[0.7333, 0.5333, 0.3725,  ..., 0.2784, 0.2784, 0.2745],
          [0.6627, 0.6039, 0.4627,  ..., 0.3059, 0.2431, 0.2392],
          [0.6431, 0.5843, 0.4392,  ..., 0.2510, 0.2157, 0.2157],
          ...,
          [0.6510, 0.6275, 0.6667,  ..., 0.1412, 0.2235, 0.3569],
          [0.5020, 0.5098, 0.5569,  ..., 0.3765, 0.4706, 0.5137],
          [0.4706, 0.4784, 0.5216,  ..., 0.5451, 0.5569, 0.5647]]]),
 9)
test_data = torchvision.datasets.CIFAR10("./CIFAR10_data", train=False, download=True, transform=torchvision.transforms.ToTensor())
Files already downloaded and verified
# Compute the sizes of the training and test sets
train_data_len = len(train_data)
test_data_len = len(test_data)
print('Training set length: {}'.format(train_data_len))
print('Test set length: {}'.format(test_data_len))
Training set length: 50000
Test set length: 10000
# Wrap the datasets in DataLoaders
train_data_loader = DataLoader(train_data, batch_size=64)
test_data_loader = DataLoader(test_data, batch_size=64)
print(train_data_loader)
print(test_data_loader)
<torch.utils.data.dataloader.DataLoader object at 0x000002303A7E83D0>
<torch.utils.data.dataloader.DataLoader object at 0x000002303A7E8070>
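Printing a DataLoader only shows its object address, so it helps to pull one batch and inspect its shapes instead. A minimal sketch (using a small random stand-in dataset so it runs without downloading CIFAR10; with the real `train_data_loader` the shapes are the same):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for CIFAR10: 128 random RGB 32x32 images with labels 0-9
fake_data = TensorDataset(torch.rand(128, 3, 32, 32),
                          torch.randint(0, 10, (128,)))
loader = DataLoader(fake_data, batch_size=64)

# Each batch is a (images, targets) pair
imgs, targets = next(iter(loader))
print(imgs.shape)     # -> torch.Size([64, 3, 32, 32])  (N, C, H, W)
print(targets.shape)  # -> torch.Size([64])
```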

3. Build the neural network

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        # CIFAR10 model: three conv+pool blocks followed by two linear layers
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),   # in: 3x32x32, out: 32x32x32
            MaxPool2d(2),                  # out: 32x16x16
            Conv2d(32, 32, 5, padding=2),  # out: 32x16x16
            MaxPool2d(2),                  # out: 32x8x8
            Conv2d(32, 64, 5, padding=2),  # out: 64x8x8
            MaxPool2d(2),                  # out: 64x4x4
            Flatten(),                     # out: 1024 = 64*4*4
            Linear(64*4*4, 64),
            Linear(64, 10)                 # 10 CIFAR10 classes
        )
    def forward(self, x):
        x = self.model1(x)
        return x
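The 64*4*4 fed into the first Linear layer comes from the three MaxPool2d(2) halvings: the 32x32 input shrinks to 16x16, 8x8, then 4x4, with 64 channels after the last conv. A quick sketch tracing the shapes through the convolutional trunk alone:

```python
import torch
from torch.nn import Sequential, Conv2d, MaxPool2d

# Convolutional trunk only, to trace the spatial dimensions
trunk = Sequential(
    Conv2d(3, 32, 5, padding=2),   # 32x32 -> 32x32 (padding=2 keeps the size)
    MaxPool2d(2),                  # 32x32 -> 16x16
    Conv2d(32, 32, 5, padding=2),  # 16x16 -> 16x16
    MaxPool2d(2),                  # 16x16 -> 8x8
    Conv2d(32, 64, 5, padding=2),  # 8x8   -> 8x8
    MaxPool2d(2),                  # 8x8   -> 4x4
)
x = torch.ones(1, 3, 32, 32)
out = trunk(x)
print(out.shape)             # -> torch.Size([1, 64, 4, 4])
print(out.flatten(1).shape)  # -> torch.Size([1, 1024]), i.e. 64*4*4
```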
3.1 Test the network
test = Test()
input_x = torch.ones((64, 3, 32, 32))
out_y = test(input_x)
out_y.shape
torch.Size([64, 10])
out_y[1]
tensor([ 0.0572,  0.0505,  0.0437,  0.2060,  0.0308, -0.0981,  0.0721,  0.1023,
         0.1390,  0.0337], grad_fn=<SelectBackward0>)
3.2 Set the loss function
loss_fun = nn.CrossEntropyLoss()
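As a reminder, nn.CrossEntropyLoss takes raw logits of shape (N, C) plus integer class targets of shape (N,), and applies log-softmax internally, so the model needs no softmax layer. A toy illustration with made-up logits:

```python
import torch
from torch import nn

loss_fun = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.1, 0.2, 3.0]])  # two samples, three classes
targets = torch.tensor([0, 2])            # correct class index per sample
loss = loss_fun(logits, targets)
print(loss.item())  # small loss: both rows put most mass on the correct class
```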
3.3 Set the optimizer
learning_rate = 1e-2
optimizer = torch.optim.SGD(test.parameters(), lr=learning_rate)
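SGD with lr=1e-2 simply updates each parameter as w <- w - lr * grad after backward(). A minimal one-step sketch on a toy scalar objective:

```python
import torch

# Toy objective: minimize (w - 3)^2, so dL/dw = 2*(w - 3)
w = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=1e-2)

loss = (w - 3.0) ** 2
opt.zero_grad()
loss.backward()   # grad = 2*(0 - 3) = -6
opt.step()        # w = 0 - 0.01 * (-6) = 0.06
print(w.item())   # -> approximately 0.06
```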
3.4 Set some training parameters
## Count the training steps
total_train_step = 0
## Count the test steps
total_test_step = 0
## Number of training epochs
total_epoch = 10

## Set up tensorboard
writer = SummaryWriter("./YonJun_log")

for i in range(total_epoch):
    print("------ Epoch {} starts -------".format(i+1))
    # Training loop
    for data in train_data_loader:
        imgs, target = data
        output_y = test(imgs)
        loss = loss_fun(output_y, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:  # print every 100 steps to avoid flooding the console
            print("training:{}, Loss:{}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)
    # Evaluation loop
    total_test_loss = 0
    with torch.no_grad():
        for data in test_data_loader:
            imgs, target = data
            output_y = test(imgs)
            loss = loss_fun(output_y, target)
            total_test_loss = total_test_loss + loss.item()
    print("Loss:{}".format(total_test_loss))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    total_test_step = total_test_step + 1

    # Save the model after each epoch (save the instance `test`, not the class `Test`)
    torch.save(test, "model_{}.pth".format(i))
writer.close()
------ Epoch 1 starts -------
training:100, Loss:0.7551549673080444
training:200, Loss:0.8748487234115601
training:300, Loss:0.9187159538269043
training:400, Loss:0.8521456122398376
training:500, Loss:0.8002258539199829
training:600, Loss:0.9816480278968811
training:700, Loss:1.0484687089920044
Loss:178.13174575567245
------ Epoch 2 starts -------
training:800, Loss:0.9281576871871948
training:900, Loss:0.8070344924926758
training:1000, Loss:1.0724843740463257
training:1100, Loss:0.9638757705688477
training:1200, Loss:0.8384309411048889
training:1300, Loss:0.804553747177124
training:1400, Loss:0.7303808927536011
training:1500, Loss:0.8349727988243103
Loss:177.53056985139847
------ Epoch 3 starts -------
training:1600, Loss:0.7728996872901917
training:1700, Loss:0.7708233594894409
training:1800, Loss:0.8114984631538391
training:1900, Loss:0.9432680010795593
training:2000, Loss:1.070975422859192
training:2100, Loss:0.6631872057914734
training:2200, Loss:0.6893828511238098
training:2300, Loss:1.023859977722168
Loss:177.24110907316208
------ Epoch 4 starts -------
training:2400, Loss:0.8516899943351746
training:2500, Loss:0.7806205153465271
training:2600, Loss:0.9023863077163696
training:2700, Loss:0.8630157709121704
training:2800, Loss:0.7512081265449524
training:2900, Loss:1.0178464651107788
training:3000, Loss:0.7599277496337891
training:3100, Loss:0.950702965259552
Loss:176.2780049443245
------ Epoch 5 starts -------
training:3200, Loss:0.6675383448600769
training:3300, Loss:0.8793559074401855
training:3400, Loss:0.8816739320755005
training:3500, Loss:0.8316246867179871
training:3600, Loss:0.8430349826812744
training:3700, Loss:0.7882995009422302
training:3800, Loss:0.9813624024391174
training:3900, Loss:0.9530025124549866
Loss:175.99834841489792
------ Epoch 6 starts -------
training:4000, Loss:0.8220319747924805
training:4100, Loss:0.8931095004081726
training:4200, Loss:1.0225799083709717
training:4300, Loss:0.8239660859107971
training:4400, Loss:0.5933955311775208
training:4500, Loss:0.9415645003318787
training:4600, Loss:0.9078661203384399
Loss:175.51610720157623
------ Epoch 7 starts -------
training:4700, Loss:0.670432984828949
training:4800, Loss:0.8908045887947083
training:4900, Loss:0.8217464685440063
training:5000, Loss:0.8287463188171387
training:5100, Loss:0.5940623879432678
training:5200, Loss:0.8417458534240723
training:5300, Loss:0.6588895916938782
training:5400, Loss:0.6623363494873047
Loss:175.0481561422348
------ Epoch 8 starts -------
training:5500, Loss:0.7641991972923279
training:5600, Loss:0.6625202894210815
training:5700, Loss:0.5682169795036316
training:5800, Loss:0.6444393992424011
training:5900, Loss:0.8577033877372742
training:6000, Loss:0.9969536066055298
training:6100, Loss:0.6678980588912964
training:6200, Loss:0.7402710318565369
Loss:174.88561528921127
------ Epoch 9 starts -------
training:6300, Loss:0.7923698425292969
training:6400, Loss:0.6133146286010742
training:6500, Loss:0.9324386715888977
training:6600, Loss:0.6652389168739319
training:6700, Loss:0.599393904209137
training:6800, Loss:0.6970618963241577
training:6900, Loss:0.6301713585853577
training:7000, Loss:0.37432461977005005
Loss:173.943276822567
------ Epoch 10 starts -------
training:7100, Loss:0.6298545598983765
training:7200, Loss:0.5420732498168945
training:7300, Loss:0.6390514373779297
training:7400, Loss:0.48693612217903137
training:7500, Loss:0.6869808435440063
training:7600, Loss:0.9039825201034546
training:7700, Loss:0.5407873392105103
training:7800, Loss:0.8126641511917114
Loss:177.13708823919296
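The evaluation loop above only accumulates loss; on a classification task like CIFAR10, per-epoch accuracy is usually more informative. A minimal sketch of how accuracy could be computed from the logits (shown with synthetic tensors so it stands alone; in the real loop, `output_y` and `target` come from `test(imgs)` and the batch):

```python
import torch

# Synthetic stand-ins: logits for 4 samples over 10 classes, and true labels
output_y = torch.full((4, 10), 0.1)
output_y[0, 3] = 2.0
output_y[1, 3] = 2.0
output_y[2, 7] = 2.0
output_y[3, 0] = 2.0
target = torch.tensor([3, 3, 7, 1])

# argmax over the class dimension gives the predicted class per sample
preds = output_y.argmax(dim=1)
correct = (preds == target).sum().item()
print(correct / len(target))  # -> 0.75 (3 of 4 predictions match)
```

Accumulating `correct` over all test batches and dividing by `test_data_len` would give the test accuracy for the epoch, which could also be logged with `writer.add_scalar`.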

