PyTorch in Practice (October Study Group, Task 4)

This post shows how to build and train a convolutional neural network (CNN) on the FashionMNIST dataset with PyTorch. It walks through defining hyperparameters, loading the data, building the model, and choosing a loss function and optimizer, then runs multiple training epochs while recording loss and accuracy, illustrating a basic deep-learning training workflow.



Preface

Continuing with PyTorch.



1. Defining Hyperparameters

import os
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim

# Configure the GPU; there are two options:
## Option 1: use os.environ to restrict which devices are visible
# os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# Option 2: create a "device" object, then move anything GPU-bound with .to(device)
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")

## Other hyperparameters: batch_size, num_workers, learning rate, and total epochs
batch_size = 256
num_workers = 0
lr = 1e-4
epochs = 20
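The `.to(device)` pattern from Option 2 above keeps the same script runnable on both CPU and GPU machines. A minimal sketch (using a dummy batch rather than real data):

```python
import torch

# Pick the device once; on a machine without CUDA this falls back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A dummy batch of FashionMNIST-sized images: (batch, channels, height, width).
x = torch.randn(4, 1, 28, 28)
x = x.to(device)  # no-op on CPU, host-to-device transfer on GPU
print(x.device)
```

Every tensor and module that participates in a forward pass must live on the same device, so this one-line move is applied to the model, the inputs, and the labels alike.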

2. Loading the Data

from torchvision import transforms
from torchvision import datasets
image_size = 28
data_transform = transforms.Compose([
    transforms.Resize(image_size),  # resize the images
    transforms.ToTensor()           # convert to a tensor (scales pixels to [0, 1])
])  # the transforms module is generally used for data preprocessing

train_data = datasets.FashionMNIST(root='./', train=True, download=True, transform=data_transform)
test_data = datasets.FashionMNIST(root='./', train=False, download=True, transform=data_transform)
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)
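It is worth checking one batch from the loader to confirm the shapes before defining the model. A sketch using a small synthetic `TensorDataset` in place of FashionMNIST, so it runs without downloading anything:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Stand-in for FashionMNIST: 100 random 1x28x28 "images" with labels 0-9.
images = torch.rand(100, 1, 28, 28)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(images, labels)

# drop_last=True discards the final incomplete batch (here 100 % 32 = 4 samples),
# so every batch the model sees has exactly batch_size samples.
loader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=True)

data, target = next(iter(loader))
print(data.shape, target.shape)  # torch.Size([32, 1, 28, 28]) torch.Size([32])
```

With the real FashionMNIST loaders above, each training batch is `[256, 1, 28, 28]` with labels of shape `[256]`.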

3. Defining the Model

The code is as follows (example):

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 5),
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3),
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3)
        )
        self.fc = nn.Sequential(
            nn.Linear(64*4*4, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )
        
    def forward(self, x):
        x = self.conv(x)
        x = x.view(-1, 64*4*4)
        x = self.fc(x)
        # x = nn.functional.normalize(x)
        return x

model = Net()
model = model.to(device)  # move the model to the device selected earlier
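The `nn.Linear(64*4*4, 512)` input size comes from tracing the spatial dimensions through the conv stack: 28 → conv(5) → 24 → pool(2) → 12 → conv(5) → 8 → pool(2) → 4. A quick sketch to verify this with a dummy forward pass (dropout omitted, since it does not change shapes):

```python
import torch
import torch.nn as nn

# The convolutional part of the model above, minus ReLU/Dropout
# (neither affects spatial dimensions, only values).
conv = nn.Sequential(
    nn.Conv2d(1, 32, 5), nn.MaxPool2d(2, stride=2),
    nn.Conv2d(32, 64, 5), nn.MaxPool2d(2, stride=2),
)
out = conv(torch.zeros(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 64, 4, 4])
```

This is why `forward` flattens with `x.view(-1, 64*4*4)` before the fully connected head.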

4. Defining the Loss Function and Optimizer, and Training the Model

import torch.optim as optim
criterion = nn.CrossEntropyLoss()
# To re-weight classes, pass a tensor (a plain list is not accepted), e.g.:
# criterion = nn.CrossEntropyLoss(weight=torch.tensor([1., 1., 1., 1., 3., 1., 1., 1., 1., 1.]))
optimizer = optim.Adam(model.parameters(), lr=0.001)  # note: overrides the lr defined earlier
def train(epoch):
    model.train()
    train_loss = 0
    for data, label in train_loader:
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))
def val(epoch):       
    model.eval()
    val_loss = 0
    gt_labels = []
    pred_labels = []
    with torch.no_grad():
        for data, label in test_loader:
            data, label = data.to(device), label.to(device)
            output = model(data)
            preds = torch.argmax(output, 1)
            gt_labels.append(label.cpu().data.numpy())
            pred_labels.append(preds.cpu().data.numpy())
            loss = criterion(output, label)
            val_loss += loss.item()*data.size(0)
    val_loss = val_loss/len(test_loader.dataset)
    gt_labels, pred_labels = np.concatenate(gt_labels), np.concatenate(pred_labels)
    acc = np.sum(gt_labels==pred_labels)/len(pred_labels)
    print('Epoch: {} \tValidation Loss: {:.6f}, Accuracy: {:.6f}'.format(epoch, val_loss, acc))
for epoch in range(1, epochs+1):
    train(epoch)
    val(epoch)
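After training, the learned weights can be persisted and restored via the model's `state_dict`; the same pattern applies to the `Net` above. A minimal sketch with a tiny stand-in module and an illustrative filename:

```python
import torch
import torch.nn as nn

# A tiny module stands in for Net; the filename is just an illustration.
net = nn.Linear(4, 2)
torch.save(net.state_dict(), "checkpoint.pth")

# Rebuild the same architecture and load the weights back.
net2 = nn.Linear(4, 2)
net2.load_state_dict(torch.load("checkpoint.pth"))
net2.eval()  # switch to inference mode before predicting

# The restored copy produces identical outputs.
x = torch.randn(3, 4)
print(torch.allclose(net(x), net2(x)))  # True
```

Saving only the `state_dict` (rather than the whole model object) is the usual choice, since it stays loadable even if the surrounding script changes.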

Results

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Epoch: 1 	Training Loss: 0.677033
Epoch: 1 	Validation Loss: 0.495211, Accuracy: 0.818800
Epoch: 2 	Training Loss: 0.425615
Epoch: 2 	Validation Loss: 0.358788, Accuracy: 0.871400
Epoch: 3 	Training Loss: 0.362201
Epoch: 3 	Validation Loss: 0.326068, Accuracy: 0.881500
Epoch: 4 	Training Loss: 0.327386
Epoch: 4 	Validation Loss: 0.305909, Accuracy: 0.890500
Epoch: 5 	Training Loss: 0.305946
Epoch: 5 	Validation Loss: 0.285962, Accuracy: 0.897400
Epoch: 6 	Training Loss: 0.285503
Epoch: 6 	Validation Loss: 0.280432, Accuracy: 0.896500
Epoch: 7 	Training Loss: 0.274258
Epoch: 7 	Validation Loss: 0.275422, Accuracy: 0.898300
Epoch: 8 	Training Loss: 0.262215
Epoch: 8 	Validation Loss: 0.253080, Accuracy: 0.908600
Epoch: 9 	Training Loss: 0.254621
Epoch: 9 	Validation Loss: 0.257004, Accuracy: 0.905500
Epoch: 10 	Training Loss: 0.240819
Epoch: 10 	Validation Loss: 0.243566, Accuracy: 0.911500
Epoch: 11 	Training Loss: 0.234381
Epoch: 11 	Validation Loss: 0.250187, Accuracy: 0.908900
Epoch: 12 	Training Loss: 0.226367
Epoch: 12 	Validation Loss: 0.248466, Accuracy: 0.910400
Epoch: 13 	Training Loss: 0.220683
Epoch: 13 	Validation Loss: 0.237766, Accuracy: 0.912500
Epoch: 14 	Training Loss: 0.212676
Epoch: 14 	Validation Loss: 0.237252, Accuracy: 0.910600
Epoch: 15 	Training Loss: 0.204036
Epoch: 15 	Validation Loss: 0.233667, Accuracy: 0.915500
Epoch: 16 	Training Loss: 0.201117
Epoch: 16 	Validation Loss: 0.235281, Accuracy: 0.911800
Epoch: 17 	Training Loss: 0.192603
Epoch: 17 	Validation Loss: 0.224099, Accuracy: 0.917600
Epoch: 18 	Training Loss: 0.189722
Epoch: 18 	Validation Loss: 0.239020, Accuracy: 0.909800
Epoch: 19 	Training Loss: 0.186247
Epoch: 19 	Validation Loss: 0.229205, Accuracy: 0.917100
Epoch: 20 	Training Loss: 0.175355
Epoch: 20 	Validation Loss: 0.220682, Accuracy: 0.920900