A simple classification example with a PyTorch network

Another failed attempt. Recording it here: a very simple 5-class classification model, with the network built in PyTorch. My guess is that the poor results come from the dataset itself.

Highlights:
  • simple GPU usage
  • loading and splitting my own dataset (previously I had always worked with other people's data and had never prepared my own)
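For the second point, a minimal sketch of turning your own tabular data into a `DataLoader` via `TensorDataset` (the DataFrame here is random stand-in data, and the last-column-is-label layout is an assumption mirroring the script below):

```python
import numpy as np
import pandas as pd
import torch
from torch.utils.data import TensorDataset, DataLoader

# Stand-in for a CSV: 5 feature columns plus a label column with classes 1..5.
df = pd.DataFrame(np.random.rand(100, 5))
df[5] = np.random.randint(1, 6, size=100)

# Split features and labels, shifting labels from 1..5 down to 0..4
# (CrossEntropyLoss expects class indices starting at 0).
features = torch.tensor(df.iloc[:, :-1].values, dtype=torch.float32)
labels = torch.tensor(df.iloc[:, -1].values, dtype=torch.long) - 1

dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([64, 5]) torch.Size([64])
```

Wrapping features and labels in a `TensorDataset` keeps them paired under shuffling, which avoids the manual column slicing done inside the training loop below.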
# encoding:utf-8
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.autograd import Variable
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"

# load the data
train_data = pd.read_csv('train_data.csv')
test_data = pd.read_csv('test_data.csv')

# GPU / cuDNN speed-ups
torch.cuda.set_device(0)  # select the current device
cudnn.benchmark = True  # let cuDNN autotune for faster computation
cudnn.enabled = True  # cuDNN is a GPU-accelerated library of deep-network primitives; enable it

# convert the data to torch tensors
train_data = torch.from_numpy(train_data.values)
test_data = torch.from_numpy(test_data.values)
print(train_data.size())
print(test_data.size())

# use random data to check that the model runs end to end
train_data = torch.rand(80374, 64)
test_data = torch.rand(34447, 64)

train_loader = DataLoader(dataset=train_data, batch_size=64, shuffle=True, num_workers=4)
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=False, num_workers=4)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.ClassifierLayer = nn.Sequential(
            nn.Dropout(0.7),
            nn.Linear(62, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.4),  # 0.25
            nn.Linear(256, 5),
        )
        # self.fc1 = nn.Linear(62, 5)
        # self.fc2 = nn.Linear(5, 1)
        # self.fc3 = nn.Linear(1, 5)

    def forward(self, x):
        x = self.ClassifierLayer(x)
        # x = F.relu(self.fc1(x))
        # x = F.relu(self.fc2(x))
        # x = self.fc3(x)
        return x
model = Net()
model = model.cuda()
# model = nn.DataParallel(net, [0])

# loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
print('Start training the model!')
correct = 0
total = 0
for epoch in range(5):  # loop over the dataset multiple times

    running_loss = 0.0
    model.train()
    for i, data in enumerate(train_loader, 0):
        # get the inputs
        
        inputs, labels = data[:, 1:-1], data[:, -1]
        labels = labels - 1  # shift labels from 1..5 to 0..4
        inputs = inputs.contiguous().float().cuda()
        # `async=True` is a SyntaxError on Python 3.7+; use non_blocking=True
        labels = labels.contiguous().long().cuda(non_blocking=True)
        # print(data.size(),inputs.size(),labels.size())
        # exit()

        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)
        correct += (predicted == labels).sum().item()
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total += labels.size(0)

        # print statistics
        running_loss += loss.item()
        if i % 200 == 199:    # print every 200 mini-batches
            print('[%d, %5d] loss: %.4f' % (epoch + 1, i + 1, running_loss / 200))
            running_loss = 0.0
    print('Accuracy of the network on the training set: %d %%' % (100 * correct / total))
    correct = 0
    total = 0
    
    
print('Finished Training')



correct = 0
total = 0
with torch.no_grad():
    model.eval()
    for i, data in enumerate(test_loader, 0):
        inputs, labels = data[:, 1:-1], data[:, -1]
        labels = labels - 1  # shift labels from 1..5 to 0..4
        inputs = inputs.contiguous().float().cuda()
        labels = labels.contiguous().long().cuda(non_blocking=True)
        # print(inputs)
        outputs = model(inputs)
        # print(outputs)
        _, predicted = torch.max(outputs.data, 1)
        # print(predicted)
        # exit()
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the test set: %d %%' % (
    100 * correct / total))
      
