PyTorch Neural Network

Preface

  In "Numpy Neural Network" we implemented a neural network with numpy by hand. In this chapter we build the model with PyTorch, using the same data as in "Numpy Neural Network".

  First, import the required libraries.

import torch
import torch.nn as nn
from torchsummary import summary
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # pick the device based on your machine's configuration

1. Data Preparation

X, y = make_moons(n_samples=1000, noise=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.xlabel("X0")
plt.ylabel("X1")
plt.show()

[Figure: scatter plot of the two-moons dataset, colored by class]

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)  # split the data, train:test = 9:1
x_train, x_test = torch.FloatTensor(x_train).to(device), torch.FloatTensor(x_test).to(device)  # convert the ndarrays to tensors on the chosen device
y_train, y_test = torch.from_numpy(y_train).long().to(device), torch.from_numpy(y_test).long().to(device)  # CrossEntropyLoss expects int64 class labels
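A quick sanity check of the resulting shapes and dtypes (with 1000 samples and a 9:1 split, the training set has 900 rows):

print(x_train.shape, x_train.dtype)  # torch.Size([900, 2]) torch.float32
print(y_train.shape, y_train.dtype)  # torch.Size([900]) torch.int64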

2. Model Definition

class NN(nn.Module):
    def __init__(self, num_classes):
        super(NN, self).__init__()
        self.fc1 = nn.Linear(2, 25)
        self.fc2 = nn.Linear(25, 50)
        self.fc3 = nn.Linear(50, 50)
        self.fc4 = nn.Linear(50, 25)
        self.fc5 = nn.Linear(25, num_classes)  # use num_classes instead of a hard-coded 2
        self.relu = nn.ReLU()  # ReLU activation
        self.sigmoid = nn.Sigmoid()  # sigmoid activation

    def forward(self, x):
        x = self.fc1(x)  # (900, 25)
        x = self.relu(x)  # activate
        x = self.fc2(x)  # (900, 50)
        x = self.relu(x)
        x = self.fc3(x)  # (900, 50)
        x = self.relu(x)
        x = self.fc4(x)  # (900, 25)
        x = self.relu(x)
        x = self.fc5(x)  # (900, 2)
        x = self.sigmoid(x)  # note: CrossEntropyLoss applies its own log-softmax, so this Sigmoid only squeezes the logits into [0, 1]
        return x
num_classes = 2  # the data has 2 classes
model = NN(num_classes).to(device)
summary(model, input_size=(1, 2))  # inspect the model structure; it matches the model in the numpy neural network chapter
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Linear-1                [-1, 1, 25]              75
              ReLU-2                [-1, 1, 25]               0
            Linear-3                [-1, 1, 50]           1,300
              ReLU-4                [-1, 1, 50]               0
            Linear-5                [-1, 1, 50]           2,550
              ReLU-6                [-1, 1, 50]               0
            Linear-7                [-1, 1, 25]           1,275
              ReLU-8                [-1, 1, 25]               0
            Linear-9                 [-1, 1, 2]              52
          Sigmoid-10                 [-1, 1, 2]               0
================================================================
Total params: 5,252
Trainable params: 5,252
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.02
Estimated Total Size (MB): 0.02
----------------------------------------------------------------
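As a cross-check, the parameter count can be computed by hand: each Linear layer has in_features × out_features weights plus out_features biases, e.g. 2 × 25 + 25 = 75 for fc1, and summing over all five layers gives 5,252. The same total can be read off the model directly:

print(sum(p.numel() for p in model.parameters()))  # 5252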
A quick forward pass through the still-untrained model shows the raw outputs:

model(x_train)
tensor([[0.4574, 0.4877],
        [0.4583, 0.4877],
        [0.4595, 0.4878],
        ...,
        [0.4591, 0.4874],
        [0.4580, 0.4889],
        [0.4575, 0.4876]], grad_fn=<SigmoidBackward0>)
learning_rate = 0.01  # learning rate
epochs = 10000  # number of training epochs

3. Loss Function

cost = nn.CrossEntropyLoss()  # cross-entropy loss
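Note that nn.CrossEntropyLoss expects raw scores and internally applies LogSoftmax followed by NLLLoss; a minimal sketch with toy scores verifies the equivalence:

logits = torch.tensor([[2.0, 0.5]])  # toy scores for one sample
target = torch.tensor([0])           # its true class
manual = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
print(torch.allclose(nn.CrossEntropyLoss()(logits, target), manual))  # True

This is also why the final Sigmoid in our model is unusual here: the loss applies its own softmax on top of it.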

4. Optimizer

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # define the optimizer
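We look at optimizers in the next chapter; as a sketch, swapping Adam for plain SGD (closer to the hand-written gradient descent of the numpy chapter) would be a one-line change:

# alternative: plain stochastic gradient descent
# optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)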

5. Model Training

def train(epochs):
    for i in range(epochs):
        y_hat = model(x_train)  # forward pass (model and data are already on the same device)
        loss = cost(y_hat, y_train)  # compute the loss
        optimizer.zero_grad()  # zero the gradients
        loss.backward()  # backward pass
        optimizer.step()  # update the parameters
        if i % 1000 == 0:  # print the loss every 1000 epochs
            print("Epoch[{}/{}]\tLoss[{:.2f}]".format(i, epochs, loss.item()))
train(10000)
Epoch[0/10000]	Loss[0.69]
Epoch[1000/10000]	Loss[0.37]
Epoch[2000/10000]	Loss[0.37]
Epoch[3000/10000]	Loss[0.37]
Epoch[4000/10000]	Loss[0.37]
Epoch[5000/10000]	Loss[0.37]
Epoch[6000/10000]	Loss[0.37]
Epoch[7000/10000]	Loss[0.37]
Epoch[8000/10000]	Loss[0.37]
Epoch[9000/10000]	Loss[0.37]
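The loss flattening out around 0.37 is consistent with the final Sigmoid: it limits each of the two logits to [0, 1], so their gap is at most 1, and the cross-entropy cannot drop below log(1 + e^(-1)) even for a perfectly separated sample:

import math
print(math.log(1 + math.exp(-1)))  # 0.3132..., the best per-sample loss achievable with sigmoid outputs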

6. Model Testing

def test():
    with torch.no_grad():  # no gradients needed for evaluation
        total = x_test.size(0)
        y_hat = model(x_test)
        _, y_pred = torch.max(y_hat, 1)  # predicted class = index of the larger output
        correct = (y_pred == y_test).sum().item()
        print('The accuracy of this test dataset is {}%.'.format(correct / total * 100))
test()
The accuracy of this test dataset is 96.0%.
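To see what this accuracy looks like, a minimal sketch that evaluates the trained model on a grid and overlays the decision boundary on the original scatter plot (the plot_decision_boundary helper below is illustrative, not part of the original code):

import numpy as np

def plot_decision_boundary(model, X, y, steps=200):
    # build a grid slightly larger than the data range
    x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
    y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, steps),
                         np.linspace(y_min, y_max, steps))
    grid = torch.FloatTensor(np.c_[xx.ravel(), yy.ravel()]).to(device)
    with torch.no_grad():
        _, pred = torch.max(model(grid), 1)  # predicted class for every grid point
    plt.contourf(xx, yy, pred.cpu().numpy().reshape(xx.shape), alpha=0.3)
    plt.scatter(X[:, 0], X[:, 1], c=y)
    plt.xlabel("X0")
    plt.ylabel("X1")
    plt.show()

plot_decision_boundary(model, X, y)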

Summary

  The test accuracy is 96.0%, 2 percentage points higher than the 94.0% of the model built with numpy. A mature deep-learning framework is more concise than code we write ourselves, and it gives better results too. In this chapter we also used an optimizer; perhaps the optimizer is what improved the result, so we will investigate optimizers in the next chapter.
