26_PyTorch Multi-Class Classification: Softmax in Practice, and Classification with a Neural Network

This article was compiled from study notes on other blog posts.

1.21.PyTorch multi-class classification
1.21.1.PyTorch: Softmax multi-class classification in practice
1.21.1.1.The MNIST dataset
1.21.1.2.Softmax classification
1.21.1.3.PyTorch in practice
1.21.2.Classification with a neural network

1.21.PyTorch multi-class classification

1.21.1.PyTorch: Softmax multi-class classification in practice

A common approach to multi-class classification is to apply softmax normalization to the last layer's output; the position of the largest value then gives the predicted class for that sample. This article uses the PyTorch framework and the classic MNIST image dataset to walk through multi-class classification.
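As a quick illustration of that last step, here is a minimal sketch (with made-up logits, not from the original article) of how softmax plus argmax yields a class label:

import torch

# made-up scores for one sample over 10 classes
logits = torch.tensor([[0.2, 1.5, -0.3, 3.1, 0.0, -1.2, 0.7, 0.1, 2.4, -0.5]])
probs = torch.softmax(logits, dim=1)   # normalize into a probability distribution
pred = probs.argmax(dim=1)             # position of the largest value = predicted class
print(probs.sum().item())              # ≈ 1.0
print(pred.item())                     # 3 (index of the largest logit)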

1.21.1.1.The MNIST dataset

The MNIST dataset (a handwritten-digit dataset) comes from the United States' National Institute of Standards and Technology (NIST). The training set consists of digits handwritten by 250 different people, 50% of them high-school students and 50% staff of the Census Bureau; the test set contains handwritten digits in the same proportions. The dataset can be downloaded from http://yann.lecun.com/exdb/mnist/. It comprises 60,000 training samples and 10,000 test samples.
The archive consists of:
train-images-idx3-ubyte.gz (training-set images)
train-labels-idx1-ubyte.gz (training-set labels)
t10k-images-idx3-ubyte.gz (test-set images)
t10k-labels-idx1-ubyte.gz (test-set labels)

MNIST is a classic image dataset with 10 classes (the digits 0 through 9). Each 28×28 image is flattened into a 784-dimensional vector, which serves as the input features of the first layer.
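Concretely, this flattening is just a reshape. A minimal sketch on a made-up batch (shapes only, no real MNIST data):

import torch

batch = torch.randn(4, 1, 28, 28)   # fake batch shaped like MNIST: (N, channels, height, width)
flat = batch.view(-1, 28 * 28)      # flatten each image into a 784-dimensional row vector
print(flat.shape)                   # torch.Size([4, 784])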

1.21.1.2.Softmax classification

The softmax function maps an arbitrary K-dimensional real vector to another K-dimensional real vector whose elements all lie in (0, 1) and sum to 1, i.e., a probability distribution. When using softmax for multi-class classification, the class can be chosen by value, typically the dimension with the largest probability. Introductions to softmax are widely available online, so only the formula is given here for reference. Below, PyTorch is used to define a multi-layer network (4 hidden layers, with softmax probability normalization at the end); the output layer has 10 units, one per class.
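For reference, the standard softmax formula is

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K$$

so each output is positive and the outputs sum to 1.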

1.21.1.3.PyTorch in practice
# -*- coding: UTF-8 -*-

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

# Training settings
batch_size = 64

# MNIST Dataset
train_dataset = datasets.MNIST(root='./mnist_data/',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='./mnist_data/',
                              train=False,
                              transform=transforms.ToTensor())

# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = nn.Linear(784, 520)
        self.l2 = nn.Linear(520, 320)
        self.l3 = nn.Linear(320, 240)
        self.l4 = nn.Linear(240, 120)
        self.l5 = nn.Linear(120, 10)

    def forward(self, x):
        # Flatten the data (n, 1, 28, 28) --> (n, 784)
        x = x.view(-1, 784)
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        x = F.relu(self.l3(x))
        x = F.relu(self.l4(x))
        return F.log_softmax(self.l5(x), dim=1)
        #return self.l5(x)


model = Net()

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


def train(epoch):
    # iterate over the training set one batch at a time
    for batch_idx, (data, target) in enumerate(train_loader):

        optimizer.zero_grad()
        output = model(data)
        # loss
        loss = F.nll_loss(output, target)
        loss.backward()
        # update
        optimizer.step()
        if batch_idx % 200 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test():
    test_loss = 0
    correct = 0
    # evaluate on the test set; torch.no_grad() replaces the deprecated volatile=True
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            # accumulate the mean loss of each batch
            test_loss += F.nll_loss(output, target).item()
            # the index of the max log-probability is the predicted class
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)
    ))


for epoch in range(1, 6):
    train(epoch)
    test()

Output:

Train Epoch: 1 [0/60000 (0%)]	Loss: 2.299828
Train Epoch: 1 [12800/60000 (21%)]	Loss: 2.296097
Train Epoch: 1 [25600/60000 (43%)]	Loss: 2.286291
Train Epoch: 1 [38400/60000 (64%)]	Loss: 2.258982
Train Epoch: 1 [51200/60000 (85%)]	Loss: 2.001041
Test set: Average loss: 0.0223, Accuracy: 5924/10000 (59%)
Train Epoch: 2 [0/60000 (0%)]	Loss: 1.392825
Train Epoch: 2 [12800/60000 (21%)]	Loss: 0.917865
Train Epoch: 2 [25600/60000 (43%)]	Loss: 0.554404
Train Epoch: 2 [38400/60000 (64%)]	Loss: 0.556347
Train Epoch: 2 [51200/60000 (85%)]	Loss: 0.422638
Test set: Average loss: 0.0065, Accuracy: 8784/10000 (88%)
Train Epoch: 3 [0/60000 (0%)]	Loss: 0.348750
Train Epoch: 3 [12800/60000 (21%)]	Loss: 0.396100
Train Epoch: 3 [25600/60000 (43%)]	Loss: 0.404045
Train Epoch: 3 [38400/60000 (64%)]	Loss: 0.275161
Train Epoch: 3 [51200/60000 (85%)]	Loss: 0.526218
Test set: Average loss: 0.0052, Accuracy: 8978/10000 (90%)
Train Epoch: 4 [0/60000 (0%)]	Loss: 0.422416
Train Epoch: 4 [12800/60000 (21%)]	Loss: 0.269215
Train Epoch: 4 [25600/60000 (43%)]	Loss: 0.182410
Train Epoch: 4 [38400/60000 (64%)]	Loss: 0.150055
Train Epoch: 4 [51200/60000 (85%)]	Loss: 0.224126
Test set: Average loss: 0.0036, Accuracy: 9333/10000 (93%)
Train Epoch: 5 [0/60000 (0%)]	Loss: 0.149385
Train Epoch: 5 [12800/60000 (21%)]	Loss: 0.271054
Train Epoch: 5 [25600/60000 (43%)]	Loss: 0.340432
Train Epoch: 5 [38400/60000 (64%)]	Loss: 0.311231
Train Epoch: 5 [51200/60000 (85%)]	Loss: 0.127134
Test set: Average loss: 0.0026, Accuracy: 9511/10000 (95%)
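One detail worth noting about the model above: F.log_softmax in forward() combined with F.nll_loss during training is equivalent to applying F.cross_entropy to raw logits, which is what the commented-out return self.l5(x) hints at. A minimal sketch verifying the equivalence on made-up data (not part of the original article):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)             # made-up raw scores: batch of 8, 10 classes
target = torch.randint(0, 10, (8,))     # made-up integer class labels

loss_a = F.nll_loss(F.log_softmax(logits, dim=1), target)
loss_b = F.cross_entropy(logits, target)    # fuses log_softmax and nll_loss
print(torch.allclose(loss_a, loss_b))       # True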

1.21.2.Classification with a neural network

This section takes the simplest possible route to see how a neural network classifies things. The implementation is as follows:

# -*- coding: UTF-8 -*-

import torch
import torch.nn.functional as F

# create some fake data
n_data = torch.ones(100, 2)         # base shape of the data
x0 = torch.normal(2*n_data, 1)      # class-0 x data (tensor), shape=(100, 2)
y0 = torch.zeros(100)               # class-0 y data (tensor), shape=(100,)
x1 = torch.normal(-2*n_data, 1)     # class-1 x data (tensor), shape=(100, 2)
y1 = torch.ones(100)                # class-1 y data (tensor), shape=(100,)

# Note: x and y must have exactly the forms below (torch.cat concatenates the data)
x = torch.cat((x0, x1), 0).type(torch.FloatTensor)    # FloatTensor = 32-bit floating point
y = torch.cat((y0, y1), ).type(torch.LongTensor)      # LongTensor = 64-bit integer



# To build a neural network we can use torch's Module system directly: first define all the
# layer attributes in __init__(), then wire the layers together in forward(x). This is
# essentially the same as the earlier regression network; activation functions are used
# when connecting the layers.


# build the neural network
class Net(torch.nn.Module):    # inherit from torch's Module
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()                            # inherit __init__
        self.hidden = torch.nn.Linear(n_feature, n_hidden)     # hidden layer, linear output
        self.out = torch.nn.Linear(n_hidden, n_output)         # output layer, linear output

    def forward(self, x):
        # forward pass: take the input and compute the network's output
        x = F.relu(self.hidden(x))    # activation applied to the hidden layer's linear output
        x = self.out(x)               # raw output; not yet the prediction, which needs extra computation
        return x


net = Net(n_feature=2, n_hidden=10, n_output=2)                # one output per class
print(net)
# train the network
# the optimizer is the training tool
optimizer = torch.optim.SGD(net.parameters(), lr=0.02)         # pass in all of net's parameters and the learning rate
# when computing the loss, note that the targets are NOT one-hot:
# they are a 1D LongTensor of shape (batch,), while the predictions are a 2D tensor (batch, n_classes)
loss_func = torch.nn.CrossEntropyLoss()

for t in range(200):
    out = net(x)                 # feed the training data x to net and get the raw output

    loss = loss_func(out, y)     # compute the loss between output and targets
    print(loss)

    optimizer.zero_grad()        # clear the gradients left over from the previous step
    loss.backward()              # backpropagate the loss and compute parameter gradients
    optimizer.step()             # apply the updates to net's parameters

Output:

Net(
  (hidden): Linear(in_features=2, out_features=10, bias=True)
  (out): Linear(in_features=10, out_features=2, bias=True)
)
tensor(1.2678, grad_fn=<NllLossBackward>)
tensor(1.1538, grad_fn=<NllLossBackward>)
tensor(1.0548, grad_fn=<NllLossBackward>)
tensor(0.9684, grad_fn=<NllLossBackward>)
tensor(0.8926, grad_fn=<NllLossBackward>)
... (intermediate output omitted) ...
tensor(0.2143, grad_fn=<NllLossBackward>)
tensor(0.2082, grad_fn=<NllLossBackward>)
tensor(0.2023, grad_fn=<NllLossBackward>)
tensor(0.1968, grad_fn=<NllLossBackward>)
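As the comment in forward() notes, out holds raw scores rather than class labels. A common way to turn them into predictions, reusing the net, x, and y defined above (a sketch, not from the original article):

prediction = torch.max(F.softmax(out, dim=1), 1)[1]   # predicted class index per sample
accuracy = (prediction == y).float().mean().item()    # fraction of samples classified correctly
print(accuracy)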

Another example:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms


batch_size=200
learning_rate=0.01
epochs=10

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)



# weights and biases for a 3-layer MLP: 784 -> 200 -> 200 -> 10
w1, b1 = torch.randn(200, 784, requires_grad=True),\
         torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True),\
         torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True),\
         torch.zeros(10, requires_grad=True)

# Kaiming (He) initialization suits ReLU networks and keeps activations well scaled
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)


def forward(x):
    x = x@w1.t() + b1
    x = F.relu(x)
    x = x@w2.t() + b2
    x = F.relu(x)
    x = x@w3.t() + b3
    # return raw logits; CrossEntropyLoss applies log-softmax internally,
    # so no activation should be applied to the output layer
    return x



optimizer = optim.SGD([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = nn.CrossEntropyLoss()

for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)

        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        # print(w1.grad.norm(), w2.grad.norm())
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))


    test_loss = 0
    correct = 0
    # evaluate without tracking gradients
    with torch.no_grad():
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            logits = forward(data)
            # accumulate the mean loss of each batch
            test_loss += criteon(logits, target).item()

            # index of the max logit is the predicted class
            pred = logits.max(1)[1]
            correct += pred.eq(target).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
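The manual-tensor version above makes every step explicit, but the same 784 → 200 → 200 → 10 network is more commonly written with nn.Sequential, which registers its own parameters and applies reasonable default initialization. A minimal equivalent sketch (my restatement, not code from the original article):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 10),               # raw logits, to be consumed by CrossEntropyLoss
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Here model(data.view(-1, 784)) would replace the hand-written forward(data) call.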