Multi-class Classification
Softmax Classifier
In a binary classification problem the dataset has only two labels, 0 and 1. The MNIST (handwritten digit recognition) dataset has 10 classes, and treating each output as an independent binary problem can produce contradictions, e.g. $\bar{y}_1 = 0.8$, $\bar{y}_2 = 0.8$, $\bar{y}_3 = 0.9$, because there is no mutual suppression between the outputs.
We want the outputs of the network to compete with one another: all probabilities should sum to 1, and every probability should be greater than 0. Softmax guarantees both properties.
The softmax layer computes, for raw outputs (logits) $z_0, \ldots, z_{K-1}$:
$$P(y = i) = \frac{e^{z_i}}{\sum_{j=0}^{K-1} e^{z_j}}, \quad i \in \{0, \ldots, K-1\}$$
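To see both properties concretely, here is a minimal check on the contradictory outputs from above (illustrative values):

import torch

z = torch.tensor([[0.8, 0.8, 0.9]])   # the contradictory raw outputs from above
p = torch.softmax(z, dim=1)           # softmax forces the outputs to compete
print(p)          # every probability > 0
print(p.sum())    # probabilities sum to 1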
Example:
There are three samples and two sets of predictions. Y holds the index of the true class for each sample, that is, the position of the 1 in the corresponding one-hot vector.
The first set of predictions matches Y well, so its loss is small; the second set clearly does not, so its loss is large.
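A sketch of this comparison using PyTorch's CrossEntropyLoss (introduced below); the prediction values are illustrative, chosen so the first set agrees with Y and the second does not:

import torch

criterion = torch.nn.CrossEntropyLoss()
Y = torch.LongTensor([2, 0, 1])   # true class indices of the three samples

# first set: the largest logit of each row sits at the index given by Y
Y_pred1 = torch.Tensor([[0.1, 0.2, 0.9],
                        [1.1, 0.1, 0.2],
                        [0.2, 2.1, 0.1]])
# second set: the largest logits land on the wrong classes
Y_pred2 = torch.Tensor([[0.8, 0.2, 0.3],
                        [0.2, 0.3, 0.5],
                        [0.2, 0.2, 0.5]])

print('Batch Loss1 =', criterion(Y_pred1, Y).item())   # small loss
print('Batch Loss2 =', criterion(Y_pred2, Y).item())   # large loss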
Exercise 1:
Read the documentation to understand the difference between CrossEntropyLoss and NLLLoss:
https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss
https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss
Code implementation:
import numpy as np

y = np.array([1, 0, 0])          # one-hot target: class 0
z = np.array([0.2, 0.1, -0.1])   # raw logits
y_pred = np.exp(z) / np.exp(z).sum()   # softmax
loss = (-y * np.log(y_pred)).sum()     # cross-entropy
print(loss)
NLLLoss
Code implementation:
import torch
import torch.nn as nn

y = torch.LongTensor([0])              # target class index
z = torch.Tensor([[0.2, 0.1, -0.1]])   # logits, shape (batch, num_classes)
# NLLLoss expects log-probabilities, so apply softmax and then log manually
softmax = nn.Softmax(dim=1)
softmax_result = softmax(z)
log_result = torch.log(softmax_result)
criterion = nn.NLLLoss()
loss = criterion(log_result, y)
print(loss)
CrossEntropyLoss
In PyTorch, CrossEntropyLoss() already includes the softmax step, so do not apply an activation (nonlinear transform) to the last layer of the network; pass the raw logits straight to the loss.
In short, CrossEntropyLoss fuses the softmax + log + NLLLoss steps we performed above into a single call:
Code implementation:
import torch

y = torch.LongTensor([0])              # target class index
z = torch.Tensor([[0.2, 0.1, -0.1]])   # raw logits: no softmax beforehand
criterion = torch.nn.CrossEntropyLoss()
loss = criterion(z, y)
print(loss)
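As a sanity check for Exercise 1, this sketch verifies that CrossEntropyLoss on raw logits equals NLLLoss on log-probabilities, i.e., CrossEntropyLoss = LogSoftmax + NLLLoss:

import torch
import torch.nn as nn
import torch.nn.functional as F

y = torch.LongTensor([0])
z = torch.Tensor([[0.2, 0.1, -0.1]])
ce = nn.CrossEntropyLoss()(z, y)                 # expects raw logits
nll = nn.NLLLoss()(F.log_softmax(z, dim=1), y)   # expects log-probabilities
print(torch.allclose(ce, nll))                   # True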
Application to the MNIST Dataset
Implementation steps:
- Prepare dataset
- Design model using Class
- Construct loss and optimizer
- Training cycle + Test (forward, backward, update)
Neural networks prefer small input values, ideally within [-1, 1] and roughly following a normal distribution. The transform therefore first converts each raw 28×28 image (every pixel an integer from 0 to 255) into an image tensor whose pixel values lie in [0, 1] and whose shape is 1×28×28. It then applies
transforms.Normalize((0.1307, ), (0.3081, ))
whose two arguments are the mean and the standard deviation, computed over the entire MNIST dataset.
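For reference, a minimal sketch of how these statistics can be reproduced (assuming the dataset is already available under dataset/mnist):

import torch
from torchvision import datasets, transforms

train_set = datasets.MNIST(root='dataset/mnist', train=True, download=True,
                           transform=transforms.ToTensor())
# stack all 60000 images into one (60000, 1, 28, 28) tensor
data = torch.stack([img for img, _ in train_set])
print(data.mean().item(), data.std().item())   # ≈ 0.1307, ≈ 0.3081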
Code implementation:
import torch
import torch.nn.functional as F   # relu activation
import torch.optim as optim       # optimizers
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision import datasets

# 1. Prepare dataset
batch_size = 64
# When loading images, convert pixels to an image tensor
# (named `transform`, not `transforms`, to avoid shadowing the module)
transform = transforms.Compose([
    transforms.ToTensor(),
    # mean and std => shift towards a standard normal distribution
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST(root='dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)
train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)
test_dataset = datasets.MNIST(root='dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(test_dataset,
                         shuffle=False,
                         batch_size=batch_size)
# 2. Design model
class Net(torch.nn.Module):
    def __init__(self):
        # Python 2 style: super(Net, self).__init__()
        # Python 3 allows the shorter form:
        super().__init__()
        self.linear1 = torch.nn.Linear(784, 512)
        self.linear2 = torch.nn.Linear(512, 256)
        self.linear3 = torch.nn.Linear(256, 128)
        self.linear4 = torch.nn.Linear(128, 64)
        self.linear5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(-1, 784)   # flatten (N, 1, 28, 28) into (N, 784)
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = F.relu(self.linear3(x))
        x = F.relu(self.linear4(x))
        # no activation on the last layer: CrossEntropyLoss applies softmax itself
        return self.linear5(x)

model = Net()
# 3. Construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.05)

# 4. Training + testing
def train(epoch):
    running_loss = 0.0
    for batch_idx, (inputs, target) in enumerate(train_loader, 0):
        optimizer.zero_grad()
        # forward + backward + update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # 60000 / 64 ≈ 938 batches per epoch; report the average loss every 300
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss:%.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():   # no gradients needed during evaluation
        for images, labels in test_loader:
            outputs = model(images)
            # index of the maximum value along dim 1 (the class dimension);
            # _ is a placeholder for the max values themselves, which we don't need
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)   # labels has shape (N,); size(0) is the batch size N
            correct += (predicted == labels).sum().item()
    print('Accuracy on test set:%d %%' % (100 * correct / total))

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
Running this, the loss keeps dropping and the accuracy reaches about 97%, after which it stops improving. The reason is that a fully connected network ignores local structure in the image: it connects every pixel to every unit, so spatially local information does not get the weight it deserves.
If we extracted features first and then trained the classifier, the results would likely be better.
Hand-crafted feature extraction: FFT (Fourier transform), wavelet transform
Automatic feature extraction: CNN
Exercise 2:
https://www.kaggle.com/competitions/otto-group-product-classification-challenge/data
Code implementation:
import numpy as np
import torch
import torch.optim as optim   # optimizers
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import pandas as pd

# Data preprocessing:
# map class labels to integer ids so the cross-entropy can be computed
def labelsId(labels):
    target_id = []
    # ordered list of all target labels; a label's position is its id
    target_labels = ['Class_1', 'Class_2', 'Class_3', 'Class_4', 'Class_5', 'Class_6', 'Class_7', 'Class_8', 'Class_9']
    for label in labels:
        target_id.append(target_labels.index(label))
    return target_id
# 1. Prepare dataset
class OttogroupDataset(Dataset):
    def __init__(self, filepath):
        data = pd.read_csv(filepath)
        labels = data['target']
        self.len = data.shape[0]
        # split features and labels: [:, 1:-1] drops the 'id' and 'target' columns
        self.x_data = torch.tensor(np.array(data)[:, 1:-1].astype(float))
        self.y_data = labelsId(labels)

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.len

train_dataset = OttogroupDataset('dataset/kaggle/otto-group-product-classification-challenge/train.csv')
train_loader = DataLoader(dataset=train_dataset,
                          shuffle=True,
                          batch_size=64,
                          num_workers=0)
# 2. Design model
class Net(torch.nn.Module):
    def __init__(self):
        # Python 2 style: super(Net, self).__init__()
        # Python 3 allows the shorter form:
        super().__init__()
        self.linear1 = torch.nn.Linear(93, 64)
        self.linear2 = torch.nn.Linear(64, 32)
        self.linear3 = torch.nn.Linear(32, 16)
        self.linear4 = torch.nn.Linear(16, 9)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        x = self.relu(self.linear3(x))
        # no activation on the last layer: CrossEntropyLoss applies softmax itself
        return self.linear4(x)

    # prediction helper
    def predict(self, x):
        with torch.no_grad():
            x = self.forward(x)   # reuse forward: the last layer stays unactivated
            _, predicted = torch.max(x, dim=1)
            # convert predicted class ids to one-hot columns for the submission file
            y = pd.get_dummies(predicted.numpy())
            print(y.shape)
        return y

model = Net()
# 3. Construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# 4. Training
def train(epoch):
    running_loss = 0.0
    for batch_idx, (inputs, target) in enumerate(train_loader, 0):
        optimizer.zero_grad()
        inputs = inputs.float()   # features were read as float64; the model expects float32
        # forward + backward + update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss:%.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def predict_save():
    test_data = pd.read_csv('dataset/kaggle/otto-group-product-classification-challenge/test.csv')
    # convert the DataFrame to an array; [:, 1:] keeps everything after the 'id' column
    test_inputs = torch.tensor(np.array(test_data)[:, 1:].astype(float))
    out = model.predict(test_inputs.float())   # cast to float32 before predicting
    print(out.shape)
    # attach the class names as column labels
    labels = ['Class_1', 'Class_2', 'Class_3', 'Class_4', 'Class_5', 'Class_6', 'Class_7', 'Class_8', 'Class_9']
    out.columns = labels
    # insert the 'id' column at the front
    out.insert(0, 'id', test_data['id'])
    out.to_csv('my_predict.csv', index=False)

if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
    predict_save()
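One fragility worth noting in predict(): pd.get_dummies only creates columns for classes that actually appear in the predictions, so out.columns = labels would fail if some class is never predicted. A possible fix (my own suggestion, not part of the original notes) is to reindex to all 9 class ids:

import pandas as pd
import torch

# illustrative predictions in which class ids 3..8 never occur
predicted = torch.tensor([0, 1, 2, 1, 0])
one_hot = pd.get_dummies(predicted.numpy()).reindex(columns=range(9), fill_value=0)
print(one_hot.shape)   # (5, 9): all 9 columns are present regardless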