Building Neural Networks with PyTorch (series)
Chapter 3, Section 2: Neural Network
1. Cross Entropy
KL divergence measures how similar two distributions p and q are. Cross entropy decomposes as H(p, q) = H(p) + D_KL(p ∥ q), and with one-hot encoded labels the entropy H(p) is 0, so minimizing the cross entropy amounts to minimizing the KL divergence between p and q: the smaller the cross entropy, the more similar the two distributions, and the closer the predictions are to the ground-truth labels.
Example
Binary Classification:
In PyTorch:
cross_entropy = softmax + log + nll_loss
To avoid numerical instability, use cross_entropy directly rather than composing softmax and log yourself.
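As a sanity check, here is a minimal sketch (tensor shapes and values are illustrative, not from the original text) showing that F.cross_entropy matches the explicit log_softmax + nll_loss pipeline:

import torch
import torch.nn.functional as F

# logits for a batch of 4 samples over 10 classes
logits = torch.randn(4, 10)
target = torch.tensor([1, 0, 3, 9])

# fused, numerically stable version
loss_fused = F.cross_entropy(logits, target)

# explicit two-step version: log_softmax followed by nll_loss
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), target)

print(loss_fused.item(), loss_manual.item())  # the two values match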
2. Multi-class Classification Example
Load the data:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

batch_size = 200
learning_rate = 0.01
epochs = 10

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)
Initialize the parameters. Here we use Kaiming initialization, which reaches about 95% accuracy; if the weights are initialized poorly, the gradients vanish and the loss simply stops decreasing.
# The weight and bias tensors must exist (with requires_grad=True) before initialization
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)
Training loop:
def forward(x):
    x = x @ w1.t() + b1
    x = F.relu(x)
    x = x @ w2.t() + b2
    x = F.relu(x)
    x = x @ w3.t() + b3
    x = F.relu(x)
    return x

optimizer = optim.SGD([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        # print(w1.grad.norm(), w2.grad.norm())
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
3. Fully Connected Layer
Use nn.Linear:
import torch.nn as nn
Use the ReLU activation function; a short sketch combining nn.Linear and ReLU follows.
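A minimal sketch (the layer sizes and batch size are assumptions for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(784, 200)   # maps 784-dim inputs to 200 dims
x = torch.randn(4, 784)       # a batch of 4 flattened 28x28 images
x = layer(x)                  # shape: [4, 200]
x = F.relu(x, inplace=True)   # ReLU activation, applied in place
print(x.shape)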
Creating a new neural network module:
Step 1: initialization

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
Step 2:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )
Step 3:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)
        return x
Train:

net = MLP()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        # print(w1.grad.norm(), w2.grad.norm())
        optimizer.step()
4. Activation Functions and GPU Acceleration
Tanh, Sigmoid
ReLU
Leaky ReLU: avoids the zero gradient that plain ReLU has for x < 0
SELU
Softplus
All of these are sketched after this list.
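A minimal comparison sketch of these activations (the input values are illustrative):

import torch
import torch.nn.functional as F

x = torch.linspace(-2., 2., 5)
print(torch.tanh(x))                          # Tanh
print(torch.sigmoid(x))                       # Sigmoid
print(F.relu(x))                              # ReLU: zero gradient for x < 0
print(F.leaky_relu(x, negative_slope=0.02))   # Leaky ReLU: small slope for x < 0
print(F.selu(x))                              # SELU
print(F.softplus(x))                          # Softplus: smooth approximation of ReLU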
Using GPU acceleration:
device = torch.device('cuda:0')
net = MLP().to(device)
Test:

test_loss = 0
correct = 0
for data, target in test_loader:
    data = data.view(-1, 28 * 28)
    data, target = data.to(device), target.to(device)

    logits = net(data)
    test_loss += criteon(logits, target).item()

    pred = logits.argmax(dim=1)
    correct += pred.eq(target).float().sum().item()

test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
    test_loss, correct, len(test_loader.dataset),
    100. * correct / len(test_loader.dataset)))
5. Visdom Visualization
lines: single trace
lines: multi-traces
visual data
A minimal sketch of each is given after this list.
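The sketch below assumes a running Visdom server (start it with python -m visdom.server) and reuses loss, test_loss, correct, data, and pred from the training and test loops above; global_step is a hypothetical batch counter, and the window names and titles are assumptions:

from visdom import Visdom

viz = Visdom()

# single trace: create a window, then append one point per training step
viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
viz.line([loss.item()], [global_step], win='train_loss', update='append')

# multi-traces: test loss and accuracy as two curves in one window
viz.line([[0., 0.]], [0.], win='test',
         opts=dict(title='test loss & acc.', legend=['loss', 'acc.']))
acc = correct / len(test_loader.dataset)
viz.line([[test_loss, acc]], [global_step], win='test', update='append')

# visual data: display a batch of input images and the predicted labels
viz.images(data.view(-1, 1, 28, 28), win='x')
viz.text(str(pred.detach().cpu().numpy()), win='pred', opts=dict(title='pred'))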
6. Cross Validation and Regularization
Both are ways to avoid overfitting.
Reduce Overfitting
Regularization
Regularization constrains the complexity of the parameters, which helps avoid overfitting. Both common flavors are sketched after this list.
L2-regularization:
L1-regularization:
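A minimal sketch of both flavors, reusing net, criteon, logits, and target from the earlier training loop; the coefficients 0.01 and 0.005 are assumptions:

import torch.optim as optim

# L2-regularization: built into the optimizer via the weight_decay argument
optimizer = optim.SGD(net.parameters(), lr=learning_rate, weight_decay=0.01)

# L1-regularization: add the L1 norm of all parameters to the loss by hand
l1_lambda = 0.005
l1_norm = sum(p.abs().sum() for p in net.parameters())
loss = criteon(logits, target) + l1_lambda * l1_norm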
7. Momentum
Adding momentum lets the gradient updates push through local minima, making it more likely that training reaches a better, closer-to-global optimum.
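In PyTorch, momentum is just an optimizer argument; the value 0.9 below is a common choice, not from the original text:

optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)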
8. Learning Rate Tuning
Learning rate decay
Use a dynamic learning rate.
A scheduler wraps the optimizer and manages the learning rate.
Every call to scheduler.step() reports the monitored loss; if the loss has not improved within a set number of calls, the scheduler multiplies the learning rate by a factor such as 0.5, shrinking the lr. A sketch follows.
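A minimal sketch using ReduceLROnPlateau; the factor and patience values are assumptions, and train_one_epoch is a hypothetical helper standing in for the training loop shown earlier:

from torch.optim.lr_scheduler import ReduceLROnPlateau

optimizer = optim.SGD(net.parameters(), lr=learning_rate)
# mode='min': the monitored quantity (the loss) should decrease
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=10)

for epoch in range(epochs):
    train_loss = train_one_epoch(net, train_loader, optimizer, criteon)  # hypothetical helper
    # after `patience` calls without improvement, lr is multiplied by `factor`
    scheduler.step(train_loss)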
9. Early Stopping and Dropout
Both help avoid overfitting.
Early Stopping, how to do it:
Use the validation set to select parameters.
Monitor validation performance.
Stop at the highest validation performance.
Dropout
Learning less to learn better.
Two dropout APIs, whose probability arguments have opposite meanings:
torch.nn.Dropout(p=dropout_prob)  # p is the probability of dropping a unit
tf.nn.dropout(keep_prob)  # keep_prob is the probability of keeping a unit
During validation, dropout should not be used: all neurons participate. Switch the network to evaluation mode explicitly to get its full performance, as the sketch below shows:
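A minimal sketch (the layer sizes and p=0.5 are assumptions) of dropout together with the train/eval switch:

net_dropped = torch.nn.Sequential(
    torch.nn.Linear(784, 200),
    torch.nn.Dropout(0.5),        # drop 50% of the units during training
    torch.nn.ReLU(inplace=True),
    torch.nn.Linear(200, 10),
)

for epoch in range(epochs):
    net_dropped.train()           # training mode: dropout is active
    # ... run the training step here ...
    net_dropped.eval()            # evaluation mode: dropout disabled, all units used
    # ... run the validation step here ...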