Loss function
The smaller the loss, the better.
**What a loss function does:**
- measures the gap between the actual output and the target
- gives us a basis for updating the network's parameters (via backpropagation)
For the loss module's input and output shapes, N stands for the number of samples, i.e. the batch size.
The reshape function: a shape of (1, 2, 3, 4) means batch_size = 1, channels = 2, 3 rows, 4 columns.
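A tiny sketch of this shape convention (the tensor values here are illustrative, not from the original notes):

```python
import torch

t = torch.arange(24, dtype=torch.float32)  # 24 elements
t = torch.reshape(t, (1, 2, 3, 4))         # batch_size=1, channels=2, 3 rows, 4 columns
print(t.shape)  # torch.Size([1, 2, 3, 4])
```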
L1Loss is simple: take the absolute difference of each element, then average.
MSELoss is also simple: the mean of the squared differences (it feels like a variance).
CrossEntropyLoss (cross-entropy) is harder.
With CrossEntropyLoss, say for a three-class problem, the smaller the loss value, the better.
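For a single sample with raw class scores x and target class index `class`, PyTorch's CrossEntropyLoss computes `-x[class] + log(Σⱼ exp(x[j]))`, i.e. LogSoftmax followed by negative log-likelihood, so a confident, correct prediction gives a small loss.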
Two pieces of code below; backpropagation is left for the end.
```python
import torch
from torch import nn
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
# dtype=torch.float32 because L1Loss only accepts floating-point inputs
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

loss = L1Loss()
# with loss = L1Loss(reduction='sum') the result below would be 2 instead
result = loss(inputs, targets)

loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)

print(result)
# tensor(0.6667), by hand: (|1-1| + |2-2| + |3-5|) / 3 = 2/3
print(result_mse)
# tensor(1.3333), by hand: ((1-1)^2 + (2-2)^2 + (3-5)^2) / 3 = 4/3

x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])         # target class index
x = torch.reshape(x, (1, 3))  # batch of 1 sample, 3 class scores
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)  # tensor(1.1019)
```
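The printed value can be checked by hand with the formula above, reusing `x` from the block just shown (a quick sketch; `manual` is my own name, not from the original):

```python
# -x[class] + log(sum_j exp(x[j])) with class = 1
manual = -x[0, 1] + torch.log(torch.exp(x[0]).sum())
print(manual)  # tensor(1.1019)
```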
Getting a feel for CrossEntropyLoss with a dataset
```python
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("./神经网络", train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1, drop_last=True)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    print(targets)          # the true class index
    outputs = tudui(imgs)
    print(outputs)          # 10 raw class scores
    result_loss = loss(outputs, targets)
    result_loss.backward()  # computes gradients for every parameter
    print(result_loss)
```
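In this loop, outputs has shape [1, 10] (one raw score per CIFAR-10 class) and targets has shape [1] (a single class index), which is exactly the (input, target) layout CrossEntropyLoss expects.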
Backpropagation (backward)
By computing the loss, we can tune the parameters of every "convolution kernel".
Gradient (grad): every parameter has a gradient, and optimization proceeds along the gradients, e.g. with gradient descent.
backward is called on the computed loss.
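A minimal sketch of what happens after backward(), reusing Tudui, dataloader, nn, and torch from the code above; the SGD optimizer here is my own illustration, not part of the original notes:

```python
tudui = Tudui()
loss = nn.CrossEntropyLoss()
optim = torch.optim.SGD(tudui.parameters(), lr=0.01)  # plain gradient descent

imgs, targets = next(iter(dataloader))
outputs = tudui(imgs)
result_loss = loss(outputs, targets)

optim.zero_grad()       # clear any old gradients
result_loss.backward()  # fill in .grad for every parameter
print(tudui.model1[0].weight.grad.shape)  # torch.Size([32, 3, 5, 5])
optim.step()            # take one gradient-descent step using those gradients
```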