Generally speaking, PyTorch offers two common ways to implement a custom loss:
- Implement it directly with PyTorch's built-in tensor operations. This is the simpler route: every operation stays in the autograd graph, so gradients are computed automatically.
- Implement it with NumPy. This requires wrapping the computation in a custom autograd `Function` and writing the backward (gradient) formula by hand.
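As a minimal sketch of the first approach (the class name and the asymmetric-penalty formula are illustrative, not from the source): a loss built only from PyTorch tensor operations needs no manual gradient, because autograd differentiates every step.

```python
import torch
import torch.nn as nn

class WeightedMSELoss(nn.Module):
    """Illustrative custom loss built purely from PyTorch tensor ops.

    Every operation here is differentiable, so autograd derives the
    backward pass automatically -- no hand-written gradient is needed.
    """
    def __init__(self, weight=2.0):
        super().__init__()
        self.weight = weight  # extra penalty on over-prediction (illustrative)

    def forward(self, pred, target):
        diff = pred - target
        # penalize positive errors `weight` times more than negative ones
        loss = torch.where(diff > 0, self.weight * diff ** 2, diff ** 2)
        return loss.mean()

pred = torch.randn(4, requires_grad=True)
target = torch.zeros(4)
loss = WeightedMSELoss()(pred, target)
loss.backward()  # gradients flow with no extra code
```

After `backward()`, `pred.grad` holds the gradient of the loss with respect to `pred`, computed entirely by autograd.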
Implementation using PyTorch's built-in functions
- Custom loss function in PyTorch
- numpy_extensions_tutorial
- A-Collection-of-important-tasks-in-pytorch/
Implementation using NumPy functions
The model below comes from an introductory tutorial on network models.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function

# Reference: https://blog.csdn.net/oBrightLamp/article/details/85137756?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-5.control&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-5.control

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # input channels: 1; output channels: 6; 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        # the source is cut off here; the remaining layers and forward()
        # are completed following the standard PyTorch LeNet tutorial
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```
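As a hedged sketch of the second approach (the class name and the choice of MSE are my own, not from the source): when the loss is computed in NumPy, the tensors leave the autograd graph, so we subclass `torch.autograd.Function` and supply the gradient formula ourselves in `backward`.

```python
import torch
from torch.autograd import Function

class NumpyMSELoss(Function):
    """Illustrative MSE loss whose forward pass runs in NumPy.

    Because NumPy ops are invisible to autograd, the gradient must be
    stated by hand: d/dpred mean((pred - target)^2) = 2*(pred - target)/N.
    """
    @staticmethod
    def forward(ctx, pred, target):
        p = pred.detach().cpu().numpy()
        t = target.detach().cpu().numpy()
        ctx.save_for_backward(pred, target)
        return torch.tensor(((p - t) ** 2).mean(), dtype=pred.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors
        grad = 2.0 * (pred - target) / pred.numel()
        # second return is None: no gradient w.r.t. the target
        return grad_output * grad, None

pred = torch.randn(8, requires_grad=True)
target = torch.zeros(8)
loss = NumpyMSELoss.apply(pred, target)  # custom Functions are invoked via .apply
loss.backward()
```

Note the trade-off: the NumPy route gives full control (useful when the loss relies on code that has no PyTorch equivalent), but any error in the hand-written gradient silently corrupts training, so it is worth checking against `torch.autograd.gradcheck`.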