loss.backward()
optimizer.step()
These two statements only have an effect when used as a pair. The first one, loss.backward(), backpropagates the error and computes the gradients of the loss with respect to the model's weights W and biases b; it does not change the parameters themselves. Only after backpropagation has run do you have a complete set of gradients, so optimizer.step() must follow immediately to actually apply those gradients and update the weights and biases.
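A minimal sketch (my own, not from the answer) that makes the division of labour visible: backward() only fills in param.grad, and step() is what actually changes the parameter value.

import torch

w = torch.tensor([2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w * 3.0 - 1.0) ** 2    # some differentiable function of w
print(w.grad)                  # None: nothing computed yet

loss.backward()                # fills w.grad with d(loss)/dw = 30
print(w.grad)                  # tensor([30.])
print(w)                       # tensor([2.], requires_grad=True) -- unchanged

optimizer.step()               # w <- w - lr * w.grad
print(w)                       # tensor([-1.], requires_grad=True)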
Full code:
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch
            # optional debugging output:
            # print("batch_X.shape=", batch_X.shape)
            # print("batch_y.shape=", batch_y.shape)
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)
            # clear the gradients accumulated in the previous step
            optimizer.zero_grad()
            # get predictions from the model
            y_pred = model(batch_X)
            # compute the loss and perform backprop
            loss = loss_fn(y_pred, batch_y)
            loss.backward()
            # apply the computed gradients to the parameters
            optimizer.step()
            total_loss += loss.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
Author: 有糖吃可好
Link: https://www.zhihu.com/question/309807670/answer/580757312
Source: Zhihu
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
In PyTorch, backpropagation (i.e. x.backward()) is carried out by the autograd engine. For autograd to work, it has to know which mathematical operations x has gone through; only then can it compute the corresponding gradient for each operation. So how are the operations performed on x recorded? The answer is the Tensor or Variable (PyTorch 0.4.0 merged the two, so below I simply say Tensor): a Tensor has an attribute grad_fn that records the operation that produced it. In short, if you want to backpropagate through a variable, it has to be a Tensor. So you only need to change your newLoss to:
class newLoss(nn.Module):
    def __init__(self):
        super(newLoss, self).__init__()

    def forward(self, output, gt):
        # accumulate the loss in a tensor so that backward() can be called on it
        loss = torch.zeros(1)
        for row_out, row_gt in zip(output, gt):
            for pixel_out, pixel_gt in zip(row_out, row_gt):
                # "something_pixelwise" is the answer's placeholder for the per-pixel term
                loss += torch.tensor(something_pixelwise, requires_grad=True)
        return loss
and it will work.
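A small illustration of the grad_fn attribute mentioned above (my own example, not from the answer): every operation performed on a tensor that requires gradients is recorded, and autograd walks these records backwards.

import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * 3        # multiplication is recorded
z = y + 1        # addition is recorded

print(x.grad_fn) # None: x is a leaf tensor created by the user
print(y.grad_fn) # <MulBackward0 object ...>
print(z.grad_fn) # <AddBackward0 object ...>

z.backward()     # autograd follows the grad_fn chain back to x
print(x.grad)    # tensor([3.]) since dz/dx = 3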
Answer by CS/cv小白:
I happened to be writing a DiceLoss myself; honestly there are still parts I don't fully understand, so I'm putting the code here as a record. Comments and discussion are welcome~
The imports:
import torch
import torch.nn.functional as F
from torch.nn.modules.loss import _Loss
from torch.autograd import Function
Define a custom DiceLoss class; note that it has to inherit from torch.nn.Module:
class DiceLoss(torch.nn.Module):
    def __init__(self):
        super(DiceLoss, self).__init__()

    def forward(self, input, target):
        # minimizing the negative Dice coefficient maximizes the overlap with the target
        return -dice_coef(input, target)
def dice_coef(input, target):
    smooth = 1  # keeps the ratio well-defined when both tensors sum to zero
    input_flat = input.view(-1)
    target_flat = target.view(-1)
    intersection = input_flat * target_flat
    return (2 * intersection.sum() + smooth) / (input_flat.sum() + target_flat.sum() + smooth)
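The quantity being computed is the (smoothed) Dice coefficient 2·|X∩Y| / (|X| + |Y|). A quick hand-checked sanity test with toy numbers of my own, assuming the dice_coef defined above is in scope:

import torch

pred = torch.tensor([1., 0., 1., 0.])
target = torch.tensor([1., 1., 0., 0.])

# intersection = 1, pred.sum() = 2, target.sum() = 2, smooth = 1
# => (2 * 1 + 1) / (2 + 2 + 1) = 0.6
print(dice_coef(pred, target))  # tensor(0.6000)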
Applying the loss:
if __name__ == '__main__':
    torch.set_grad_enabled(True)
    x = torch.tensor([1., 1., 1., 1.], requires_grad=False)
    w = torch.tensor([1.], requires_grad=True)
    b = torch.tensor([1.], requires_grad=True)
    target = torch.tensor([1., 0., 1., 0.], requires_grad=False)
    for i in range(4):
        y = w * x + b
        # DiceLoss has no parameters, so .cuda() is effectively a no-op here
        diceloss = DiceLoss().cuda()
        loss = diceloss(y, target)
        # note: the optimizer is re-created every iteration and zero_grad() is never
        # called, so this is only a minimal demonstration of backward()/step()
        optimizer = torch.optim.Adam([w, b], lr=0.001)
        loss.backward()
        optimizer.step()
        print(loss)
        print(w, b)
Running this gives the following output (the loss is the negative Dice coefficient, so values closer to -1 mean better overlap between prediction and target):
tensor(-0.8182, grad_fn=<NegBackward>)
tensor([1.0010], requires_grad=True) tensor([1.0010], requires_grad=True)
tensor(-0.8183, grad_fn=<NegBackward>)
tensor([1.0020], requires_grad=True) tensor([1.0020], requires_grad=True)
tensor(-0.8184, grad_fn=<NegBackward>)
tensor([1.0030], requires_grad=True) tensor([1.0030], requires_grad=True)
tensor(-0.8186, grad_fn=<NegBackward>)
tensor([1.0040], requires_grad=True) tensor([1.0040], requires_grad=True)