Loss Calculation

for epoch in range(n_epochs):
    tr_loss = 0.0  # reset the running loss at the start of each epoch
    for step, batch in enumerate(trnloader):
        loss = criterion(model(batch[0]), batch[1])  # assumed forward pass; the original snippet omits it
        tr_loss += loss.item()
    # len(trnloader) is the number of mini-batches per epoch
    epoch_loss = tr_loss / len(trnloader)


> Link: https://www.zhihu.com/question/335724798/answer/1369014657
for epoch in range(epochs):
    loss = 0
    for batch_features, _ in train_loader:
        # reshape mini-batch data to [N, 784] matrix
        # load it to the active device
        batch_features = batch_features.view(-1, 784).to(device)
        
        # reset the gradients back to zero
        # PyTorch accumulates gradients on subsequent backward passes
        optimizer.zero_grad()
        
        # compute reconstructions
        outputs = model(batch_features)
        
        # compute training reconstruction loss
        train_loss = criterion(outputs, batch_features)
        
        # compute accumulated gradients
        train_loss.backward()
        
        # perform parameter update based on current gradients
        optimizer.step()
        
        # add the mini-batch training loss to epoch loss
        loss += train_loss.item()
    
    # compute the epoch training loss
    loss = loss / len(train_loader)
    
    # display the epoch training loss
    print("epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs, loss))

From https://medium.com/pytorch/implementing-an-autoencoder-in-pytorch-19baa22647d1
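
The loop above assumes that model, criterion, optimizer, device, train_loader, and epochs are already defined. A minimal setup sketch is given below; the 784 -> 128 -> 784 autoencoder and the hyperparameters are illustrative assumptions, not necessarily what the article uses:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative autoencoder; the article's architecture may differ.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 784),
).to(device)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# MNIST images are 28x28 = 784 pixels, matching the view(-1, 784) reshape above.
train_dataset = datasets.MNIST(root="data", train=True, download=True,
                               transform=transforms.ToTensor())
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
epochs = 20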

        # statistics: loss.item() is the batch mean, so scale by the batch size
        running_loss += loss.item() * inputs.size(0)
        scheduler.step()

    # dataset_sizes is the total number of samples seen in the epoch
    epoch_loss = running_loss / dataset_sizes
    print(f'Loss: {epoch_loss:.4f}')

https://androidkt.com/how-to-calculate-running-loss-using-loss-item-in-pytorch/
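
Why scale by inputs.size(0)? With the default reduction='mean', loss.item() is the per-sample average for that mini-batch; multiplying by the batch size turns it back into a batch sum, so dividing the accumulated total by the dataset size gives the exact per-sample epoch loss even when the last batch is smaller than the rest. A quick numeric illustration (the batch sizes here are hypothetical, not from the article):

# Two batches of unequal size: 100 samples with mean loss 1.0, then 10 with mean loss 2.0.
naive = (1.0 + 2.0) / 2                   # 1.50 -- average of batch means, biased toward the small batch
weighted = (1.0 * 100 + 2.0 * 10) / 110   # ~1.09 -- true per-sample mean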

The default reduction in PyTorch loss functions calculates the mean
loss value over the batch. If you set reduction='sum', you get the
summed loss instead, the same total the snippets above recover by
scaling the mean back up. If you need the unreduced loss for each
sample, disable the reduction via reduction='none'.

https://discuss.pytorch.org/t/compute-loss-for-each-batch/56673
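
A short check of the three reduction modes, as a minimal sketch with random tensors (nn.MSELoss is used only as an example; its 'mean' averages over all elements, not just samples):

import torch
import torch.nn as nn

outputs = torch.randn(8, 10)
targets = torch.randn(8, 10)

mean_loss = nn.MSELoss()(outputs, targets)                 # default reduction='mean'
sum_loss = nn.MSELoss(reduction='sum')(outputs, targets)   # total over all elements
per_elem = nn.MSELoss(reduction='none')(outputs, targets)  # shape [8, 10], one value per element

assert torch.isclose(mean_loss * outputs.numel(), sum_loss)  # mean * element count == sum
assert torch.isclose(per_elem.mean(), mean_loss)             # averaging 'none' recovers 'mean'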
