1. nn.L1Loss
Mean of the absolute errors between prediction and target.
Note: the tensors must be of type double or float.
import torch.nn as nn
import torch
import numpy as np
x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()  # target
pre = torch.ones([3]).double()        # prediction
loss = nn.L1Loss()
print(loss(pre, label))  # 1.3333  (losses expect (input, target) order)
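To check the result by hand, the same mean absolute error can be computed with plain NumPy (a sketch using the same values as above):

```python
import numpy as np

pre = np.ones(3)                    # predictions
label = np.array([1.0, 2.0, 4.0])   # targets
l1 = np.mean(np.abs(pre - label))   # (|1-1| + |1-2| + |1-4|) / 3
print(l1)  # 1.3333...
```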
2. nn.SmoothL1Loss
Piecewise error: squared (0.5x^2) when the absolute error is below 1, linear (|x| - 0.5) otherwise.
import torch.nn as nn
import torch
import numpy as np
x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()  # target
pre = torch.ones([3]).double()        # prediction
loss = nn.SmoothL1Loss()
print(loss(pre, label))  # 1
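The piecewise rule (0.5x^2 for |x| < 1, |x| - 0.5 otherwise, with the default beta of 1) can be reproduced in NumPy to confirm the value:

```python
import numpy as np

d = np.abs(np.ones(3) - np.array([1.0, 2.0, 4.0]))  # absolute errors [0, 1, 3]
elem = np.where(d < 1, 0.5 * d ** 2, d - 0.5)       # piecewise: [0, 0.5, 2.5]
print(elem.mean())  # 1.0
```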
3. nn.MSELoss
Mean of the squared errors between prediction and target.
import torch.nn as nn
import torch
import numpy as np
x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()  # target
pre = torch.ones([3]).double()        # prediction
loss = nn.MSELoss()
print(loss(pre, label))  # 3.3333
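The same value falls out of a one-line NumPy check, the mean of the squared differences:

```python
import numpy as np

pre = np.ones(3)
label = np.array([1.0, 2.0, 4.0])
mse = np.mean((pre - label) ** 2)  # (0 + 1 + 9) / 3
print(mse)  # 3.3333...
```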
4. nn.NLLLoss()
Negative log likelihood loss.
import torch.nn as nn
import torch
import numpy as np
x = np.ones([2, 3])  # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)       # per-class (log-)scores, N x C
label = torch.ones([2]).long()  # class index per sample
loss = nn.NLLLoss()
print(loss(pre, label))  # -2
Note: the input must be of shape N x C, and the target must be of type long().
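NLLLoss simply picks out, for each row, the entry at the target index, negates it, and averages. A NumPy sketch with the same input reproduces the -2:

```python
import numpy as np

inp = np.ones([2, 3])       # N x C = 2 x 3, treated as log-probabilities
inp[0][1] = 3
target = np.array([1, 1])   # class index per sample
nll = -np.mean(inp[np.arange(2), target])  # -(3 + 1) / 2
print(nll)  # -2.0
```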
5. nn.CrossEntropyLoss()
Cross-entropy loss.
import torch.nn as nn
import torch
import numpy as np
x = np.ones([2, 3])  # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)       # raw scores (logits), N x C
label = torch.ones([2]).long()  # class index per sample
loss = nn.CrossEntropyLoss()
print(loss(pre, label))  # 0.6691
Manual verification:
Note: nn.CrossEntropyLoss() is equivalent to applying nn.LogSoftmax() followed by nn.NLLLoss().
1. nn.LogSoftmax()
import torch.nn as nn
import torch
import numpy as np
x = np.ones([2, 3])  # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)
m = nn.LogSoftmax(dim=1)        # normalize over the class dimension
label = torch.ones([2]).long()
loss = nn.NLLLoss()
print(loss(m(pre), label))  # 0.6691
2. nn.Softmax()
import torch.nn as nn
import torch
import numpy as np
x = np.ones([2, 3])  # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)
m = nn.Softmax(dim=1)  # each row sums to 1
print(m(pre))
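Putting the pieces together, the cross-entropy value from section 5 can be reproduced in plain NumPy: a softmax over each row, then the negative mean log-probability at the target indices:

```python
import numpy as np

x = np.ones([2, 3])  # N x C = 2 x 3
x[0][1] = 3
target = np.array([1, 1])
e = np.exp(x)
p = e / e.sum(axis=1, keepdims=True)  # row-wise softmax
ce = -np.mean(np.log(p[np.arange(2), target]))
print(round(ce, 4))  # 0.6691
```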