cross_entropy_loss: Cross-Entropy Loss
The information content of an event is negatively correlated with its probability: the more certain an event is (the higher its probability), the less information it carries; the more uncertain it is, the more information it carries.
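One standard way to make this precise (a supplementary note, not stated in the original) is the self-information of an event $x$:

$I(x) = -\log P(x)$

A certain event ($P(x)=1$) carries zero information, while rarer events carry more.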
The Shannon entropy is

$H(P) = -\sum_{i=1}^{n} P_i \log P_i$
The cross entropy between P and Q is then

$H(P,Q) = -\sum_{i=1}^{n} P_i \log Q_i$
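As a quick numerical sketch (the distributions below are illustrative assumptions, not from the original), the cross entropy can be computed directly from this definition; it is smallest when Q matches P:

import math

P = [0.7, 0.2, 0.1]   # true distribution
Q = [0.5, 0.3, 0.2]   # predicted distribution

# H(P, Q) = -sum_i P_i * log(Q_i)
H_PQ = -sum(p * math.log(q) for p, q in zip(P, Q))
# H(P, P) is the Shannon entropy of P, which lower-bounds H(P, Q)
H_PP = -sum(p * math.log(p) for p in P)

print(H_PQ, H_PP)     # H(P, Q) >= H(P, P)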
With P denoting the true probability distribution and Q the predicted distribution, the cross-entropy loss follows as

$L = -\sum_{i=1}^{C} x_i \log y_i$

where $C$ is the number of classes, $x_i$ is the true (one-hot) label for class $i$, and $y_i$ is the predicted probability for class $i$.
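To connect this formula to PyTorch, here is a minimal sketch (the logits and target below are illustrative assumptions) showing that computing $-\sum_i x_i \log y_i$ by hand from a softmax over raw scores matches nn.CrossEntropyLoss, which applies log-softmax internally:

import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 0.5, -1.0])        # raw scores for C = 3 classes
target = torch.tensor(1)                       # index of the true class

y = F.softmax(logits, dim=0)                   # predicted probabilities y_i
x = F.one_hot(target, num_classes=3).float()   # one-hot true labels x_i

manual = -(x * torch.log(y)).sum()             # L = -sum_i x_i * log(y_i)
builtin = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

print(manual.item(), builtin.item())           # the two values agree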
An example of training a model with the cross-entropy loss function:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Define the model; the last layer outputs raw logits for 2 classes
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = nn.functional.relu(x)
        x = self.fc2(x)
        return x

model = MyModel()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Dummy data so the example runs standalone: 100 samples, 10 features, 2 classes
train_loader = DataLoader(
    TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))),
    batch_size=16, shuffle=True)
num_epochs = 5
log_interval = 2

# Training loop
for epoch in range(num_epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)              # raw logits, shape (batch_size, 2)
        loss = criterion(output, target)  # cross-entropy against class indices
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
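Note that MyModel.forward returns raw logits rather than probabilities: nn.CrossEntropyLoss applies log-softmax internally, so no softmax layer is needed during training. At inference time, probabilities can be recovered explicitly, as in this short sketch (the sample input is a placeholder assumption):

# Inference: apply softmax explicitly to turn logits into class probabilities
model.eval()
with torch.no_grad():
    sample = torch.randn(1, 10)                  # one example with 10 input features
    probs = torch.softmax(model(sample), dim=1)  # shape (1, 2), each row sums to 1
    pred = probs.argmax(dim=1)                   # predicted class index
    print(probs, pred)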