FocalLoss Explained

1 FocalLoss

a. For a detailed explanation of focal loss, see https://zhuanlan.zhihu.com/p/49981234
For the binary-classification FocalLoss, refer to the official PyTorch implementation at https://pytorch.org/vision/stable/generated/torchvision.ops.sigmoid_focal_loss.html?highlight=focal#torchvision.ops.sigmoid_focal_loss. Here we mainly verify it numerically:

import torch
from torchvision.ops import sigmoid_focal_loss

input = torch.tensor([0.1, 0.2])
target = torch.tensor([0, 1])
alpha = 0.25  # torchvision names this parameter alpha
gamma = 2
loss = sigmoid_focal_loss(input.float(), target.float(), alpha=alpha, gamma=gamma, reduction='none')
print(loss)
'''
loss values: tensor([0.1539, 0.0303])
The first element is a negative sample; its computation is
pt = 1 - torch.sigmoid(input[0])
loss_1 = -(1 - alpha) * (1 - pt)**gamma * torch.log(pt)  # 0.1539
The second element is a positive sample; its computation is
pt = torch.sigmoid(input[1])
loss_2 = -alpha * (1 - pt)**gamma * torch.log(pt)  # 0.0303
'''
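To make the comment above directly checkable, here is a minimal runnable sketch (my own, reusing the same input, target, alpha, and gamma) that recomputes the two values by hand and compares them with the library output:

import torch
from torchvision.ops import sigmoid_focal_loss

input = torch.tensor([0.1, 0.2])
target = torch.tensor([0., 1.])
alpha, gamma = 0.25, 2

# negative sample: pt is the probability of the negative side
pt0 = 1 - torch.sigmoid(input[0])
loss_1 = -(1 - alpha) * (1 - pt0) ** gamma * torch.log(pt0)  # 0.1539

# positive sample: pt is the probability of the positive side
pt1 = torch.sigmoid(input[1])
loss_2 = -alpha * (1 - pt1) ** gamma * torch.log(pt1)  # 0.0303

lib = sigmoid_focal_loss(input, target, alpha=alpha, gamma=gamma, reduction='none')
print(torch.allclose(torch.stack([loss_1, loss_2]), lib, atol=1e-6))  # True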

b. For multi-class focal loss, I have not yet found reliable code; existing versions basically use the same weight for every class. To be supplemented later (a tentative sketch follows).
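As a stopgap until reliable code is found, below is a minimal sketch of one common softmax-based formulation. The function softmax_focal_loss is hypothetical (my own, not a reference implementation), and the single shared alpha mirrors the uniform-weight assumption mentioned above:

import torch
import torch.nn.functional as F

def softmax_focal_loss(logits, target, alpha=0.25, gamma=2.0):
    # hypothetical helper: logits is batchsize*N (float),
    # target is batchsize (long class indices)
    logpt = F.log_softmax(logits, dim=1).gather(1, target[:, None]).squeeze(1)
    pt = logpt.exp()  # softmax probability of the true class
    return -alpha * (1 - pt) ** gamma * logpt  # per-sample loss, no reduction

print(softmax_focal_loss(torch.tensor([[1.0, 2.0]]), torch.tensor([1])))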
b1. Here is the binary-style focal loss for the multi-class case: the multi-class problem is converted into per-class binary problems, and focal loss is computed on each (this is how RetinaNet computes it).

'''
Suppose that after the sigmoid there are 4 classes in total (including the
background class, which comes last) and two boxes to classify, so the input
has shape 2*3 (the background column is dropped). target is [3, 2], which
after one-hot encoding becomes [[0,0,0],[0,0,1]].
'''
import torch
import torch.nn.functional as F

prob = torch.tensor([[0.0247, 0.0248, 0.0249],
                     [0.0247, 0.0248, 0.0249]])
targets = torch.tensor([[0., 0., 0.], [0., 0., 1.]])
gamma = 2
alpha = 0.25
ce_loss = F.binary_cross_entropy(prob, targets, reduction="none")
p_t = prob * targets + (1 - prob) * (1 - targets)
loss = ce_loss * ((1 - p_t) ** gamma)

if alpha >= 0:
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * loss

'''
For the first box and the first class (a negative sample), the computation is
pt = 1 - prob[0][0]
loss_0 = (1 - alpha) * (1 - pt)**gamma * (-torch.log(pt))  # 1.1416e-5
The overall procedure follows the computation in a. above.
'''
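Since torchvision's sigmoid_focal_loss takes logits rather than probabilities, the block above can be cross-checked by mapping prob back to logits first; a sketch, assuming all probabilities are strictly between 0 and 1:

import torch
from torchvision.ops import sigmoid_focal_loss

prob = torch.tensor([[0.0247, 0.0248, 0.0249],
                     [0.0247, 0.0248, 0.0249]])
targets = torch.tensor([[0., 0., 0.], [0., 0., 1.]])

logits = torch.log(prob / (1 - prob))  # inverse of sigmoid
loss = sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2, reduction='none')
print(loss[0][0])  # ~1.14e-5, matching the manual value above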

c. For segmentation-based focal loss, refer to https://docs.monai.io/en/stable/_modules/monai/losses/focal_loss.html#FocalLoss. The computation here uses the same weight for every class; again we mainly verify it numerically:

import torch
from monai.losses import FocalLoss

input = torch.tensor([[[[0.1, 0.2], [0.3, 0.4]],
                       [[0.5, 0.6], [0.7, 0.8]],
                       [[0.9, 0.1], [0.2, 0.4]]]])  # shape 1*3*2*2 (batch*num_class*h*w)

target = torch.tensor([[[[1, 0], [0, 1]]]])  # shape 1*1*2*2
weight = 0.25
gamma = 2
# sigmoid probabilities at the first two positions (used in the check below)
pt = torch.exp(input[0, 0, 0, 0]) / (1 + torch.exp(input[0, 0, 0, 0]))
pt1 = torch.exp(input[0, 0, 0, 1]) / (1 + torch.exp(input[0, 0, 0, 1]))
# weight can also be a list whose length equals the number of classes,
# giving a per-class weight; see the source code for details
criterion = FocalLoss(reduction='none', gamma=gamma, weight=weight, to_onehot_y=True)
loss = criterion(input, target)
'''
The corresponding loss is tensor([[[[0.0513, 0.0303],
          [0.0251, 0.0818]],

         [[0.0169, 0.1081],
          [0.1231, 0.0089]],

         [[0.1568, 0.0513],
          [0.0603, 0.0818]]]])
There are 3 class channels, each of size 2*2.
Loss at the first position of the first class (a negative sample):
pt = 1 - torch.exp(input[0,0,0,0])/(1+torch.exp(input[0,0,0,0]))
loss1 = -weight*(1-pt)**gamma*torch.log(pt)  # 0.0513
Loss at the second position of the first class (a positive sample):
pt = torch.exp(input[0,0,0,1])/(1+torch.exp(input[0,0,0,1]))
loss2 = -weight*(1-pt)**gamma*torch.log(pt)  # 0.0303
'''
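The whole 3-class output can also be verified in one vectorized step; a sketch of the same per-element formula, with the one-hot target written out by hand:

import torch

input = torch.tensor([[[[0.1, 0.2], [0.3, 0.4]],
                       [[0.5, 0.6], [0.7, 0.8]],
                       [[0.9, 0.1], [0.2, 0.4]]]])
# one-hot of target [[1,0],[0,1]] over the 3 classes
target_oh = torch.tensor([[[[0., 1.], [1., 0.]],
                           [[1., 0.], [0., 1.]],
                           [[0., 0.], [0., 0.]]]])
weight, gamma = 0.25, 2
p = torch.sigmoid(input)
pt = p * target_oh + (1 - p) * (1 - target_oh)  # probability of the "true" side per element
manual = -weight * (1 - pt) ** gamma * torch.log(pt)
print(manual)  # reproduces the MONAI loss tensor above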

2 Dice Loss

For a detailed explanation of dice loss, see https://zhuanlan.zhihu.com/p/269592183; for a code walkthrough, see https://docs.monai.io/en/stable/_modules/monai/losses/dice.html#DiceLoss.forward

from monai.losses.dice import *  # NOQA (also brings one_hot into scope)
import torch
from monai.losses.dice import DiceLoss

input = torch.tensor([[[[0.1, 0.2], [0.3, 0.4]],
                       [[0.5, 0.6], [0.7, 0.8]],
                       [[0.9, 0.1], [0.2, 0.4]]]])  # input shape 1*3*2*2 (batch*num_class*h*w)
target_idx = torch.tensor([[[1, 0], [0, 1]]])  # label shape 1*2*2, unsqueezed to 1*1*2*2 below
target = one_hot(target_idx[:, None, ...], num_classes=3)  # one-hot form, shape 1*3*2*2
'''
target value:
target = torch.tensor([[[[0., 1.], [1., 0.]],
                        [[1., 0.], [0., 1.]],
                        [[0., 0.], [0., 0.]]]])
'''

criterion = DiceLoss(reduction='none')
loss = criterion(input, target)

'''
The corresponding loss:
loss = tensor([[[[0.6667]],
                [[0.4348]],
                [[1.0000]]]])
How it is computed:
the input has 3 classes, and the loss is computed per class.
The overall formula is loss = 1 - 2*tp/(sum of predicted probabilities + sum of labels),
where tp is the sum of the predicted probabilities at positions whose label is 1.
loss_1 = 1 - 2*(0.2+0.3)/(0.1+0.2+0.3+0.4+2) = 0.6667
loss_2 = 1 - 2*(0.5+0.8)/(0.5+0.6+0.7+0.8+2) = 0.4348
loss_3 = 1 - 2*0/(0.9+0.1+0.2+0.4+0) = 1.0000
'''
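The three per-class values can be reproduced with a short vectorized sketch of the formula above (ignoring MONAI's tiny smoothing terms in the numerator and denominator):

import torch

input = torch.tensor([[[[0.1, 0.2], [0.3, 0.4]],
                       [[0.5, 0.6], [0.7, 0.8]],
                       [[0.9, 0.1], [0.2, 0.4]]]])
target = torch.tensor([[[[0., 1.], [1., 0.]],
                        [[1., 0.], [0., 1.]],
                        [[0., 0.], [0., 0.]]]])
tp = (input * target).sum(dim=(2, 3))                   # per-class sum of probs where label is 1
denom = input.sum(dim=(2, 3)) + target.sum(dim=(2, 3))  # prob sum + label sum
print(1 - 2 * tp / denom)  # tensor([[0.6667, 0.4348, 1.0000]])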

3 CrossEntropyLoss(CELoss)

For CELoss, the common multi-class cross-entropy loss, see the PyTorch docs at https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html. The computation applies softmax to the input and then computes the loss on the result. The input and target formats are:
By default, input has shape batchsize*N (float) and the corresponding target has shape batchsize (long integer). When a target value is not an integer, it has to be cast to an integer, as in the example below:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
input = torch.tensor([[1, 2]]).float()  # input has shape 1*2 (batchsize 1, 2 classes)
target = torch.tensor([1])  # target has shape 1
output = loss(input, target)  # tensor(0.3133)
# the computation is equivalent to:
softmax_input = torch.softmax(input, dim=1)  # tensor([[0.2689, 0.7311]])
output = -torch.log(softmax_input[0][1])  # tensor(0.3133)
# a float target of 1.1 is truncated to 1, since target must be long;
# a value of 2 or more is out of range for 2 classes and raises an error
target1 = torch.tensor([1.1])
output1 = loss(input, target1.long())

When input has shape batchsize*N (float) and the corresponding target also has shape batchsize*N (float), i.e. the input and the label have the same shape, the loss is computed the soft-label way (as in label smoothing):

target2 = torch.softmax(torch.tensor([[2, 3]]).float(), dim=1)  # tensor([[0.2689, 0.7311]])
output2 = loss(input, target2)  # tensor(0.5822)
# the computation is equivalent to:
output3 = torch.sum(-torch.log(softmax_input) * target2)  # tensor(0.5822)

Note: regarding the reduction parameter, CELoss reduces over the batch. Whether reduction is none, mean, or sum, each sample in the minibatch first gets a loss summed over classes, as in torch.sum(-torch.log(softmax_input)*target2) above; mean or sum is then taken across the batch.
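A quick sketch with a 2-sample batch (assuming PyTorch >= 1.10 for probability targets) makes this concrete: each sample already carries a summed-over-classes loss, and reduction only aggregates across samples:

import torch
import torch.nn as nn

input = torch.tensor([[1., 2.], [3., 1.]])
target = torch.softmax(torch.tensor([[2., 3.], [1., 0.]]), dim=1)

per_sample = nn.CrossEntropyLoss(reduction='none')(input, target)  # shape (2,): one summed loss per sample
mean_loss = nn.CrossEntropyLoss(reduction='mean')(input, target)   # scalar: mean over the 2 samples
print(torch.allclose(per_sample.mean(), mean_loss))  # True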

4 Binary Cross Entropy Loss(BCELoss)

This loss turns an N-class problem into N binary classifications. input has shape batchsize*N (float) and the corresponding target has shape batchsize*N (float); this is the only accepted form. Soft target values are handled the same soft-label way, as in the example:

import torch
import torch.nn as nn

loss = nn.BCELoss(reduction='none')  # use none to see each individual loss value
input = torch.tensor([[0.1, 0.6]]).float()
target1 = torch.tensor([[0.2, 1]]).float()
output = loss(input, target1)  # tensor([[0.5448, 0.5108]])
# the computation is equivalent to:
# for the first element
output0 = -(torch.log(input[0][0]) * target1[0][0] + torch.log(1 - input[0][0]) * (1 - target1[0][0]))  # tensor(0.5448)
# the second element is computed the same way, but its target is 1, so only one term remains
output1 = -(torch.log(input[0][1]) * target1[0][1])  # tensor(0.5108)

Note: in BCELoss, reduction operates inside each minibatch as well. Every sample consists of N binary classifications, so none outputs the loss of each binary term in every sample, while mean or sum reduces over all batchsize*N binary terms.
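A quick sketch makes this concrete as well: reduction='none' returns one value per binary term, and mean averages over all batchsize*N terms rather than per sample:

import torch
import torch.nn as nn

input = torch.tensor([[0.1, 0.6], [0.3, 0.9]])
target = torch.tensor([[0.2, 1.0], [0.0, 1.0]])

per_term = nn.BCELoss(reduction='none')(input, target)   # shape (2, 2): one loss per binary term
mean_loss = nn.BCELoss(reduction='mean')(input, target)  # scalar: mean over all 2*2 terms
print(torch.allclose(per_term.mean(), mean_loss))  # True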
