Miscellaneous Notes
Q1
Does the C-IoU loss computation here also need the positive indices?
In my understanding, if we still only consider positive samples when computing the C-IoU loss, why does the paper say this loss improves the case of anchors that have no overlap with the ground truth, compared with the IoU and G-IoU losses? Thanks.
import torch.nn.functional as F  # needed for smooth_l1_loss

if self.loss == 'SmoothL1':
    loss_l = F.smooth_l1_loss(loc_p, loc_t, reduction='sum')
else:
    # Expand the priors to match loc_data, then keep only the positive
    # anchors via pos_idx (see the sketch below for how pos_idx is built).
    giou_priors = priors.data.unsqueeze(0).expand_as(loc_data)
    loss_l = self.gious(loc_p, loc_t, giou_priors[pos_idx].view(-1, 4))
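
For context, here is a minimal, self-contained sketch of how the positive indices are typically built in SSD-style MultiBox losses. All names and shapes below (conf_t, loc_data, loc_targets, a batch of 2 images with 8 anchors) are illustrative assumptions, not necessarily the repo's exact variables:

import torch

# Hypothetical shapes: batch of 2 images, 8 anchors, 4 box coordinates.
conf_t = torch.randint(0, 3, (2, 8))   # matched class per anchor; 0 = background
loc_data = torch.randn(2, 8, 4)        # predicted box offsets
loc_targets = torch.randn(2, 8, 4)     # encoded regression targets

pos = conf_t > 0                                 # positive anchors: matched to some gt
pos_idx = pos.unsqueeze(pos.dim()).expand_as(loc_data)
loc_p = loc_data[pos_idx].view(-1, 4)            # predictions for positives only
loc_t = loc_targets[pos_idx].view(-1, 4)         # targets for positives only

Whichever form the regression loss takes (SmoothL1, G-IoU, or C-IoU), it is then computed only over these gathered positive rows.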
Answer:
Convolution is a stack of local operations, so its receptive field is limited. During bbox regression, the network cannot reliably learn offsets to boxes that lie too far from an anchor, and forcing the model to regress toward such distant targets would harm training. So in practice it is still necessary to select positive samples, and this is also why there is limited room for improvement for all of these bbox regression losses: once only positive (already overlapping) anchors are kept, the no-overlap advantage of C-IoU and G-IoU over plain IoU rarely comes into play.
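
To make the no-overlap point concrete, below is a minimal sketch of a DIoU-style loss on corner-format boxes. The function name diou_loss and every tensor here are illustrative assumptions; the repo's self.gious operates on encoded offsets relative to priors instead. Unlike a plain IoU loss, the center-distance penalty still yields a gradient when the two boxes do not overlap at all:

import torch

def diou_loss(pred, target, eps=1e-7):
    # Intersection area of corner-format boxes (x1, y1, x2, y2).
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distance between box centers ((x1+x2)/2, (y1+y2)/2).
    center_dist = ((((pred[:, :2] + pred[:, 2:]) -
                     (target[:, :2] + target[:, 2:])) / 2) ** 2).sum(dim=1)

    # Squared diagonal of the smallest enclosing box.
    enc_lt = torch.min(pred[:, :2], target[:, :2])
    enc_rb = torch.max(pred[:, 2:], target[:, 2:])
    diag = ((enc_rb - enc_lt) ** 2).sum(dim=1)

    # Even at iou == 0, the distance term keeps pulling the centers together.
    return (1 - (iou - center_dist / (diag + eps))).sum()

pred = torch.tensor([[0., 0., 1., 1.]], requires_grad=True)
target = torch.tensor([[3., 3., 4., 4.]])  # zero overlap with pred
diou_loss(pred, target).backward()
print(pred.grad)                            # nonzero, unlike a pure IoU loss

Running this prints a nonzero gradient for pred even though the IoU term is exactly zero, which is the property the papers cite; the point of the answer above is that positive-sample selection filters out most such zero-overlap pairs before the loss is ever computed.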