Domain Adaptive Object Detection

Summary of domain adaptive object detection: https://www.yuque.com/weijiawu/research/vp3v2y
Other domain adaptation summaries: https://zhuanlan.zhihu.com/p/107120177
https://zhuanlan.zhihu.com/p/53359505

Adversarial-based

What is a Gradient Reversal Layer?
https://www.zhihu.com/question/266710153
1. Unsupervised Domain Adaptation by Backpropagation
Paper: https://arxiv.org/abs/1409.7495
Code: https://github.com/fungtion/DANN_py3
Analysis: https://blog.csdn.net/weixin_37993251/article/details/89354111

2. Domain Adaptive Faster R-CNN for Object Detection in the Wild
Paper: https://arxiv.org/abs/1803.03243
Code: https://github.com/krumo/Domain-Adaptive-Faster-RCNN-PyTorch
Analysis: https://zhuanlan.zhihu.com/p/59474327 and https://www.cnblogs.com/VincentLee/p/13175536.html

3. Adapting Object Detectors via Selective Cross-Domain Alignment
Paper: http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhu_Adapting_Object_Detectors_via_Selective_Cross-Domain_Alignment_CVPR_2019_paper.pdf
Code: https://github.com/xinge008/SCDA
Analysis: https://zhuanlan.zhihu.com/p/111600921

4. Strong-Weak Distribution Alignment for Adaptive Object Detection
Paper: https://arxiv.org/abs/1812.04798
Code: https://github.com/VisionLearningGroup/DA_Detection
Analysis: https://blog.csdn.net/djh123456021/article/details/88087359

5. Exploring Categorical Regularization for Domain Adaptive Object Detection
Paper: https://arxiv.org/abs/2003.09152
Code: https://github.com/Megvii-Nanjing/CR-DA-DET
Analysis: https://megvii.blog.csdn.net/article/details/106726547

Reconstruction-based

1. Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night
Paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8852008
Code: https://github.com/LCAD-UFES/publications-arruda-ijcnn-2019
Analysis:

Hybrid-based

1. Self-Training and Adversarial Background Regularization for Unsupervised Domain Adaptive One-Stage Object Detection
Paper: https://arxiv.org/pdf/1909.00597.pdf
Analysis: https://zhuanlan.zhihu.com/p/157896692

Some code implementations (gradient reversal layer and domain discriminators):

import torch
import torch.nn as nn
import torch.nn.functional as F


class GRL(torch.autograd.Function):
    """Gradient Reversal Layer: identity in the forward pass, negates and scales the gradient by alpha in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the incoming gradient and scale by alpha;
        # alpha is a hyperparameter and receives no gradient (hence the None).
        output = grad_output.neg() * ctx.alpha
        return output, None

gradient_reversal = GRL.apply

class GradientReversalLayer(torch.nn.Module):
    """nn.Module wrapper around GRL so the reversal strength can be stored once."""
    def __init__(self, weight):
        super(GradientReversalLayer, self).__init__()
        self.weight = weight

    def forward(self, input):
        return gradient_reversal(input, self.weight)
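
A quick sanity check of the layer above (a minimal sketch, not from the original post; the tensor shape and weight value are arbitrary): the forward pass is an identity, while the backward pass multiplies gradients by -weight.

grl = GradientReversalLayer(weight=0.5)
x = torch.randn(4, 8, requires_grad=True)
y = grl(x)
print(torch.equal(x, y))   # True: identity in the forward pass
y.sum().backward()
print(x.grad[0, :3])       # every entry is -0.5: gradient negated and scaled by weight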

class DiscriminatorLayer(nn.Module):
    """Image-level domain classifier: a conv + pooling head on backbone features,
    with a GRL inserted before the fully connected classifier. Outputs
    log-probabilities over the two domains (source / target)."""
    def __init__(self, input_channel, output_channel):
        super(DiscriminatorLayer, self).__init__()
        self.conv = nn.Conv2d(in_channels=input_channel, out_channels=256, kernel_size=3, stride=2, padding=1)
        self.bn2 = nn.BatchNorm2d(256)
        self.relu2 = nn.ReLU(True)
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.liner1 = nn.Linear(256, 100)
        self.bn1 = nn.BatchNorm1d(100)
        self.relu1 = nn.ReLU(True)
        self.liner2 = nn.Linear(100, 2)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, x, alpha):
        x = self.conv(x)
        x = self.bn2(x)
        x = self.relu2(x)
        x = self.pool(x)
        x = torch.flatten(x, 1)
        # Gradients flowing back past this point (into the conv block and the backbone)
        # are reversed, so the features are trained to confuse the domain classifier.
        x = GRL.apply(x, alpha)
        x = self.liner1(x)
        x = self.bn1(x)
        x = self.relu1(x)
        x = self.liner2(x)
        x = self.softmax(x)
        return x
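
A minimal training sketch for the image-level discriminator above (assumed usage, not from the original post; channel counts and feature shapes are made up): source features get domain label 0, target features label 1, and NLLLoss pairs with the LogSoftmax output.

disc = DiscriminatorLayer(input_channel=512, output_channel=2)
src_feat = torch.randn(2, 512, 38, 50)   # backbone feature maps (shapes are assumptions)
tgt_feat = torch.randn(2, 512, 38, 50)
alpha = 0.1                              # GRL weight, typically ramped up over training
criterion = nn.NLLLoss()                 # matches the LogSoftmax output
domain_loss = criterion(disc(src_feat, alpha), torch.zeros(2, dtype=torch.long)) \
            + criterion(disc(tgt_feat, alpha), torch.ones(2, dtype=torch.long))
domain_loss.backward()                   # reversed gradients reach the backbone through the GRL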

class DiscriminatorLayer_Global(nn.Module):
    """Global (image-level) domain discriminator in the style of Strong-Weak
    Distribution Alignment; optionally returns the pooled feature as a context
    vector for the detection head."""
    def __init__(self, input_channel, output_channel, context=False):
        super(DiscriminatorLayer_Global, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=input_channel, out_channels=128, kernel_size=3, stride=2, padding=1)
        self.bn1 = nn.BatchNorm2d(128)
        self.conv2 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=2, padding=1)
        self.bn2 = nn.BatchNorm2d(128)
        self.conv3 = nn.Conv2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.bn3 = nn.BatchNorm2d(64)
        self.fc = nn.Linear(64, 2)
        self.context = context

    def forward(self, x, alpha, targets=True):
        # Reverse gradients at the input, so the backbone is trained adversarially.
        x = GRL.apply(x, alpha)
        x = F.dropout(F.relu(self.bn1(self.conv1(x))), training=self.training)
        x = F.dropout(F.relu(self.bn2(self.conv2(x))), training=self.training)
        x = F.dropout(F.relu(self.bn3(self.conv3(x))), training=self.training)
        # Global average pooling over the full spatial extent.
        x = F.avg_pool2d(x, (x.size(2), x.size(3)))
        x = x.view(-1, 64)
        if self.context:
            feat = x
        x = self.fc(x)
        if self.context:
            return x, feat
        else:
            return x
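
A minimal usage sketch for the global discriminator (assumed usage; the SWDA paper pairs this branch with a focal-loss term, but plain cross-entropy is used here just to show the wiring, and all shapes and values are assumptions):

disc_g = DiscriminatorLayer_Global(input_channel=512, output_channel=2, context=True)
feat = torch.randn(2, 512, 38, 50)
logits, context_feat = disc_g(feat, alpha=0.1)   # context_feat can be concatenated into the detection head
loss_global = nn.CrossEntropyLoss()(logits, torch.zeros(2, dtype=torch.long))   # label 0 = source domain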

class DiscriminatorLayer_Local(nn.Module):
    """Local (pixel-wise) domain discriminator built from 1x1 convolutions;
    returns a per-location domain probability map and, optionally, a pooled
    context feature."""
    def __init__(self, input_channel, context=False):
        super(DiscriminatorLayer_Local, self).__init__()
        self.conv1 = nn.Conv2d(input_channel, 64, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.conv3 = nn.Conv2d(64, 1, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.context = context

    def forward(self, x, alpha, targets=True):
        x = GRL.apply(x, alpha)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        if self.context:
            # Pool the intermediate map into a context vector for the detection head.
            feat = F.avg_pool2d(x, (x.size(2), x.size(3)))
            x = self.conv3(x)
            return torch.sigmoid(x), feat
        else:
            x = self.conv3(x)
            return torch.sigmoid(x)
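
A minimal usage sketch for the local discriminator (assumed usage; SWDA aligns low-level features with a per-location least-squares loss, pushing source predictions toward 0 and target predictions toward 1; shapes here are assumptions):

disc_l = DiscriminatorLayer_Local(input_channel=256)
src_feat = torch.randn(2, 256, 75, 100)
tgt_feat = torch.randn(2, 256, 75, 100)
src_prob = disc_l(src_feat, alpha=1.0)   # (2, 1, 75, 100), values in (0, 1)
tgt_prob = disc_l(tgt_feat, alpha=1.0)
loss_local = (src_prob ** 2).mean() + ((1 - tgt_prob) ** 2).mean()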