One-class

First, here is the network.

The One-class data and network I used are as follows:

# Example: input data of size 256*64, grid = 4, batch size = 4.
# While training the network weights, the tensor shape after each layer is:

torch.Size([4, 3, 64, 256])                      
torch.Size([4, 48, 32, 128]) 
torch.Size([4, 96, 33, 64])
torch.Size([4, 96, 33, 64])
torch.Size([4, 96, 33, 64])
torch.Size([4, 192, 34, 32])
torch.Size([4, 192, 34, 32])
torch.Size([4, 192, 34, 32])
torch.Size([4, 384, 17, 16])
torch.Size([4, 384, 17, 16])
torch.Size([4, 384, 17, 16])
torch.Size([4, 768, 8, 8])
torch.Size([4, 768, 8, 8])
torch.Size([4, 768, 8, 8])
torch.Size([4, 1536, 4, 4])
torch.Size([4, 1536, 4, 4])
torch.Size([4, 1536, 4, 4])
torch.Size([4, 768, 8, 8])
torch.Size([4, 768, 8, 8])
torch.Size([4, 768, 8, 8])
torch.Size([4, 384, 8, 16])
torch.Size([4, 384, 8, 16])
torch.Size([4, 192, 8, 32])
torch.Size([4, 192, 8, 32])
torch.Size([4, 96, 16, 64])
torch.Size([4, 96, 16, 64])
torch.Size([4, 48, 32, 128])
torch.Size([4, 48, 32, 128])
torch.Size([4, 3, 64, 256])
torch.Size([4, 3, 64, 256])
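
Each line in the trace above is printed after one module, including the BatchNorm and activation layers, which is why the same shape appears in runs of three. A trace like this can be produced with forward hooks; the sketch below is an assumption (the original logging code is not shown), but the hook API is standard PyTorch:

    import torch

    def register_shape_hooks(model):
        # Print the output shape of every leaf module during a forward pass.
        hooks = []
        for name, module in model.named_modules():
            if len(list(module.children())) == 0:  # leaf layers only
                hooks.append(module.register_forward_hook(
                    lambda m, inp, out, n=name: print(n, out.shape)))
        return hooks

    # Usage with one batch of 4 images sized 64x256:
    # hooks = register_shape_hooks(netg)
    # _ = netg(torch.randn(4, 3, 64, 256))
    # for h in hooks: h.remove()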

# The network:
NetG(
  (encoder): Encoder(
    (main): Sequential(
      (initial-conv-3-48): Conv2d(3, 48, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (initial-relu-48): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-48-96-conv): Conv2d(48, 96, kernel_size=(4, 4), stride=(1, 2), padding=(2, 1), bias=False)
      (pyramid-96-batchnorm): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-96-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-96-192-conv): Conv2d(96, 192, kernel_size=(4, 4), stride=(1, 2), padding=(2, 1), bias=False)
      (pyramid-192-batchnorm): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-192-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-192-384-conv): Conv2d(192, 384, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-384-batchnorm): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-384-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-384-768-conv): Conv2d(384, 768, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-768-batchnorm): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-768-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-768-1536-conv): Conv2d(768, 1536, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-1536-batchnorm): BatchNorm2d(1536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-1536-relu): LeakyReLU(negative_slope=0.2, inplace)
    )
  )
  (decoder): Decoder(
    (main): Sequential(
      (initial-1536-768-convt): ConvTranspose2d(1536, 768, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (initial-768-batchnorm): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (initial-768-relu): ReLU(inplace)
      (pyramid-768-384-convt): ConvTranspose2d(768, 384, kernel_size=(3, 4), stride=(1, 2), padding=(1, 1), bias=False)
      (pyramid-384-relu): ReLU(inplace)
      (pyramid-384-192-convt): ConvTranspose2d(384, 192, kernel_size=(3, 4), stride=(1, 2), padding=(1, 1), bias=False)
      (pyramid-192-relu): ReLU(inplace)
      (pyramid-192-96-convt): ConvTranspose2d(192, 96, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-96-relu): ReLU(inplace)
      (pyramid-96-48-convt): ConvTranspose2d(96, 48, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-48-relu): ReLU(inplace)
      (final-48-3-convt): ConvTranspose2d(48, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (final-3-sigmoid): Sigmoid()
    )
  )
)
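
For reference, the printed structure can be rebuilt as a plain PyTorch module. The sketch below is a reconstruction from the printout (the original uses named add_module entries and likely a configurable constructor; this anonymous-layer version is only shape-equivalent), so treat it as an assumption. With a 4x3x64x256 input it reproduces the shape trace above.

    import torch.nn as nn

    class Encoder(nn.Module):
        # 3x64x256 input -> 1536x4x4 latent volume.
        def __init__(self):
            super().__init__()
            def block(cin, cout, stride, pad):
                return [nn.Conv2d(cin, cout, 4, stride, pad, bias=False),
                        nn.BatchNorm2d(cout),
                        nn.LeakyReLU(0.2, inplace=True)]
            layers = [nn.Conv2d(3, 48, 4, 2, 1, bias=False),
                      nn.LeakyReLU(0.2, inplace=True)]
            # The first two pyramid convs halve only the width (stride (1, 2))
            # while padding (2, 1) slowly grows the height (32 -> 33 -> 34).
            layers += block(48, 96, (1, 2), (2, 1))
            layers += block(96, 192, (1, 2), (2, 1))
            layers += block(192, 384, 2, 1)
            layers += block(384, 768, 2, 1)
            layers += block(768, 1536, 2, 1)
            self.main = nn.Sequential(*layers)

        def forward(self, x):
            return self.main(x)

    class Decoder(nn.Module):
        # 1536x4x4 latent -> 3x64x256 image in [0, 1].
        def __init__(self):
            super().__init__()
            self.main = nn.Sequential(
                nn.ConvTranspose2d(1536, 768, 4, 2, 1, bias=False),
                nn.BatchNorm2d(768),
                nn.ReLU(inplace=True),
                # Kernel (3, 4) with stride (1, 2) doubles only the width.
                nn.ConvTranspose2d(768, 384, (3, 4), (1, 2), 1, bias=False),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(384, 192, (3, 4), (1, 2), 1, bias=False),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(192, 96, 4, 2, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(96, 48, 4, 2, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(48, 3, 4, 2, 1, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return self.main(x)

    class NetG(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = Encoder()
            self.decoder = Decoder()

        def forward(self, x):
            return self.decoder(self.encoder(x))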

Next, the three functions the network uses:

1. Optimizer: Adam with momentum (beta1 controls the first-moment/momentum term; beta2 is fixed at 0.999).

   self.optimizer_g = optim.Adam(self.netg.parameters(), lr=self.opt.lr, betas=(self.opt.beta1, 0.999))

2. Loss function: L2 loss (squared reconstruction error).


    def l2_loss(self, input, target, size_average=True):
        # Squared reconstruction error between output and target:
        # the mean over all elements when size_average is True,
        # the element-wise error tensor otherwise.
        if size_average:
            return torch.mean(torch.pow(input - target, 2))
        else:
            return torch.pow(input - target, 2)
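
Putting the optimizer and the loss together, one generator update looks roughly like the sketch below. The names netg, optimizer_g, and the batch x are taken from or inferred from the snippets above; the step itself is an assumption about the training loop, not the original code:

    import torch

    def train_step(netg, optimizer_g, x):
        # One reconstruction step: a one-class model is trained only on
        # normal samples, so it learns to reconstruct them well.
        optimizer_g.zero_grad()
        fake = netg(x)                              # reconstructed input
        err_g = torch.mean(torch.pow(fake - x, 2))  # l2_loss(fake, x)
        err_g.backward()
        optimizer_g.step()
        return err_g.item()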

3. Activation functions: as the printout shows, the encoder uses LeakyReLU (negative_slope = 0.2), the decoder uses plain ReLU, and the final layer applies a Sigmoid to map the output into [0, 1].
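
A quick comparison of the two ReLU variants (outputs shown as comments; the values follow directly from the definitions):

    import torch
    import torch.nn as nn

    x = torch.tensor([-2.0, -0.5, 0.5, 2.0])
    print(nn.ReLU()(x))          # tensor([0.0000, 0.0000, 0.5000, 2.0000])
    print(nn.LeakyReLU(0.2)(x))  # tensor([-0.4000, -0.1000, 0.5000, 2.0000])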
