NotImplementedError after modifying CycleGAN

After finishing my code changes and trying to run training, the script crashed with the following error:

Traceback (most recent call last):
  File "train.py", line 52, in <module>
    model.optimize_parameters()   # calculate loss functions, get gradients, update network weights
  File "/home/deep/1-POJECT/zpc/pytorch-CycleGAN-and-pix2pix/models/cycle_gan_model.py", line 183, in optimize_parameters
    self.forward()      # compute fake images and reconstruction images.
  File "/home/deep/1-POJECT/zpc/pytorch-CycleGAN-and-pix2pix/models/cycle_gan_model.py", line 114, in forward
    self.fake_B = self.netG_A(self.real_A)  # G_A(A)
  File "/home/deep/anaconda3/envs/zpc-cyclegan/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/deep/anaconda3/envs/zpc-cyclegan/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/deep/anaconda3/envs/zpc-cyclegan/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/deep/1-POJECT/zpc/pytorch-CycleGAN-and-pix2pix/models/networks.py", line 473, in forward
    return self.model(input)
  File "/home/deep/anaconda3/envs/zpc-cyclegan/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/deep/anaconda3/envs/zpc-cyclegan/lib/python3.8/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

After a lot of searching and reading around for a fix, the typical explanation I found went like this:

It looks like an error occurred, but the exact cause cannot be pinned down from the snippet alone; it happens while running the training script. Judging from the traceback, the error is raised because some part of the model's forward pass is unimplemented. In particular, a forward method may not have been defined correctly.
Alternatively, it could be an argument-passing problem, such as mismatched input dimensions.
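The last frame of the traceback is the real clue: _forward_unimplemented is nn.Module's placeholder forward, and it raises NotImplementedError whenever a subclass is called without having overridden forward. A minimal sketch reproducing this (Broken and its layer are made-up names for illustration):

import torch
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Accidentally dedented: this defines a module-level function,
# NOT a method, so Broken never overrides nn.Module.forward.
def forward(self, x):
    return self.conv(x)

m = Broken()
m(torch.randn(1, 3, 32, 32))  # raises NotImplementedError (exact message varies by PyTorch version)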

After reverting the code, removing the attention module, removing the residual blocks, and so on, I finally pinned down the culprit: forward.

class UnetSkipConnectionBlock(nn.Module):
    def __init__(self, outer_nc, inner_nc, input_nc=None,
                 submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False,
                 use_attention=False):
        super(UnetSkipConnectionBlock, self).__init__()  # initialize the nn.Module base class
        self.use_attention = use_attention
        if self.use_attention:
            self.attention = Attention(outer_nc)
        self.outermost = outermost        # flag marking the outermost block
        if type(norm_layer) == functools.partial:            # norm_layer may be wrapped in functools.partial
            use_bias = norm_layer.func == nn.InstanceNorm2d  # use bias when the wrapped norm is InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d       # otherwise, use bias only for InstanceNorm2d
        if input_nc is None:               # default the input channels to outer_nc
            input_nc = outer_nc
        downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,      # downsampling convolution
                             stride=2, padding=1, bias=use_bias)
        downrelu = nn.LeakyReLU(0.2, True) # downsampling activation
        downnorm = norm_layer(inner_nc)    # downsampling norm layer
        uprelu = nn.ReLU(True)             # upsampling activation
        upnorm = norm_layer(outer_nc)      # upsampling norm layer

        if outermost:
            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1)
            down = [downconv]
            up = [uprelu, upconv, nn.Tanh()]
            model = down + [submodule] + up
        elif innermost:
            upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1, bias=use_bias)
            down = [downrelu, downconv]
            up = [uprelu, upconv, upnorm]
            model = down + up
        else:
            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
                                        kernel_size=4, stride=2,
                                        padding=1, bias=use_bias)
            down = [downrelu, downconv, downnorm]
            up = [uprelu, upconv, upnorm]

            if use_dropout:
                model = down + [submodule] + up + [nn.Dropout(0.5)]
            else:
                model = down + [submodule] + up

        self.model = nn.Sequential(*model)
# define forward for the forward pass
def forward(self, x):
        if self.outermost:        # outermost block: just run the model, no skip connection
            return self.model(x)
        else:   # torch.cat with dim=1 concatenates input and output along the channel axis
            return torch.cat([x, self.model(x)], 1)

Most likely, while editing the code earlier I accidentally deleted the indentation in front of forward. Dedented like this, forward becomes a module-level function instead of a method of UnetSkipConnectionBlock, so the class keeps inheriting nn.Module's unimplemented placeholder, which raised the error above. Restoring the indentation puts forward back inside the class:

    def forward(self, x):
        if self.outermost:        # outermost block: just run the model, no skip connection
            return self.model(x)
        else:   # torch.cat with dim=1 concatenates input and output along the channel axis
            return torch.cat([x, self.model(x)], 1)

With the indentation restored, the problem was solved and training ran successfully.
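As a quick sanity check (a minimal sketch; the channel counts and input shape below are arbitrary, not taken from the original training setup), an innermost block can be instantiated and called directly to confirm that forward now dispatches properly:

import torch

# innermost block: 64 -> 128 channels down, back up to 64, then skip-concatenated
block = UnetSkipConnectionBlock(outer_nc=64, inner_nc=128, innermost=True)
x = torch.randn(1, 64, 16, 16)
out = block(x)      # no NotImplementedError anymore
print(out.shape)    # torch.Size([1, 128, 16, 16]): input concatenated with the block output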
