A Collection of PyTorch Learning Errors

Table of Contents

1: Loading PyTorch's built-in models: the default call raises an error; switching to the commented-out loading method avoids it

2: Understanding the Dataset and DataLoader interfaces

3: if __name__ == '__main__'


import torch
import torchvision.models
import torchvision.models as models
# The commented-out method below loads the weights from a local checkpoint and does not raise the error:
# vgg16 =torchvision.models.vgg16()
# weight_dict = torch.load(r'C:\Users\Administrator\.cache\torch\hub\checkpoints\vgg16-397923af.pth')
# vgg16.load_state_dict(weight_dict,strict=False)
# x = torch.randn(1,3,512,512)
# y = vgg16(x)
# print(y.shape)

# This built-in loading call raised an error for me (it tries to download the weights):
vgg16 = models.vgg16(pretrained=True)
x = torch.randn(1,3,512,512)
y = vgg16(x)
print(y.shape)

# The same local-checkpoint trick works for mobilenet_v2:
mobilenet_v2 = models.mobilenet_v2()
weight_dict = torch.load(r'C:\Users\Administrator\.cache\torch\hub\checkpoints\mobilenet_v2-b0353104.pth')
mobilenet_v2.load_state_dict(weight_dict, strict=False)
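
If the error comes from the pretrained flag itself: since torchvision 0.13 the pretrained= argument is deprecated in favor of weight enums. A sketch of the equivalent call, assuming torchvision >= 0.13:

import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)  # downloads and caches the weights
x = torch.randn(1, 3, 512, 512)
print(model(x).shape)  # torch.Size([1, 1000])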

 """Dataset是一个抽象类,比较特殊,不可以直接实例化,必须构造子类,用子类来进行实例化,
Dataset可以被继承,并且需要重写__getitem__,__len__(self),__init__(self,path),
__getitem__方法是通过索引idx,找到相应的图片路径imagepath,通过Image.open(imagepath).convert('RGB'),读取
出图片的numpy数组:image,通过构造的transfrm方法 对image进行预处理,转化成tensord类型的张量,
并返回张量image 一般这个时候image都是三维的张量 [C,H,W]
path = ''
data = mydats(path)
image = data[0]
这样就可以根据idx = 0  这里0会传递到__getitem__当中的idx,调用getitem方法 返回一个张量
"""
"""torch.utils.data.DataLoader()
常用参数的介绍: 
1.dataset(Dataset): 传入的数据集(Dataset对象),通常是自定义的实例化的对象
这个就是PyTorch已有的数据读取接口(比如torchvision.datasets.ImageFolder)
或者自定义的数据接口的输出,该输出要么是torch.utils.data.Dataset类的对象,要么是继承自torch.utils.data.Dataset类的自定义类的对象
不能理解为一个数据集,而要理解为一个继承自torch.utils.data.Dataset类的自定义子类经过实例化的对象,
该对象里面自定义的def __getitem__(self, idx)方法,可以通过idx找到并返回数据(张量)
2.batch_size(int, optional): 每个batch有多少个样本
3.shuffle(bool, optional): 在每个epoch开始的时候,对数据进行重新排序
4.num_workers (int, optional): 这个参数决定了有几个进程来处理data loading。0意味着所有的数据都会被load进主进程。(默认为0)
"""

""""在这个基础上可以在自行修改,但是框架差不多"""

from torch.utils.data import Dataset
from torchvision import transforms
import torch
import os
from PIL import Image


class mydats(Dataset):
    def __init__(self, path):
        self.file_path = path
        # Build the full path of every image file in the folder
        image_names_list = os.listdir(self.file_path)
        self.images_path = [os.path.join(self.file_path, name) for name in image_names_list]

    def __getitem__(self, idx):
        # Random 256x256 crop, then convert the PIL image to a [C, H, W] tensor
        transform = transforms.Compose([
            transforms.RandomCrop(256),
            transforms.ToTensor(),
        ])
        image_path = self.images_path[idx]
        image = Image.open(image_path).convert('RGB')
        image = transform(image)

        return image

    def __len__(self):
        return len(self.images_path)


path = r'D:\研究生学习\第九章:基于CycleGan开源项目实战图像合成\pytorch-CycleGAN-and-pix2pix\datasets\horse2zebra\trainA'
data = mydats(path)
image = data[0]
print(image.shape)
dataload = torch.utils.data.DataLoader(data, batch_size=4, shuffle=True)
for index, image in enumerate(dataload):
    print(image.shape)  # [4, 3, 256, 256]: the DataLoader stacks 4 samples into one batch
    exit()              # only inspect the first batch

When the __getitem__ method returns multiple tensors (a runnable sketch follows the glob snippet below):

# dataset is an instantiated dataset object; indexing it calls __getitem__ and
# returns the requested sample. Here each access returns a tuple containing x and y:
# x, y = dataset[0]
dataset = MaskDataset(data_paths=data_paths, label_paths=label_paths, img_size=args.image_size)
# enumerate is a Python built-in that wraps an iterable (list, tuple, iterator, ...)
# and yields (index, value) pairs.
# Here the first element is the index of the current batch and the second is the
# batch itself (a single tensor, or a tuple of several tensors):
# i, (A, B) = next(iter(enumerate(dataloader)))
dataloader = DataLoader(dataset, batch_size=args.batch_size, shuffle=True, num_workers=args.num_workers)

from glob import glob  # glob returns every path matching a wildcard pattern
current_file_path = os.path.abspath(__file__)  # assuming current_file_path was defined like this
current_directory = os.path.dirname(current_file_path)
train_file_path = os.path.join(current_directory, 'dataset', 'ReTree', 'my_train')
# Absolute paths of all png images in the images/ and labels/ folders:
data_paths = glob(os.path.join(train_file_path, 'images', '*.png'))
label_paths = glob(os.path.join(train_file_path, 'labels', '*.png'))
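
A minimal sketch of a Dataset whose __getitem__ returns a tuple. MaskDataset itself is not shown in these notes, so PairedDataset below is a hypothetical stand-in with the same kind of interface:

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

class PairedDataset(Dataset):
    """Hypothetical stand-in for MaskDataset: returns an (image, label) tuple."""
    def __init__(self, data_paths, label_paths, img_size=256):
        self.data_paths = data_paths
        self.label_paths = label_paths
        self.transform = transforms.Compose([
            transforms.Resize((img_size, img_size)),
            transforms.ToTensor(),
        ])

    def __getitem__(self, idx):
        image = self.transform(Image.open(self.data_paths[idx]).convert('RGB'))
        label = self.transform(Image.open(self.label_paths[idx]).convert('L'))
        return image, label              # a tuple of two tensors

    def __len__(self):
        return len(self.data_paths)

# The default collate function batches each element of the tuple separately,
# so every iteration yields a tuple of batched tensors:
# dataset = PairedDataset(data_paths, label_paths)
# dataloader = DataLoader(dataset, batch_size=4, shuffle=True)
# for i, (A, B) in enumerate(dataloader):
#     print(A.shape, B.shape)           # [4, 3, 256, 256] and [4, 1, 256, 256]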

3. Path-does-not-exist (module import) problem:

import sys
sys.path.append(r'E:\image generate\pix2pixHD-master\models')  # raw string so the backslashes stay literal
# from models.base_model import BaseModel
# from models import networks

In the .py file that raises the error, append the directory that contains the package you want to import to sys.path, as above.
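
A more portable sketch that derives the path from the current file's location instead of hardcoding the drive letter (assuming the script sits one level above the models folder):

import os
import sys

# Append the models directory relative to this file, so the import still
# works after moving the project to another machine or drive.
models_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models')
sys.path.append(models_dir)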

MultiscaleDiscriminator (the multiscale discriminator):

import numpy as np
import torch
import torch.nn as nn


class MultiscaleDiscriminator(nn.Module):
    def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, 
                 use_sigmoid=False, num_D=3, getIntermFeat=False):
        super(MultiscaleDiscriminator, self).__init__()
        self.num_D = num_D
        self.n_layers = n_layers
        self.getIntermFeat = getIntermFeat
     
        for i in range(num_D):
            netD = NLayerDiscriminator(input_nc, ndf, n_layers, norm_layer, use_sigmoid, getIntermFeat)
            if getIntermFeat:                                
                for j in range(n_layers+2):
                    setattr(self, 'scale'+str(i)+'_layer'+str(j), getattr(netD, 'model'+str(j)))                                   
            else:
                setattr(self, 'layer'+str(i), netD.model)

        self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False)

    def singleD_forward(self, model, input):
        if self.getIntermFeat:
            result = [input]
            for i in range(len(model)):
                result.append(model[i](result[-1]))
            return result[1:]
        else:
            return [model(input)]

    def forward(self, input):        
        num_D = self.num_D
        result = []
        input_downsampled = input
        for i in range(num_D):
            if self.getIntermFeat:
                model = [getattr(self, 'scale'+str(num_D-1-i)+'_layer'+str(j)) for j in range(self.n_layers+2)]
            else:
                model = getattr(self, 'layer'+str(num_D-1-i))
            result.append(self.singleD_forward(model, input_downsampled))
            if i != (num_D-1):
                input_downsampled = self.downsample(input_downsampled)
        return result


# Defines the PatchGAN discriminator with the specified arguments.
class NLayerDiscriminator(nn.Module):
    def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False, getIntermFeat=False):
        super(NLayerDiscriminator, self).__init__()
        self.getIntermFeat = getIntermFeat
        self.n_layers = n_layers

        kw = 4
        padw = int(np.ceil((kw-1.0)/2))
        sequence = [[nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]]

        nf = ndf
        for n in range(1, n_layers):
            nf_prev = nf
            nf = min(nf * 2, 512)
            sequence += [[
                nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),
                norm_layer(nf), nn.LeakyReLU(0.2, True)
            ]]

        nf_prev = nf
        nf = min(nf * 2, 512)
        sequence += [[
            nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
            norm_layer(nf),
            nn.LeakyReLU(0.2, True)
        ]]

        sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]

        if use_sigmoid:
            sequence += [[nn.Sigmoid()]]

        if getIntermFeat:
            for n in range(len(sequence)):
                setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
        else:
            sequence_stream = []
            for n in range(len(sequence)):
                sequence_stream += sequence[n]
            self.model = nn.Sequential(*sequence_stream)

    def forward(self, input):
        if self.getIntermFeat:
            res = [input]
            for n in range(self.n_layers+2):
                model = getattr(self, 'model'+str(n))
                res.append(model(res[-1]))
            return res[1:]
        else:
            return self.model(input)        



if __name__ == '__main__':
    netD = MultiscaleDiscriminator(input_nc=3, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d,
                 use_sigmoid=False, num_D=3, getIntermFeat=True)
    print(netD)
    input = torch.randn(1,3,512,512)
    output = netD(input)
    for i in range(3):            # num_D = 3 scales
        for j in range(2 + 3):    # n_layers + 2 = 5 layers per scale
            print(f'Output of scale{i} layer{j}: {output[i][j].shape}')

D:\Anaconda\envs\study\python.exe "E:\image generate\pix2pixHD-master\models\networks.py" 
MultiscaleDiscriminator(
  (scale0_layer0): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale0_layer1): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale0_layer2): Sequential(
    (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale0_layer3): Sequential(
    (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale0_layer4): Sequential(
    (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
  )
  (scale1_layer0): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale1_layer1): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale1_layer2): Sequential(
    (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale1_layer3): Sequential(
    (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale1_layer4): Sequential(
    (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
  )
  (scale2_layer0): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale2_layer1): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale2_layer2): Sequential(
    (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale2_layer3): Sequential(
    (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2, inplace=True)
  )
  (scale2_layer4): Sequential(
    (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))
  )
  (downsample): AvgPool2d(kernel_size=3, stride=2, padding=[1, 1])
)
Output of scale0 layer0: torch.Size([1, 64, 257, 257])
Output of scale0 layer1: torch.Size([1, 128, 129, 129])
Output of scale0 layer2: torch.Size([1, 256, 65, 65])
Output of scale0 layer3: torch.Size([1, 512, 66, 66])
Output of scale0 layer4: torch.Size([1, 1, 67, 67])

Output of scale1 layer0: torch.Size([1, 64, 129, 129])
Output of scale1 layer1: torch.Size([1, 128, 65, 65])
Output of scale1 layer2: torch.Size([1, 256, 33, 33])
Output of scale1 layer3: torch.Size([1, 512, 34, 34])
Output of scale1 layer4: torch.Size([1, 1, 35, 35])

Output of scale2 layer0: torch.Size([1, 64, 65, 65])
Output of scale2 layer1: torch.Size([1, 128, 33, 33])
Output of scale2 layer2: torch.Size([1, 256, 17, 17])
Output of scale2 layer3: torch.Size([1, 512, 18, 18])
Output of scale2 layer4: torch.Size([1, 1, 19, 19])

Process finished with exit code 0
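
These sizes follow the standard Conv2d output formula out = (in + 2*padding - kernel) // stride + 1. A quick sketch of mine that reproduces the scale0 column above:

# Sanity-check the printed spatial sizes with the Conv2d size formula.
def conv_out(size, k=4, p=2, s=2):
    return (size + 2 * p - k) // s + 1

h = 512
for layer, stride in enumerate([2, 2, 2, 1, 1]):   # strides of the 5 discriminator layers
    h = conv_out(h, s=stride)
    print(f'layer{layer}: {h}')                    # prints 257, 129, 65, 66, 67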
 

In Python, if __name__ == '__main__': is a common construct used to determine whether a Python script is being run as a standalone program or imported as a module.

Explanation:

  • __name__ is a built-in variable that holds the name of the current module.
  • When a Python file (say my_script.py) is run directly, the interpreter treats it as the main program and sets __name__ to '__main__'.
  • But if my_script.py is imported as a module by another Python file (e.g. import my_script), then __name__ is set to the module's name (i.e. my_script).

So the if __name__ == '__main__': check tells you whether the file is running as a standalone program or has been imported as a module. When run directly, the code under if __name__ == '__main__': executes; when imported, it does not.

The main use of this pattern is that it lets a Python file serve both as a script that can be run directly and as a module that other scripts can import, without changing any code.
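
A minimal sketch (my_script.py is a hypothetical file name used only for illustration):

# my_script.py
def greet():
    return 'hello'

print(__name__)   # '__main__' when run as `python my_script.py`, 'my_script' when imported

if __name__ == '__main__':
    # Executes only when the file is run directly, not on `import my_script`.
    print(greet())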

conda install mpi4py  (install mpi4py with conda, not with pip)
