BrokenPipeError: [Errno 32] Broken pipe

Deep Learning with PyTorch: Training a Classifier

 

Problem: an error is raised when trying to display images from the training set.

My code:

train.py
import torch
import torchvision
import torchvision.transforms as transforms

#torchvision datasets return PIL images; ToTensor() converts them to tensors with values in [0, 1],
#and Normalize then maps that range to [-1, 1].
#Transforms are common image transformations.
#They can be chained together using Compose.
#Additionally, there is the torchvision.transforms.functional module.

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data',train=True,
                                          download=True,transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,batch_size=4,
                                            shuffle = True,num_workers=1)
testset = torchvision.datasets.CIFAR10(root='./data',train=False,
                                         download=True,transform=transform)
testloader = torch.utils.data.DataLoader(testset,batch_size=4,
                                             shuffle=False,num_workers=1)
classes=(
    'plane','car','bird','cat','deer','dog','frog','horse','ship','truck'
)
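With the mean and std both set to 0.5, Normalize computes (x - 0.5) / 0.5 per channel, so the [0, 1] range produced by ToTensor() ends up in [-1, 1]. A quick sanity check (a minimal sketch, not part of the original code):

img, _ = trainset[0]
print(img.min().item(), img.max().item())  # both values should lie within [-1.0, 1.0]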
show.py
import torchvision
import matplotlib.pyplot as plt
import numpy as np
from train import trainloader
from train import classes
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

dataiter = iter(trainloader)
images, labels = dataiter.next()

imshow(torchvision.utils.make_grid(images))

print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

The error is as follows:

Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "D:\Anaconda\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\xj\kaggle-cats-and-dogs\overfeat\data\show.py", line 14, in <module>
    dataiter = iter(trainloader)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
    w.start()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "C:/Users/xj/kaggle-cats-and-dogs/overfeat/data/show.py", line 14, in <module>
    dataiter = iter(trainloader)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
    w.start()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Process finished with exit code 1

Solution:

trainset = torchvision.datasets.CIFAR10(root='./data',train=True,
                                          download=True,transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,batch_size=4,
                                            shuffle = True,num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./data',train=False,
                                         download=True,transform=transform)
testloader = torch.utils.data.DataLoader(testset,batch_size=4,
                                             shuffle=False,num_workers=0)

Change num_workers=1 to num_workers=0 in train.py.

Looking this up: when the parameter is 0, data loading happens in the main process; when it is greater than 0, loading is handled by multiple worker processes, which can speed up data loading.

But here any value greater than 0 triggers the error, while 0 works. The reason is that on Windows, DataLoader workers are started with the multiprocessing spawn method, which re-imports the main module; since show.py iterates the DataLoader at module level, the spawned child process tries to start workers again before its bootstrapping phase has finished, which produces the RuntimeError and the subsequent BrokenPipeError.
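Alternatively, as the RuntimeError message itself suggests, num_workers > 0 can be kept on Windows by moving the code that iterates the DataLoader under an if __name__ == '__main__': guard, so the spawned worker processes can re-import show.py without re-triggering the iteration. A minimal sketch of show.py restructured this way (same trainloader, classes, and imshow as above):

import torchvision
import matplotlib.pyplot as plt
import numpy as np
from train import trainloader, classes

def imshow(img):
    img = img / 2 + 0.5     # unnormalize back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

def main():
    dataiter = iter(trainloader)      # worker processes are started here
    images, labels = next(dataiter)   # the built-in next() works with both old and new DataLoader iterators
    imshow(torchvision.utils.make_grid(images))
    print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

if __name__ == '__main__':
    main()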

References:

https://github.com/pytorch/pytorch/issues/2341

https://blog.csdn.net/u014380165/article/details/79058479

 
