Problems encountered when training YOLOv3 on my own dataset with Paddle

While working through Paddle's YOLOv3 object-detection tutorial, I tried training YOLOv3 on my own dataset, and Paddle reported errors:

2021-07-18 17:20:02[TRAIN]epoch 0, iter 8, output loss: [898.1042]

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/PIL/Image.py:2800: DecompressionBombWarning: Image size (168416415 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  DecompressionBombWarning,

2021-07-18 17:20:08[TRAIN]epoch 0, iter 9, output loss: [1166.028]
2021-07-18 17:20:10[TRAIN]epoch 0, iter 10, output loss: [326.97525]
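
The DecompressionBombWarning in the middle of the log is Pillow's safeguard against huge images: one image in my dataset decodes to 168,416,415 pixels, nearly twice Pillow's default cap of 89,478,485. If such oversized images are legitimate, one way to silence the warning is to raise the cap before the reader starts. A minimal sketch, assuming the images are trusted:

from PIL import Image

# Pillow's default limit is 89,478,485 pixels (~256 MB for 24-bit RGB).
# Raising it silences the DecompressionBombWarning for trusted data;
# Image.MAX_IMAGE_PIXELS = None disables the check entirely, but the
# check exists to block decompression-bomb attacks, so only do that
# for datasets you control.
Image.MAX_IMAGE_PIXELS = 200_000_000  # covers the 168,416,415-pixel image above

A better long-term fix is to downscale such images offline: a 168-megapixel image also makes decoding and augmentation painfully slow, which matters for the hang below.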

---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-67-024cd613b6b1> in <module>
     39         MAX_EPOCH = 200
     40         for epoch in range(MAX_EPOCH):
---> 41             for i, data in enumerate(train_loader()):
     42                 img, gt_boxes, gt_labels, img_scale = data
     43                 gt_scores = np.ones(gt_labels.shape).astype('float32')
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/reader/decorator.py in xreader()
    446         finish = 1
    447         while finish < process_num:
--> 448             sample = out_queue.get()
    449             if isinstance(sample, XmapEndSignal):
    450                 finish += 1
/opt/conda/envs/python35-paddle120-env/lib/python3.7/queue.py in get(self, block, timeout)
    168             elif timeout is None:
    169                 while not self._qsize():
--> 170                     self.not_empty.wait()
    171             elif timeout < 0:
    172                 raise ValueError("'timeout' must be a non-negative number")
/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py in wait(self, timeout)
    294         try:    # restore state no matter what (e.g., KeyboardInterrupt)
    295             if timeout is None:
--> 296                 waiter.acquire()
    297                 gotit = True
    298             else:
KeyboardInterrupt: 
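
The KeyboardInterrupt here is just me killing the cell: the loop had blocked inside paddle.reader's xreader, waiting forever on out_queue.get(). That pattern usually means a reader worker died or stalled (for example, on one of those oversized images) without ever posting its end signal, so the consumer waits on an empty queue. Before blaming the framework, it is worth verifying the dataset itself. A quick sanity-check sketch; the file name train_list.txt and its "image path first" line format are assumptions about my own annotation list, not Paddle API:

from PIL import Image

def find_bad_images(list_file):
    """Try to open every image referenced in an annotation list and
    collect the ones Pillow cannot decode, so broken samples can be
    dropped before they silently kill a reader worker."""
    bad = []
    with open(list_file) as f:
        for line in f:
            img_path = line.split()[0]  # assumes each line starts with the image path
            try:
                with Image.open(img_path) as im:
                    im.verify()  # cheap integrity check, no full decode
            except Exception as exc:
                bad.append((img_path, repr(exc)))
    return bad

for path, err in find_bad_images('train_list.txt'):  # hypothetical list file
    print(path, err)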

A deep pit. I kept revising:

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input-162-c16e7c3f4792> in <module>
      2 import numpy as np
      3 with fluid.dygraph.guard():
----> 4     backbone = DarkNet53_conv_body(is_test=False)
      5     x = np.random.randn(1, 3, 640, 640).astype('float32')
      6     x = to_variable(x)
<ipython-input-161-76715bce0d06> in __init__(self, is_test)
    149             stride=1,
    150             padding=1,
--> 151             is_test=is_test)
    152 
    153         # Downsampling, implemented with a stride=2 convolution
<ipython-input-161-76715bce0d06> in __init__(self, ch_in, ch_out, filter_size, stride, groups, padding, act, is_test)
     33                 initializer=fluid.initializer.Normal(0., 0.02)),
     34             bias_attr=False,
---> 35             act=None)
     36 
     37         self.batch_norm = BatchNorm(
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/nn.py in __init__(self, num_channels, num_filters, filter_size, stride, padding, dilation, groups, param_attr, bias_attr, use_cudnn, act, dtype)
    215             shape=filter_shape,
    216             dtype=self._dtype,
--> 217             default_initializer=_get_default_param_initializer())
    218 
    219         self.bias = self.create_parameter(
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in create_parameter(self, shape, attr, dtype, is_bias, default_initializer)
    260             temp_attr = None
    261         return self._helper.create_parameter(temp_attr, shape, dtype, is_bias,
--> 262                                              default_initializer)
    263 
    264     # TODO: Add more parameter list when we need them
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layer_helper_base.py in create_parameter(self, attr, shape, dtype, is_bias, default_initializer, stop_gradient, type)
    345                 type=type,
    346                 stop_gradient=stop_gradient,
--> 347                 **attr._to_kwargs(with_initializer=True))
    348         else:
    349             self.startup_program.global_block().create_parameter(
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py in create_parameter(self, *args, **kwargs)
   2568                 pass
   2569             else:
-> 2570                 initializer(param, self)
   2571         param.stop_gradient = False
   2572         return param
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/initializer.py in __call__(self, var, block)
    340                 "use_mkldnn": False
    341             },
--> 342             stop_gradient=True)
    343 
    344         if var.dtype == VarDesc.VarType.FP16:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py in _prepend_op(self, *args, **kwargs)
   2668                                        kwargs.get("outputs", {}), attrs
   2669                                        if attrs else {},
-> 2670                                        kwargs.get("stop_gradient", False))
   2671         else:
   2672             op_desc = self.desc._prepend_op()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/tracer.py in trace_op(self, type, inputs, outputs, attrs, stop_gradient)
     41         self.trace(type, inputs, outputs, attrs,
     42                    framework._current_expected_place(), self._train_mode and
---> 43                    not stop_gradient)
     44 
     45     def train_mode(self):
EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::platform::stream::CUDAStream::Init(paddle::platform::Place const&, paddle::platform::stream::Priority const&)
3   paddle::platform::CUDAContext::CUDAContext(paddle::platform::CUDAPlace const&, paddle::platform::stream::Priority const&)
4   paddle::platform::CUDADeviceContext::CUDADeviceContext(paddle::platform::CUDAPlace)
5   std::_Function_handler<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > (), std::reference_wrapper<std::_Bind_simple<paddle::platform::EmplaceDeviceContext<paddle::platform::CUDADeviceContext, paddle::platform::CUDAPlace>(std::map<paddle::platform::Place, std::shared_future<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > >, std::less<paddle::platform::Place>, std::allocator<std::pair<paddle::platform::Place const, std::shared_future<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > > > > >*, paddle::platform::Place)::{lambda()#1} ()> > >::_M_invoke(std::_Any_data const&)
6   std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > >, std::__future_base::_Result_base::_Deleter>, std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > > >::_M_invoke(std::_Any_data const&)
7   std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
8   std::__future_base::_Deferred_state<std::_Bind_simple<paddle::platform::EmplaceDeviceContext<paddle::platform::CUDADeviceContext, paddle::platform::CUDAPlace>(std::map<paddle::platform::Place, std::shared_future<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > >, std::less<paddle::platform::Place>, std::allocator<std::pair<paddle::platform::Place const, std::shared_future<std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > > > > >*, paddle::platform::Place)::{lambda()#1} ()>, std::unique_ptr<paddle::platform::DeviceContext, std::default_delete<paddle::platform::DeviceContext> > >::_M_run_deferred()
9   paddle::platform::DeviceContextPool::Get(paddle::platform::Place const&)
10  paddle::imperative::PreparedOp paddle::imperative::PrepareOpImpl<paddle::imperative::VarBase>(paddle::imperative::details::NameVarMapTrait<paddle::imperative::VarBase>::Type const&, paddle::imperative::details::NameVarMapTrait<paddle::imperative::VarBase>::Type const&, paddle::framework::OperatorWithKernel const&, paddle::platform::Place, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const&)
11  paddle::imperative::PreparedOp::Prepare(std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, paddle::framework::OperatorWithKernel const&, paddle::platform::Place const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const&)
12  paddle::imperative::OpBase::Run(paddle::framework::OperatorBase const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const&, paddle::platform::Place const&)
13  paddle::imperative::Tracer::TraceOp(std::string const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > >, paddle::platform::Place const&, bool)

----------------------
Error Message Summary:
----------------------
ExternalError:  Cuda error(2), out of memory.
  [Advise: The API call failed because it was unable to allocate enough memory to perform the requested operation. ] at (/paddle/paddle/fluid/platform/stream/cuda_stream.cc:36)

The GPU memory blew up … and so did my state of mind …
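
Cuda error(2), out of memory was raised while Paddle was still initializing the CUDA stream for the device context, i.e. before the backbone even ran, which suggests the GPU was already full — most likely earlier cells in the same notebook were still holding their allocations, so restarting the kernel is the blunt fix. To rule out the model definition itself, a sketch of the same smoke test on CPU with a smaller input, reusing the DarkNet53_conv_body class defined earlier in the notebook:

import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable

# Running on CPUPlace sidesteps the GPU entirely; if this forward pass
# succeeds, the OOM came from GPU state (earlier runs still resident),
# not from the backbone definition.
with fluid.dygraph.guard(fluid.CPUPlace()):
    backbone = DarkNet53_conv_body(is_test=False)  # class from the earlier cell
    x = to_variable(np.random.randn(1, 3, 320, 320).astype('float32'))  # smaller than 640x640
    feats = backbone(x)
    # Assumes the tutorial's DarkNet53_conv_body returns three feature maps (C0, C1, C2)
    print([f.shape for f in feats])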
A day later I hacked out a version that could actually run, but during the first training epoch it got stuck here: (screenshot of the stalled output omitted)
I waited two days and it still hadn't moved.
Time to download a YOLO implementation and try it on my own machine -> https://blog.csdn.net/p2469054392/article/details/119023962
