MindSpore: The operation does not support the type [kMetaTypeNone, Tensor...

1 Error Description

1.1 System Environment

Environment(Ascend/GPU/CPU): GPU-GTX3090(24G)
Software Environment:
– MindSpore version (source or binary): 1.7.0
– Python version (e.g., Python 3.7.5): 3.8.13
– OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
– CUDA version : 11.0

1.2 Basic Information

1.2.1 Script

This code is part of a ConvLSTM migration from PyTorch to MindSpore; the line below is where the error is raised:

loss = train_network(data, label)

1.2.2 Error Message

Some personal information has been masked.

[WARNING] ME(124028:139969934345984,MainProcess):2022-07-23-20:21:12.940.089 [mindspore/run_check/_check_version.py:140] MindSpore version 1.7.0 and cuda version 11.0.221 does not match, please refer to the installation guide for version matching information: https://www.mindspore.cn/install
[CRITICAL] ANALYZER(124028,7f4d4a374700,python):2022-07-23-20:21:21.559.937 [mindspore/ccsrc/frontend/operator/composite/multitype_funcgraph.cc:160] GenerateFromTypes] The 'sub' operation does not support the type [kMetaTypeNone, Tensor[Float32]].
The supported types of overload function `sub` is: [Tensor, List], [Tensor, Tuple], [List, Tensor], [Tuple, Tensor], [Tensor, Number], [Number, Tensor], [Tensor, Tensor], [Number, Number].

Traceback (most recent call last):
  File "main.py", line 194, in <module>
    train()
  File "main.py", line 142, in train
    loss = train_network(data, label)
  File "/home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/cell.py", line 586, in __call__
    out = self.compile_and_run(*args)
  File "/home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/cell.py", line 964, in compile_and_run
    self.compile(*inputs)
  File "/home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/cell.py", line 937, in compile
    _cell_graph_executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)
  File "/home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/common/api.py", line 1006, in compile
    result = self._graph_executor.compile(obj, args_list, phase, self._use_vm_mode())
RuntimeError: mindspore/ccsrc/frontend/operator/composite/multitype_funcgraph.cc:160 GenerateFromTypes] The 'sub' operation does not support the type [kMetaTypeNone, Tensor[Float32]].
The supported types of overload function `sub` is: [Tensor, List], [Tensor, Tuple], [List, Tensor], [Tuple, Tensor], [Tensor, Number], [Number, Tensor], [Tensor, Tensor], [Number, Number].

The function call stack (See file '/home/xxxlab/zrj/mindspore/ConvLSTM-PyTorch/conv/rank_0/om/analyze_fail.dat' for more details):
# 0 In file /home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/wrap/cell_wrapper.py(373)
        loss = self.network(*inputs)
               ^
# 1 In file /home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/wrap/cell_wrapper.py(112)
        return self._loss_fn(out, label)
               ^
# 2 In file /home/xxxlab/anaconda2/envs/mindspore/lib/python3.8/site-packages/mindspore/nn/loss/loss.py(313)
        x = F.square(logits - labels)

2 Cause Analysis and Solution

The error points into MindSpore's loss function, which puzzled me at first: I cannot modify MindSpore's source code, and what kind of type is kMetaTypeNone? It means that one of the operands reaching the `sub` operation is None, so there is no overload to dispatch to. After reading an article I learned that MindSpore has both static-graph and dynamic-graph modes, and the default is static-graph mode: all model parameters must be determined in advance, otherwise the static graph cannot be built.
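As a plain-Python analogy (not the MindSpore API), the kMetaTypeNone failure is essentially a None value flowing into the subtraction inside the loss, just as plain Python cannot subtract a number from None:

```python
# Plain-Python analogy of the kMetaTypeNone error: a None operand reaches
# the subtraction inside the MSE loss, and no overload accepts [None, value].

def mse_loss(logits, labels):
    # mirrors F.square(logits - labels) in mindspore/nn/loss/loss.py
    return (logits - labels) ** 2

try:
    mse_loss(None, 3.0)   # network output was None under graph mode
except TypeError as exc:
    print("dispatch failed, like kMetaTypeNone:", exc)
```

In the real error, the None appears because graph compilation could not produce a concrete value at that point; the analogy only illustrates why `sub` has nothing to dispatch to.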

For the difference between static and dynamic graphs, see the MindSpore official documentation. In short, from my perspective: in static-graph mode the entire computation graph is built up front, so that one graph can be reused instead of being rebuilt on every step, which improves speed; the obvious downside is poor flexibility.
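The trade-off can be sketched in plain Python (a toy stand-in, not MindSpore's actual compiler): a "static graph" traces the computation once and replays the cached plan, while dynamic mode would re-execute the Python code on every call.

```python
# Toy sketch of the static-graph idea: compile (trace) once, reuse many times.

class StaticGraph:
    """Trace the wrapped function on first call, then reuse the cached graph."""

    def __init__(self, fn):
        self.fn = fn
        self.traced = None
        self.trace_count = 0

    def __call__(self, *args):
        if self.traced is None:       # compilation happens only on the first call
            self.trace_count += 1
            self.traced = self.fn     # stand-in for real graph compilation
        return self.traced(*args)     # later calls replay the cached graph

def square_error(logits, labels):
    return (logits - labels) ** 2

graph = StaticGraph(square_error)
for step in range(3):                 # traced once, reused on all 3 steps
    loss = graph(2.0, float(step))

print(graph.trace_count)  # -> 1
```

This is also why shapes and values must be fixed when the graph is built: anything the trace cannot pin down ends up as None-like at compile time.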

But my model needs to adapt itself to its input. After patching this error, many other places (such as the Mul operation) raised the same kMetaTypeNone error one after another. Fixing each occurrence treats the symptom rather than the cause, and as long as the model itself is unchanged the problem cannot really be solved.

After reading the MindSpore official documentation I found that MindSpore supports dynamic graphs as well. Since the original framework, PyTorch, is dynamic-graph based, all that is needed is to switch MindSpore to dynamic-graph (PyNative) mode by adding the following code:

context.set_context(mode=context.PYNATIVE_MODE)
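For context, a minimal configuration sketch of where this call usually sits for MindSpore 1.7 (the `device_target` argument is my assumption for a GPU setup like the one above; set it before any Cell is constructed):

```python
# Sketch for MindSpore 1.7: switch to dynamic-graph (PyNative) mode before
# building the network, so ops execute eagerly, PyTorch-style.
from mindspore import context

context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")

# ... then build the ConvLSTM network, loss, and train_network as before;
# loss = train_network(data, label) now runs op-by-op without graph compilation.
```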

3 Summary

Read the MindSpore official documentation more, get a deeper understanding of the framework's principles and the differences between frameworks, and make good use of the community.
