OSV_q Expected all tensors to be on the same device, but found at least two devices, cuda:0

http://t.csdn.cn/OAaJR

====>> Sun Mar 13 16:37:38 2022   Pass time: 0:00:31.486364
Traceback (most recent call last):
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 390, in <module>
    train(opt)
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 309, in train
    pdf1 = miu2 * (lap * fftn(f - output_net - b2)) - karma * (conjoDx * fftn(q1) + conjoDy * fftn(q2))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Process finished with exit code 1
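This error simply means that two tensors in one expression live on different devices, so at least one operand in the pdf1 line is still on the CPU. A minimal sketch that reproduces the same message (the names a and b here are made up):

import torch

if torch.cuda.is_available():
    a = torch.randn(3, device='cuda')   # lives on cuda:0
    b = torch.randn(3)                  # created on the CPU by default
    c = a * b                           # RuntimeError: Expected all tensors to be on the same device ...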

Back to the earlier problem. I decided to unify the tensor type (and device) of the parameters that go into pdf1.

First, convert just lap to a tensor:

lap = torch.as_tensor(lap)
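Worth noting (background knowledge, not something the traceback says): torch.as_tensor keeps the data wherever it already lives, so converting a NumPy array this way always gives a CPU tensor, and NumPy's default precision makes it float64. A small sketch:

import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

a = np.ones((3, 3))
t = torch.as_tensor(a)
print(t.device, t.dtype)             # cpu torch.float64 (inherited from NumPy)

t = torch.as_tensor(a).to(device)    # an explicit move is still needed
print(t.device)                      # cuda:0 when a GPU is available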

Then I also changed the insides of the FFT helper functions I had defined to use torch:

def fftn(t):
    return torch.fft.fftn(t)


def ifftn(t):
    return torch.fft.ifftn(t)
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 310, in train
    pdf1 = miu2 * (lap * fftn(f - output_net - b2)) - karma * (conjoDx * fftn(q1) + conjoDy * fftn(q2))
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 130, in fftn
    return torch.fft.fftn(t)
AttributeError: 'builtin_function_or_method' object has no attribute 'fftn'

So I changed it back, and the FFT stays in NumPy form. Does that mean my starting point was wrong?

I should instead track down which tensor is actually on the CPU. Argh, I had forgotten to add this:

lap = torch.as_tensor(lap).to(device)

After that, the device mismatch still wasn't resolved, so I kept editing.

Traceback (most recent call last):
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 389, in <module>
    train(opt)
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 312, in train
    z = ifft2(pdf1/demo1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Process finished with exit code 1

I added this, and it is still wrong; the error is the same:

demo1 = torch.as_tensor(demo1).to(device)

I still need to try switching the FFT to the torch version.

AttributeError: 'builtin_function_or_method' object has no attribute 'fft2'

Then I learned that you must first import torch.fft before using it. After adding the import, a new error:

  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 311, in train
    aa1 = fft2(f - output_net - b2).to(device)
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 131, in fft2
    return torch.fft.fft2(t)
AttributeError: module 'torch.fft' has no attribute 'fft2'

Process finished with exit code 1

It seems it should be:

def fft2(t):
    return torch.fft.fft2(t, dim=(-2, -1))


def ifft2(t):
    return torch.fft.ifft2(t, dim=(-2, -1))

Is it a torch version problem? I tried this instead:

def fft2(t):
    return torch.fft2(t, dim=(-2, -1))


def ifft2(t):
    return torch.ifft2(t, dim=(-2, -1))

Neither works. So frustrating! I am giving up on converting to the torch form.
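My suspicion (I have not checked the exact installed version) is that this really is a torch version issue: on builds around 1.7, torch.fft still refers to the legacy function unless the submodule is imported explicitly, and fft2/ifft2 were only added in later releases, while fftn/ifftn over the last two dimensions do the same job. A sketch of the helpers under that assumption:

import torch
import torch.fft   # on ~1.7.x this import is needed so torch.fft means the module, not the legacy function


def fft2(t):
    # FFT over the last two dimensions; equivalent to torch.fft.fft2 on newer releases
    return torch.fft.fftn(t, dim=(-2, -1))


def ifft2(t):
    return torch.fft.ifftn(t, dim=(-2, -1))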

It is always this part that causes the problem. What on earth is going on? One run this line errors, the next run a different one, but it is always inside the pdf1 computation:

        #  solve z problem
        lap = -(conjoDx * otfDx + conjoDy * otfDy)
        lap = torch.as_tensor(lap).to(device)
        aa1 = fft2(f - output_net - b2)
        aa2 = conjoDx * fft2(q1) + conjoDy * fft2(q2)
        aa1 = torch.as_tensor(aa1).to(device)
        aa2 = torch.as_tensor(aa2).to(device)

        pdf1 = miu2 * (lap * aa1) - karma * aa2
        pdf1 = torch.as_tensor(pdf1).to(device)
        demo1 = miu2 * lap ** 2 + karma * lap + epsilong
        demo1 = torch.as_tensor(demo1).to(device)
        z = ifft2(pdf1/demo1)
        z = z.real
        #  solve z problem
        lap = -(conjoDx * otfDx + conjoDy * otfDy)
        lap = torch.as_tensor(lap).to(device)
        aa1 = fft2(f - output_net - b2)
        aa2 = conjoDx * fft2(q1) + conjoDy * fft2(q2)
        aa1 = torch.as_tensor(aa1).to(device)
        aa2 = torch.as_tensor(aa2).to(device)

        pdf1 = miu2 * (lap * aa1) - karma * aa2
        demo1 = miu2 * lap ** 2 + karma * lap + epsilong
        demo1 = torch.as_tensor(demo1).to(device)
        z = ifft2(pdf1/demo1)
        z = z.real


Traceback (most recent call last):
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 396, in <module>
    train(opt)
  File "C:/Users/shang/Desktop/STDN_LI/TVGnet/OSV_q.py", line 310, in train
    aa1 = fft2(f - output_net - b2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Process finished with exit code 1

Ugh, this is driving me crazy.

http://t.csdn.cn/J17gX      http://t.csdn.cn/9uGHG      http://t.csdn.cn/ktEzj

I used print() to show the device of the several variables in the failing line, and found that b2 prints with dtype=torch.float64 and no device tag, unlike the other two parameters:

====>> Sun Mar 13 21:26:46 2022   Pass time: 0:00:32.427510
tensor([[[[ 42008.1379, -17366.5196,  42060.2720,  ..., -17473.4500,
         .., -33175.0087,
         ........
          [ 47419.3458, -47191.5486,  47428.5796,  ..., -47141.4389,
            23791.7163,    159.4638]]]], dtype=torch.float64)

Here is f:

tensor([[[[ 96, 106, 119,  ...,  76,  92, 105],
         ........
          [131, 120, 125,  ..., 156, 142, 160]]]], device='cuda:0')

And output_net:

tensor([[[[  5575.7866,  -2183.3752,   7809.6230,  ...,  -1697.3960,
            .......
             4803.1553,  -3239.5381]]]], device='cuda:0',
       grad_fn=<AddBackward0>)
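Dumping the whole tensors works, but a more compact check is to print only the device and dtype of every name that feeds into the pdf1 block. A rough sketch, meant to be dropped in right above the failing line (the variable list is just the names used there):

for name, t in [('f', f), ('output_net', output_net), ('b2', b2),
                ('q1', q1), ('q2', q2), ('lap', lap)]:
    if torch.is_tensor(t):
        print(name, t.device, t.dtype)
    else:
        print(name, type(t))   # e.g. still a NumPy array that was never converted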

At this point, my b2 was defined like this:

    b1 = torch.zeros(f.shape)
    b1 = b1.to(device)
    b2 = torch.zeros(f.shape)
    b2 = b2.to(device)

I'll keep digging tomorrow; I'm too tired. At least I found the problem today.

http://t.csdn.cn/fMFRz  Changed it to this form; no good:

b2 = b2.to(device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'))

http://t.csdn.cn/uXHFH  Like this? Also no good:

    b2 = torch.zeros(f.shape)
    b2 = b2.cuda(0)

Originally, when b2 is not moved to CUDA, it prints like this:

tensor([[[[0., 0., 0.,  ..., 0., 0., 0.],
          ......
        [0., 0., 0.,  ..., 0., 0., 0.]]]])

I'll pick this up again tomorrow.

http://t.csdn.cn/jBv4x

When I commented out the line f = f.to(device), b2 turned out to be on CUDA after all; of course, f then no longer was, and printed like this:

tensor([[[[ 96, 106, 119,  ...,  76,  92, 105],
        ........
        [131, 120, 125,  ..., 156, 142, 160]]]])

It looked like I had found the problem, so I gave f an intermediate variable, but that didn't work either:

    f1 = img_in.clone()
    f1 = f1.permute(2, 0, 1)
    f1 = f1.unsqueeze(0).type(torch.LongTensor)
    f = f1.to(device)

In the end, I moved the definition of b2 so that it sits right before the place where b2 is used (presumably something between the old definition and the failing line had been overwriting b2 with a CPU float64 tensor), and for simplicity I also changed how it is defined:

q1 = torch.zeros([1, 3, 320, 320])
q1 = q1.to(device)
b1 = b2 = q2 = q1
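A side note on this shortcut: b1 = b2 = q2 = q1 makes all four names refer to the same tensor object, so an in-place update to any one of them would show up in the others. If they ever need to stay independent, a sketch of creating them separately, directly on the device:

q1 = torch.zeros([1, 3, 320, 320], device=device)
q2 = torch.zeros([1, 3, 320, 320], device=device)
b1 = torch.zeros([1, 3, 320, 320], device=device)
b2 = torch.zeros([1, 3, 320, 320], device=device)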

It is running now, and has been for quite a while. I don't know whether results will come out; I just hope it doesn't error again.

This error is finally solved. Happy!
