Convolution run time for different convolution-weight data types (float32, float64)
In PyTorch, tensor data types can be roughly divided into eight kinds. Different data types run at different speeds and are supported on different devices.
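As a quick illustration (not from the original post), the floating-point dtypes compared below differ in how many bytes each element occupies, which is one reason their arithmetic costs differ:

```python
import torch

# Element size in bytes for each floating-point dtype.
for dt in (torch.float16, torch.float32, torch.float64):
    t = torch.zeros(4, dtype=dt)
    print(dt, t.element_size())  # 2, 4, and 8 bytes per element
```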
(Table of PyTorch data types; image source: https://blog.csdn.net/xholes/article/details/81667211)
Test environment: CPU
Part 1: 32-bit floating point
import torch
import torch.nn as nn
import time

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # A huge number of output channels so the convolution takes measurable time.
        self.conv = nn.Conv2d(1, 15000000, 3, 1, 1, bias=False)
        # Replace the weights with float32 random values.
        self.conv.weight.data = torch.rand((15000000, 1, 3, 3), dtype=torch.float32)
        print('Convolution weight dtype:', self.conv.weight.dtype)

    def forward(self, x):
        return self.conv(x)

t = torch.rand((1, 1, 5, 5), dtype=torch.float32)
net = Net()
s = time.time()
net(t)
e = time.time()
print('Convolution run time:', e - s)
# Convolution weight dtype: torch.float32
# Convolution run time: 1.6690454483032227
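A single `time.time()` measurement like the one above can be noisy. As a side note, a slightly more careful pattern (a sketch with hypothetical, much smaller layer sizes) is to use `time.perf_counter`, run a warm-up call so one-time initialization is not counted, and average several repetitions:

```python
import time
import torch
import torch.nn as nn

# Hypothetical small convolution, just to demonstrate the timing pattern.
conv = nn.Conv2d(1, 64, 3, 1, 1, bias=False)
x = torch.rand(1, 1, 64, 64)

conv(x)  # warm-up run, excluded from the measurement
start = time.perf_counter()
for _ in range(10):
    conv(x)
elapsed = (time.perf_counter() - start) / 10
print('mean conv time:', elapsed)
```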
Part 2: 64-bit floating point
import torch
import torch.nn as nn
import time

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Same layer as before, but the weights are replaced with float64 values.
        self.conv = nn.Conv2d(1, 15000000, 3, 1, 1, bias=False)
        self.conv.weight.data = torch.rand((15000000, 1, 3, 3), dtype=torch.float64)
        print('Convolution weight dtype:', self.conv.weight.dtype)

    def forward(self, x):
        return self.conv(x)

# The input must also be float64 to match the weight dtype.
t = torch.rand((1, 1, 5, 5), dtype=torch.float64)
net = Net()
s = time.time()
net(t)
e = time.time()
print('Convolution run time:', e - s)
# Convolution weight dtype: torch.float64
# Convolution run time: 3.7898099422454834
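The same float32-vs-float64 gap shows up in other dense operations too. The following is a hedged sketch (sizes and repetition count are arbitrary choices, not from the post) that times a matrix multiply under both dtypes on CPU; float64 is typically noticeably slower:

```python
import time
import torch

def time_matmul(dtype, n=512, reps=5):
    # Time an n x n matrix multiply at the given dtype, averaged over reps.
    a = torch.rand(n, n, dtype=dtype)
    b = torch.rand(n, n, dtype=dtype)
    a @ b  # warm-up
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    return (time.perf_counter() - start) / reps

for dt in (torch.float32, torch.float64):
    print(dt, time_matmul(dt))
```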
Author: Douzaikongcheng
Blog: http://blog.csdn.net/Douzaikongcheng
QQ: 973912428
Repost notice: please credit the source and include a link to this blog. Thank you.