Using PyTorch like NumPy
PyTorch describes itself as a library of tensors with strong GPU acceleration and dynamic network construction. Its core building block is the tensor, so you can use PyTorch much like NumPy: many operations mirror their NumPy counterparts, but tensors can also run on the GPU, which can make them many times faster than NumPy for large workloads.
import torch
import numpy as np
# create a NumPy ndarray
numpy_tensor = np.random.randn(10, 20)
# there are two ways to convert a NumPy ndarray into a tensor
pytorch_tensor1 = torch.Tensor(numpy_tensor)
pytorch_tensor2 = torch.from_numpy(numpy_tensor)
pytorch_tensor1
Columns 0 to 9
-2.0670 0.4574 0.7862 0.8732 -0.9464 -0.0206 -1.9874 -0.1891 -1.3577 1.1978
0.2209 -0.4673 -0.8982 1.2152 -0.0394 0.8599 -0.7466 0.4687 -0.4374 -0.4232
-0.0314 0.6869 -0.1599 -0.1459 0.7254 1.3207 -0.5690 1.3090 0.2951 -1.4267
0.4429 -0.7772 -1.5120 2.1463 -0.7037 -1.3325 1.2233 0.3961 1.9587 0.4194
0.8311 -0.6408 0.0540 1.6393 -1.5806 -0.1183 -1.4754 -0.0811 2.3746 0.0032
-0.4014 -0.8829 1.5024 -0.4879 0.6437 -0.3370 0.0045 0.1528 -1.5533 1.6763
1.4980 1.2095 0.9052 -0.5167 -1.9845 -3.0615 0.7939 1.0424 0.4497 -1.2904
0.6492 -2.0714 0.6908 1.7376 -0.9549 -0.9715 -0.6485 1.1825 -0.5381 0.3943
-0.1166 1.2893 0.5930 -1.1232 -1.0911 0.7919 1.0056 -0.3783 0.5331 1.6321
-0.6201 -1.0694 -0.5818 1.4369 -2.8833 -2.7284 0.2999 0.3644 0.3053 -0.4270
Columns 10 to 19
-0.4787 -1.8597 -0.9414 0.7144 0.7075 -0.1787 0.2101 -1.3990 0.9888 -0.9092
-0.4320 -1.2586 -2.5937 -0.0585 0.6996 -1.7012 -1.6244 -2.3882 -0.9814 0.0937
-0.0107 -0.0873 0.0138 -2.4302 0.0393 0.6906 -1.7830 -1.6335 -2.7890 -0.9228
-0.8166 0.3297 0.3067 0.2694 -0.5977 -0.0958 -0.1266 1.0866 1.2712 -1.1265
-1.4260 -1.1283 -0.2183 2.0082 0.4752 1.1883 0.5993 0.5784 -1.0918 1.2907
0.0668 1.3361 -1.5621 -0.9930 0.0123 0.4356 -1.1068 1.4570 0.3982 -0.5652
-0.4183 -0.2524 0.9697 1.9701 -1.7895 -0.3444 -0.6599 -0.8356 1.6099 0.9891
-0.4030 -0.1860 -1.5744 0.3837 0.7988 0.3400 1.4014 -0.4244 0.8060 -1.7322
-0.4031 0.6796 0.1858 1.3451 -0.1065 -0.2587 -0.6197 -0.4825 1.6076 -0.1510
0.4240 -1.3695 0.2228 0.2487 -0.0591 -0.9889 -0.8329 0.6485 -0.0455 -0.3615
[torch.FloatTensor of size 10x20]
pytorch_tensor2
Columns 0 to 9
-2.0670 0.4574 0.7862 0.8732 -0.9464 -0.0206 -1.9874 -0.1891 -1.3577 1.1978
0.2209 -0.4673 -0.8982 1.2152 -0.0394 0.8599 -0.7466 0.4687 -0.4374 -0.4232
-0.0314 0.6869 -0.1599 -0.1459 0.7254 1.3207 -0.5690 1.3090 0.2951 -1.4267
0.4429 -0.7772 -1.5120 2.1463 -0.7037 -1.3325 1.2233 0.3961 1.9587 0.4194
0.8311 -0.6408 0.0540 1.6393 -1.5806 -0.1183 -1.4754 -0.0811 2.3746 0.0032
-0.4014 -0.8829 1.5024 -0.4879 0.6437 -0.3370 0.0045 0.1528 -1.5533 1.6763
1.4980 1.2095 0.9052 -0.5167 -1.9845 -3.0615 0.7939 1.0424 0.4497 -1.2904
0.6492 -2.0714 0.6908 1.7376 -0.9549 -0.9715 -0.6485 1.1825 -0.5381 0.3943
-0.1166 1.2893 0.5930 -1.1232 -1.0911 0.7919 1.0056 -0.3783 0.5331 1.6321
-0.6201 -1.0694 -0.5818 1.4369 -2.8833 -2.7284 0.2999 0.3644 0.3053 -0.4270
Columns 10 to 19
-0.4787 -1.8597 -0.9414 0.7144 0.7075 -0.1787 0.2101 -1.3990 0.9888 -0.9092
-0.4320 -1.2586 -2.5937 -0.0585 0.6996 -1.7012 -1.6244 -2.3882 -0.9814 0.0937
-0.0107 -0.0873 0.0138 -2.4302 0.0393 0.6906 -1.7830 -1.6335 -2.7890 -0.9228
-0.8166 0.3297 0.3067 0.2694 -0.5977 -0.0958 -0.1266 1.0866 1.2712 -1.1265
-1.4260 -1.1283 -0.2183 2.0082 0.4752 1.1883 0.5993 0.5784 -1.0918 1.2907
0.0668 1.3361 -1.5621 -0.9930 0.0123 0.4356 -1.1068 1.4570 0.3982 -0.5652
-0.4183 -0.2524 0.9697 1.9701 -1.7895 -0.3444 -0.6599 -0.8356 1.6099 0.9891
-0.4030 -0.1860 -1.5744 0.3837 0.7988 0.3400 1.4014 -0.4244 0.8060 -1.7322
-0.4031 0.6796 0.1858 1.3451 -0.1065 -0.2587 -0.6197 -0.4825 1.6076 -0.1510
0.4240 -1.3695 0.2228 0.2487 -0.0591 -0.9889 -0.8329 0.6485 -0.0455 -0.3615
[torch.DoubleTensor of size 10x20]
Note the difference between the two conversions: torch.Tensor() copies the data and casts it to PyTorch's default type (torch.FloatTensor), while torch.from_numpy() preserves the ndarray's dtype (float64 here, hence torch.DoubleTensor) and shares memory with the original array.
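A minimal sketch (variable names are ours) that makes the cast and the memory sharing visible:
a = np.random.randn(3, 3)   # a float64 ndarray
t1 = torch.Tensor(a)        # copied and cast to the default FloatTensor
t2 = torch.from_numpy(a)    # stays float64 (DoubleTensor) and shares memory with a
print(t1.type(), t2.type())
a[0, 0] = 100.0
print(t2[0, 0])             # 100.0 -- t2 sees the change, t1 does not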
We can also go the other way and convert a PyTorch tensor back to a NumPy ndarray:
# if the PyTorch tensor lives on the CPU
numpy_array = pytorch_tensor1.numpy()
numpy_array
array([[-2.0670078 , 0.4574322 , 0.78623015, 0.87319034, -0.9463573 ,
-0.02064213, -1.9873619 , -0.18911053, -1.3577191 , 1.1977602 ,
-0.47868684, -1.8597033 , -0.94142354, 0.7144483 , 0.70752174,
-0.17867836, 0.21005797, -1.3989754 , 0.9888225 , -0.90917087],
[ 0.22085391, -0.46731803, -0.8981944 , 1.2151526 , -0.03942193,
0.85994005, -0.7466001 , 0.46871442, -0.43739757, -0.42324024,
-0.431979 , -1.2586004 , -2.5936642 , -0.05846936, 0.6995592 ,
-1.701165 , -1.6244463 , -2.3881698 , -0.98135054, 0.0937306 ],
[-0.03139093, 0.6868822 , -0.1599226 , -0.14594954, 0.7253733 ,
1.3207114 , -0.5690447 , 1.3090258 , 0.29505223, -1.4266868 ,
-0.01069067, -0.08727196, 0.01381475, -2.4301734 , 0.0393045 ,
0.6906265 , -1.7830491 , -1.6335276 , -2.789033 , -0.9228238 ],
[ 0.44289353, -0.77721107, -1.5120319 , 2.146307 , -0.70367444,
-1.3324662 , 1.2233036 , 0.39612436, 1.9586719 , 0.41938668,
-0.8165528 , 0.32971895, 0.30671978, 0.2694077 , -0.5977015 ,
-0.09583385, -0.12655513, 1.0865594 , 1.271182 , -1.1264527 ],
[ 0.83106196, -0.6408449 , 0.053988 , 1.6393039 , -1.5806218 ,
-0.11833967, -1.4754046 , -0.08112453, 2.3746264 , 0.00317584,
-1.4259746 , -1.12826 , -0.21832553, 2.0081503 , 0.47522262,
1.1882725 , 0.59925425, 0.57835764, -1.0917909 , 1.2907238 ],
[-0.40139404, -0.88290423, 1.5024137 , -0.48787385, 0.6436931 ,
-0.33698204, 0.00452452, 0.15282498, -1.5533499 , 1.6762884 ,
0.0668426 , 1.3360746 , -1.5621065 , -0.9930457 , 0.01225617,
0.43561974, -1.1067743 , 1.4569649 , 0.3981682 , -0.56515497],
[ 1.4980279 , 1.2095312 , 0.9051867 , -0.5166721 , -1.9844626 ,
-3.0615375 , 0.7939452 , 1.0423946 , 0.44967005, -1.2904087 ,
-0.41833445, -0.252407 , 0.9697007 , 1.9700882 , -1.7894937 ,
-0.34444782, -0.65994084, -0.835571 , 1.609947 , 0.98911875],
[ 0.6492442 , -2.0714116 , 0.69075614, 1.7375972 , -0.95488685,
-0.9715489 , -0.6485263 , 1.1824633 , -0.53809124, 0.39428484,
-0.40297785, -0.18602072, -1.5743892 , 0.3836741 , 0.79877853,
0.3400372 , 1.4014072 , -0.4243606 , 0.80603266, -1.7322252 ],
[-0.11660352, 1.2892654 , 0.59303194, -1.1232399 , -1.0910732 ,
0.79188424, 1.005646 , -0.37825948, 0.53310025, 1.6321343 ,
-0.40310478, 0.679564 , 0.18582046, 1.3450966 , -0.10653992,
-0.2587161 , -0.6197169 , -0.48248494, 1.6075559 , -0.15101816],
[-0.62011486, -1.0694433 , -0.58177733, 1.4369277 , -2.8832576 ,
-2.728376 , 0.2999117 , 0.36444157, 0.3053372 , -0.42701676,
0.4239991 , -1.3695202 , 0.22281511, 0.2486717 , -0.05909026,
-0.988939 , -0.8329022 , 0.6485032 , -0.04548256, -0.36154068]],
dtype=float32)
numpy_array = pytorch_tensor2.cpu().numpy()
numpy_array
array([[-2.06700767, 0.45743223, 0.78623012, 0.87319037, -0.94635732,
-0.02064213, -1.98736194, -0.18911053, -1.35771901, 1.19776026,
-0.47868683, -1.85970334, -0.94142355, 0.71444827, 0.70752174,
-0.17867837, 0.21005797, -1.3989754 , 0.9888225 , -0.90917085],
[ 0.22085391, -0.46731804, -0.8981944 , 1.21515262, -0.03942193,
0.85994006, -0.74660007, 0.46871441, -0.43739757, -0.42324023,
-0.43197901, -1.25860032, -2.59366406, -0.05846936, 0.69955919,
-1.70116499, -1.6244463 , -2.38816985, -0.98135054, 0.0937306 ],
[-0.03139093, 0.68688218, -0.1599226 , -0.14594954, 0.72537333,
1.3207114 , -0.5690447 , 1.30902571, 0.29505224, -1.42668673,
-0.01069067, -0.08727196, 0.01381475, -2.43017332, 0.0393045 ,
0.69062648, -1.78304914, -1.63352769, -2.78903287, -0.92282379],
[ 0.44289352, -0.77721105, -1.51203186, 2.14630688, -0.70367446,
-1.33246623, 1.22330352, 0.39612436, 1.95867193, 0.41938668,
-0.81655279, 0.32971895, 0.30671979, 0.26940768, -0.59770149,
-0.09583385, -0.12655513, 1.08655946, 1.27118197, -1.12645271],
[ 0.83106198, -0.6408449 , 0.053988 , 1.63930398, -1.58062182,
-0.11833966, -1.47540457, -0.08112453, 2.3746264 , 0.00317584,
-1.42597458, -1.12826006, -0.21832552, 2.00815038, 0.4752226 ,
1.18827242, 0.59925425, 0.57835764, -1.0917909 , 1.29072378],
[-0.40139404, -0.88290424, 1.50241377, -0.48787386, 0.64369309,
-0.33698205, 0.00452452, 0.15282498, -1.5533499 , 1.67628842,
0.0668426 , 1.33607464, -1.56210645, -0.99304571, 0.01225617,
0.43561974, -1.1067743 , 1.4569648 , 0.3981682 , -0.565155 ],
[ 1.49802794, 1.20953121, 0.9051867 , -0.51667205, -1.98446267,
-3.06153742, 0.79394517, 1.04239462, 0.44967005, -1.29040868,
-0.41833445, -0.25240701, 0.9697007 , 1.97008822, -1.78949365,
-0.34444782, -0.65994084, -0.83557099, 1.60994695, 0.98911874],
[ 0.64924419, -2.07141151, 0.69075615, 1.73759726, -0.95488685,
-0.9715489 , -0.64852633, 1.18246331, -0.53809125, 0.39428485,
-0.40297784, -0.18602072, -1.57438925, 0.3836741 , 0.79877854,
0.3400372 , 1.40140729, -0.42436059, 0.80603268, -1.73222518],
[-0.11660351, 1.28926543, 0.59303196, -1.12323984, -1.09107319,
0.79188423, 1.00564602, -0.37825947, 0.53310023, 1.63213435,
-0.40310477, 0.67956399, 0.18582045, 1.34509654, -0.10653992,
-0.25871611, -0.61971689, -0.48248494, 1.60755585, -0.15101816],
[-0.62011489, -1.06944338, -0.58177736, 1.43692763, -2.88325766,
-2.72837587, 0.29991171, 0.36444159, 0.30533718, -0.42701677,
0.42399911, -1.36952017, 0.22281511, 0.24867169, -0.05909026,
-0.98893901, -0.83290217, 0.64850318, -0.04548256, -0.36154068]])
Note that a tensor on the GPU cannot be converted to a NumPy ndarray directly; call .cpu() first to move it back to the CPU.
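A small helper (our sketch, not part of the original code) that converts either a CPU or a GPU tensor, relying on the fact that .cpu() simply returns the tensor itself when it is already on the CPU:
def to_numpy(t):
    # .cpu() is a no-op for CPU tensors, so this handles both cases
    return t.cpu().numpy()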
Accelerating PyTorch Tensors with the GPU
A tensor can be put on the GPU in either of the following two ways:
# the first way is to define a CUDA data type
dtype = torch.cuda.FloatTensor
gpu_tensor = torch.randn(10, 20).type(dtype)
# the second way is simpler and recommended
# gpu_tensor = torch.randn(10, 20).cuda(0) # put the tensor on the first GPU
# gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU
When the first method moves a tensor to the GPU it also casts the data to the type you defined, while the second method moves the tensor to the GPU and keeps its type unchanged.
The recommended practice is to fix the data type when the tensor is defined and then use the second method to move it to the GPU.
(On my GPU, a GTX 960M, the second method is not supported; a guarded alternative is sketched below.)
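Since not every machine has a usable GPU, a defensive pattern (our sketch) is to check availability before calling .cuda():
# only move to the GPU when one is actually available
x = torch.randn(10, 20)
if torch.cuda.is_available():
    x = x.cuda()  # same data type as before, now on the default GPU
print(x.type())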
# moving a tensor back to the CPU is just as simple
cpu_tensor = gpu_tensor.cpu()
We can also inspect some of a tensor's attributes:
print(pytorch_tensor1.shape)
torch.Size([10, 20])
print(pytorch_tensor2.size())
torch.Size([10, 20])
# get the tensor's data type
print(pytorch_tensor1.type())
torch.FloatTensor
# get the number of dimensions of the tensor
print(pytorch_tensor1.dim())
2
# get the total number of elements in the tensor
print(pytorch_tensor1.numel())
200
Exercise
Consult the official documentation on tensor data types, create a randomly initialized tensor of type float64 and size 3x2, convert it to a NumPy ndarray, and print its data type.
# Answer
# using the GPU version
x = torch.randn(3,2).type(torch.cuda.DoubleTensor)
x_array = x.cpu().numpy()
print(x_array.dtype)
float64
Tensor operations
The tensor API closely mirrors NumPy's: if you are comfortable with NumPy operations, tensors will feel almost identical. Below we walk through some of them.
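To see how directly the API carries over, here is a short sketch pairing a few tensor constructors with their NumPy counterparts (the pairings are our illustration):
print(torch.zeros(2, 3))        # like np.zeros((2, 3))
print(torch.eye(3))             # like np.eye(3)
print(torch.linspace(0, 1, 5))  # like np.linspace(0, 1, 5)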
x = torch.ones(2,2)
print(x)
1 1
1 1
[torch.FloatTensor of size 2x2]
x.type()
'torch.FloatTensor'
# convert it to an integer (long) tensor
x = x.long()
print(x)
1 1
1 1
[torch.LongTensor of size 2x2]
# convert it back to float
x = x.float()
print(x)
1 1
1 1
[torch.FloatTensor of size 2x2]
x = torch.randn(4,3)
print(x)
-0.4872 0.4592 -0.5299
1.2874 -1.8578 -0.6912
-0.3919 0.0527 0.1029
0.4696 0.2418 -0.1918
[torch.FloatTensor of size 4x3]
# take the maximum of each row (reduce along dim=1)
max_value, max_idx = torch.max(x, dim=1)
max_value
0.4592
1.2874
0.1029
0.4696
[torch.FloatTensor of size 4]
max_idx
1
0
2
0
[torch.LongTensor of size 4]
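Because torch.max along a dimension returns both the values and their indices, it doubles as an argmax. A common use (our sketch) is picking the predicted class from a matrix of scores:
scores = torch.randn(5, 10)              # 5 samples, 10 class scores each
_, predicted = torch.max(scores, dim=1)  # index of the largest score per row
print(predicted)                         # a LongTensor of 5 class indices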
# sum x along each row (dim=1)
sum_x = torch.sum(x, dim=1)
print(sum_x)
-0.5580
-1.2617
-0.2362
0.5196
[torch.FloatTensor of size 4]
# add or remove dimensions
print(x.shape)
x = x.unsqueeze(0)
print(x.shape)
torch.Size([4, 3])
torch.Size([1, 4, 3])
x = x.unsqueeze(1) # insert a new dimension at position 1
print(x.shape)
torch.Size([1, 1, 4, 3])
x = x.squeeze(0) # remove the first dimension
print(x.shape)
torch.Size([1, 4, 3])
x = x.squeeze() # remove all dimensions of size 1
print(x.shape)
torch.Size([4, 3])
x = torch.randn(3,4,5)
print(x.shape)
# use permute and transpose to rearrange dimensions
x = x.permute(1,0,2)
print(x.shape)
# transpose swaps two of the tensor's dimensions
x = x.transpose(0,2)
print(x.shape)
torch.Size([3, 4, 5])
torch.Size([4, 3, 5])
torch.Size([5, 3, 4])
# use view to reshape the tensor
x = torch.randn(3,4,5)
x = x.view(-1, 5)
# -1 means that dimension's size is inferred; 5 fixes the second dimension at 5
print(x.shape)
# reshape again to size (3, 20)
x = x.view(3,20)
print(x.shape)
torch.Size([12, 5])
torch.Size([3, 20])
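One caveat worth knowing (our note, not in the original text): view requires contiguous memory, and transpose/permute break contiguity, so a .contiguous() call is often needed in between:
x = torch.randn(3, 4, 5)
y = x.transpose(0, 2)        # shape (5, 4, 3), but no longer contiguous
# y.view(-1)                 # this would raise a runtime error
z = y.contiguous().view(-1)  # copy into contiguous memory, then flatten
print(z.shape)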
x = torch.randn(3,4)
y = torch.randn(3,4)
# add the two tensors
z = x + y
# z = torch.add(x,y)
In addition, most PyTorch operations have an in-place version, which modifies the tensor directly instead of allocating new memory. The convention is simple: append an underscore _ to the operation's name, for example:
x = torch.ones(3, 3)
print(x.shape)
# in-place unsqueeze
x.unsqueeze_(0)
print(x.shape)
# in-place transpose
x.transpose_(1,0)
print(x.shape)
torch.Size([3, 3])
torch.Size([1, 3, 3])
torch.Size([3, 1, 3])
x = torch.ones(3, 3)
y = torch.ones(3, 3)
print(x)
x.add_(y)
print(x)
1 1 1
1 1 1
1 1 1
[torch.FloatTensor of size 3x3]
2 2 2
2 2 2
2 2 2
[torch.FloatTensor of size 3x3]
Exercise
Browse the official documentation for more of the tensor API, then implement the following:
Create a 4x4 float32 matrix of all ones and set its central 2x2 block to 2.
Reference output (shown after the code):
# Answer
x = torch.ones(4, 4).float()
x[1:3, 1:3] = 2
print(x)
1 1 1 1
1 2 2 1
1 2 2 1
1 1 1 1
[torch.FloatTensor of size 4x4]
Variable
Tensors are a fine building block, but on their own they are not enough to build neural networks: we need tensors that can form a computation graph, and that is what Variable provides. A Variable wraps a tensor and supports the same operations, but every Variable carries three attributes: .data, the underlying tensor itself; .grad, the gradient with respect to that tensor; and .grad_fn, which records the operation that produced the Variable.
# Variable is imported like this
from torch.autograd import Variable
x_tensor = torch.randn(10, 5)
y_tensor = torch.randn(10, 5)
# wrap the tensors in Variables
x = Variable(x_tensor, requires_grad=True) # Variables do not require gradients by default, so we request them explicitly
y = Variable(y_tensor, requires_grad=True)
z = torch.sum(x+y)
print(z.data)
print(z.grad_fn)
14.4502
[torch.FloatTensor of size 1]
<SumBackward0 object at 0x7fb3787ed518>
Above we printed the tensor value held in z, and grad_fn tells us that z was produced by a sum operation.
# compute the gradients of x and y
z.backward()
print(x.grad)
print(y.grad)
Variable containing:
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
[torch.FloatTensor of size 10x5]
Variable containing:
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
[torch.FloatTensor of size 10x5]
Through .grad we obtained the gradients of x and y. This uses PyTorch's automatic differentiation, which is very convenient; the next section covers autograd in detail.
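One behavior worth knowing before then (our note): gradients accumulate across backward() calls rather than being overwritten, so they are usually zeroed between passes. A minimal sketch:
x = Variable(torch.ones(2, 2), requires_grad=True)
z = torch.sum(3 * x)
z.backward()
print(x.grad)        # all 3s
z = torch.sum(3 * x)
z.backward()
print(x.grad)        # all 6s: the new gradient was added to the old one
x.grad.data.zero_()  # reset before the next backward pass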
Exercise
Build the function y = x^2 and compute its derivative at x = 2.
Expected output: 4
The graph of y = x^2 looks like this:
import matplotlib.pyplot as plt
x = np.arange(-3, 3.01, 0.1)
y = x**2
plt.plot(x,y)
plt.plot(2,4,'ro')
plt.show()
[Figure: plot of y = x^2 with the point (2, 4) marked in red]
x = Variable(torch.FloatTensor([2]), requires_grad=True)
y = x**2
y.backward()
print(x.grad)
Variable containing:
4
[torch.FloatTensor of size 1]
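As a sanity check (our sketch), the same pattern works at any point; analytically dy/dx = 2x, so at x = 3 we expect 6:
x = Variable(torch.FloatTensor([3]), requires_grad=True)
y = x ** 2
y.backward()
print(x.grad)  # 6, matching 2 * x at x = 3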
---------------------
Author: 秦景坤
Source: CSDN
Original: https://blog.csdn.net/qjk19940101/article/details/79555653
Copyright notice: this is the blogger's original work; please include a link to the original post when reposting.