1. Deep Learning with PyTorch: A 60 Minute Blitz
Installation
conda install pytorch torchvision -c pytorch
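After installation, a quick sanity check (a hypothetical snippet, not part of the original tutorial) confirms that the package imports and reports its version and GPU availability:

```python
import torch

print(torch.__version__)          # installed PyTorch version string
print(torch.cuda.is_available())  # True only if a CUDA-capable GPU is usable
```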
1.1 WHAT IS PYTORCH?
1.1.1 Tensors
Uninitialized matrix
An uninitialized matrix is declared, but does not contain definite known values before it is used. When an uninitialized matrix is created, whatever values were in the allocated memory at the time will appear as the initial values.
x = torch.empty(5, 3)
print(x)
x = torch.empty(4, 3)
print(x)
tensor([[1.8361e+25, 1.4603e-19, 6.4069e+02],
[2.7489e+20, 1.5444e+25, 1.6217e-19],
[7.0062e+22, 1.6795e+08, 4.7423e+30],
[4.7393e+30, 9.5461e-01, 4.4377e+27],
[1.7975e+19, 4.6894e+27, 7.9463e+08]])
tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[1.4013e-45, 0.0000e+00, 1.4013e-45],
[0.0000e+00, 1.4013e-45, 0.0000e+00]])
Initialized matrix
x = torch.rand(5, 3)
print(x)
tensor([[0.6975, 0.5149, 0.2355],
[0.4268, 0.5718, 0.9926],
[0.1971, 0.9001, 0.8343],
[0.3220, 0.3697, 0.6189],
[0.7503, 0.0220, 0.4351]])
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])
# new_ones reuses the properties of x (e.g. device) unless overridden;
# here dtype is explicitly set to double
x = x.new_ones(5, 3, dtype=torch.double)
print(x)
x = torch.rand_like(x, dtype=torch.float)  # same size as x, new dtype
print(x)
print(x.size())
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[3.8165e-01, 9.2900e-01, 5.0699e-01],
[4.1205e-01, 1.6332e-01, 6.0987e-01],
[7.0150e-01, 7.5135e-01, 2.3191e-02],
[3.4211e-01, 5.0414e-04, 9.3968e-01],
[5.9820e-01, 5.1819e-01, 8.1101e-01]])
torch.Size([5, 3])
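Note that torch.Size is a subclass of Python's tuple, so it supports all tuple operations. A short sketch:

```python
import torch

x = torch.rand(5, 3)
size = x.size()        # torch.Size([5, 3]), a tuple subclass
rows, cols = size      # tuple unpacking works
print(rows, cols)      # 5 3
print(size == (5, 3))  # True: compares equal to a plain tuple
```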
1.1.2 Operations
Addition
(1) Three equivalent ways
y = torch.rand(5, 3)
print(y)
print(x + y)
tensor([[0.2133, 0.4834, 0.8666],
[0.8965, 0.0018, 0.8159],
[0.9363, 0.4279, 0.6789],
[0.6467, 0.1484, 0.0118],
[0.9581, 0.2033, 0.3418]])
tensor([[0.5949, 1.4124, 1.3736],
[1.3086, 0.1651, 1.4258],
[1.6378, 1.1793, 0.7021],
[0.9888, 0.1489, 0.9515],
[1.5563, 0.7214, 1.1528]])
print(torch.add(x, y))
tensor([[0.5949, 1.4124, 1.3736],
[1.3086, 0.1651, 1.4258],
[1.6378, 1.1793, 0.7021],
[0.9888, 0.1489, 0.9515],
[1.5563, 0.7214, 1.1528]])
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
tensor([[0.5949, 1.4124, 1.3736],
[1.3086, 0.1651, 1.4258],
[1.6378, 1.1793, 0.7021],
[0.9888, 0.1489, 0.9515],
[1.5563, 0.7214, 1.1528]])
(2) In-place
y.add_(x)
print(y)
tensor([[0.5949, 1.4124, 1.3736],
[1.3086, 0.1651, 1.4258],
[1.6378, 1.1793, 0.7021],
[0.9888, 0.1489, 0.9515],
[1.5563, 0.7214, 1.1528]])
Any operation that mutates a tensor in place is post-fixed with an underscore (_).
For example, x.copy_(y) and x.t_() will change x.
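A minimal sketch of both in-place operations mentioned above:

```python
import torch

x = torch.zeros(2, 3)
y = torch.ones(2, 3)

x.copy_(y)       # overwrite x's contents with y's, in place
print(x)         # x is now all ones

x.t_()           # transpose x in place; shape becomes (3, 2)
print(x.size())  # torch.Size([3, 2])
```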
Indexing
print(x[:, 1])
tensor([9.2900e-01, 1.6332e-01, 7.5135e-01, 5.0414e-04, 5.1819e-01])
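Tensors support the full range of NumPy-style indexing, not just a single column slice. A few illustrative (hypothetical) examples:

```python
import torch

x = torch.rand(5, 3)
print(x[0])            # first row, shape (3,)
print(x[2, 1])         # single element, returned as a 0-dim tensor
print(x[2, 1].item())  # extract it as a plain Python number
print(x[1:3, :2])      # rows 1-2, first two columns, shape (2, 2)
```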
Resizing
If you want to resize/reshape a tensor, use torch.view. A size of -1 is inferred from the other dimensions.
x = torch.rand(4, 4)
y = x.view(16)
z = x.view(-1, 8)
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
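One point worth stressing: view returns a tensor that shares the same underlying storage as the original, so a write through one is visible through the other. A small sketch:

```python
import torch

x = torch.zeros(4, 4)
y = x.view(16)   # same underlying storage, new shape

x[0, 0] = 7.0    # modifying x is visible through y
print(y[0])      # tensor(7.)
```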
1.1.3 NumPy Bridge
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory locations (if the Torch Tensor is on CPU), and changing one will change the other.
Torch Tensor to NumPy Array
a = torch.ones(5)
b = a.numpy()
print(a)
print(b)
tensor([1., 1., 1., 1., 1.])
[1. 1. 1. 1. 1.]
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
NumPy Array to Torch Tensor
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
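By contrast, torch.tensor(a) always copies its input, so it does not track later changes to the NumPy array the way torch.from_numpy does. A minimal sketch of the difference:

```python
import numpy as np
import torch

a = np.ones(5)
shared = torch.from_numpy(a)  # shares memory with a
copied = torch.tensor(a)      # independent copy of a

a += 1                        # in-place update of the NumPy array
print(shared)  # reflects the change: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
print(copied)  # unchanged: tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
```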
References
Deep Learning with PyTorch: A 60 Minute Blitz - WHAT IS PYTORCH?