PyTorch: Introduction to PyTorch Tensors

Reference link

import torch
import math

x = torch.empty(3, 4)
print(type(x))
print(x)

output

<class 'torch.Tensor'>
tensor([[1.0194e-38, 1.0469e-38, 1.0010e-38, 6.4285e-39],
        [9.9184e-39, 1.0561e-38, 8.9082e-39, 8.9082e-39],
        [1.0194e-38, 9.1837e-39, 4.6837e-39, 9.2755e-39]])

1. torch.Tensor is an alias for torch.FloatTensor, the default (32-bit floating point) tensor type.
2. torch.empty() allocates memory for the tensor but does not initialize it with any values, so what you see is whatever happened to be in that memory at allocation time.
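If you want the memory initialized, other standard factory functions fill in values at creation time; a quick sketch:

zeros = torch.zeros(3, 4)   # all elements 0.0
ones = torch.ones(3, 4)     # all elements 1.0
rand = torch.rand(3, 4)     # uniform random values in [0, 1)
print(zeros)
print(ones)
print(rand)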

Random Tensors and Seeding

torch.manual_seed(1729)
random1 = torch.rand(2, 3)
print(random1)

random2 = torch.rand(2, 3)
print(random2)

torch.manual_seed(1729)
random3 = torch.rand(2, 3)
print(random3)

random4 = torch.rand(2, 3)
print(random4)

output

tensor([[0.3126, 0.3791, 0.3087],
        [0.0736, 0.4216, 0.0691]])
tensor([[0.2332, 0.4047, 0.2162],
        [0.9927, 0.4128, 0.5938]])
tensor([[0.3126, 0.3791, 0.3087],
        [0.0736, 0.4216, 0.0691]])
tensor([[0.2332, 0.4047, 0.2162],
        [0.9927, 0.4128, 0.5938]])

1. manual_seed sets the seed of the CPU random-number generator, which makes it easy to reproduce experimental results later.
2. Note that you must call the same random-number functions, in the same order, after each torch.manual_seed() call in order to get the same random numbers; otherwise re-seeding does not give matching results.
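A small sketch of point 2: after re-seeding, you only reproduce the same numbers if you call the same random-number functions in the same order.

torch.manual_seed(0)
a = torch.rand(2, 3)

torch.manual_seed(0)
b = torch.randn(2, 3)        # a different RNG function, so the values differ from a

torch.manual_seed(0)
c = torch.rand(2, 3)         # same function, same call order as a
print(torch.equal(a, c))     # True: fully reproducible
print(torch.equal(a, b))     # expected False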

Tensor Shapes

x = torch.empty(2, 2, 3)
print(x.shape)
print(x)

empty_like_x = torch.empty_like(x)
print(empty_like_x.shape)
print(empty_like_x)

zeros_like_x = torch.zeros_like(x)
print(zeros_like_x.shape)
print(zeros_like_x)

ones_like_x = torch.ones_like(x)
print(ones_like_x.shape)
print(ones_like_x)

rand_like_x = torch.rand_like(x)
print(rand_like_x.shape)
print(rand_like_x)

output

torch.Size([2, 2, 3])
tensor([[[-0.0000e+00,  1.6082e+00,  2.0000e+00],
         [ 1.7023e+00, -0.0000e+00,  1.5912e+00]],

        [[ 3.6893e+19,  1.8732e+00, -2.0000e+00],
         [ 1.7064e+00,  1.0842e-19,  1.7735e+00]]])
torch.Size([2, 2, 3])
tensor([[[ 1.6082e+00,  2.0000e+00,  1.7023e+00],
         [ 1.5912e+00,  3.6893e+19,  1.8732e+00]],

        [[-2.0000e+00,  1.7064e+00,  1.0842e-19],
         [ 1.7735e+00,  1.4013e-45,  0.0000e+00]]])
torch.Size([2, 2, 3])
tensor([[[0., 0., 0.],
         [0., 0., 0.]],

        [[0., 0., 0.],
         [0., 0., 0.]]])
torch.Size([2, 2, 3])
tensor([[[1., 1., 1.],
         [1., 1., 1.]],

        [[1., 1., 1.],
         [1., 1., 1.]]])
torch.Size([2, 2, 3])
tensor([[[0.6128, 0.1519, 0.0453],
         [0.5035, 0.9978, 0.3884]],

        [[0.6929, 0.1703, 0.1384],
         [0.4759, 0.7481, 0.0361]]])

The last way to create a tensor is to specify its data directly with torch.tensor():

some_constants = torch.tensor([[3.663, 3.336], [1.61803, 0.0072897]])
print(some_constants)

some_integers = torch.tensor((2, 3, 5, 7, 11, 13, 17, 19))
print(some_integers)

more_integers = torch.tensor(((2, 4, 6), [3, 6, 9]))
print(more_integers)

output

tensor([[3.6630, 3.3360],
        [1.6180, 0.0073]])
tensor([ 2,  3,  5,  7, 11, 13, 17, 19])
tensor([[2, 4, 6],
        [3, 6, 9]])
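Two details worth knowing about torch.tensor() (not shown in the snippet above): it always copies the data you pass in, and it infers the dtype from that data.

print(torch.tensor([1.0, 2.0]).dtype)   # torch.float32: Python floats become float32
print(torch.tensor([1, 2]).dtype)       # torch.int64: Python ints become int64

data = [1, 2, 3]
t = torch.tensor(data)                  # copies the list's values
data[0] = 99
print(t)                                # tensor([1, 2, 3]): unaffected by the change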

Tensor Data Types

a = torch.ones((2, 3), dtype=torch.int16)
print(a)
print(type(a))

b = torch.rand((2, 3), dtype=torch.float64) * 64
print(b)
print(type(b))

c = b.to(torch.int32)
print(c)

output

tensor([[1, 1, 1],
        [1, 1, 1]], dtype=torch.int16)
<class 'torch.Tensor'>
tensor([[ 3.1859,  4.5272, 18.6766],
        [35.9701, 35.8665, 37.3415]], dtype=torch.float64)
<class 'torch.Tensor'>
tensor([[ 3,  4, 18],
        [35, 35, 37]], dtype=torch.int32)

1. We can pass the optional dtype argument; with torch.int16 above, the printed values are 1 rather than 1., i.e. integers instead of floats.
2. Printing a also shows its specific dtype; when the default dtype is used, the dtype is not printed.
3. Alternatively, the .to() method converts the randomly generated float tensor b into the 32-bit integer tensor c.
4. The available data types include the following (a short sketch follows the list):

1.torch.bool
2.torch.int8
3.torch.uint8
4.torch.int16
5.torch.int32
6.torch.int64
7.torch.half
8.torch.float
9.torch.double
10.torch.bfloat16
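A minimal sketch of the two ways to control the dtype described above, plus checking it through the .dtype attribute:

x = torch.ones(2, 2)                        # default floating-point dtype
print(x.dtype)                              # torch.float32
y = torch.ones(2, 2, dtype=torch.float64)   # set the dtype at creation time
z = y.to(torch.int16)                       # convert an existing tensor
print(y.dtype, z.dtype)                     # torch.float64 torch.int16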

Math & Logic with PyTorch Tensors

ones = torch.zeros(2, 2) + 1
twos = torch.ones(2, 2) * 2
threes = (torch.ones(2, 2) * 7 - 1) / 2
fours = twos ** 2
sqrt2s = twos ** 0.5

print(ones)
print(twos)
print(threes)
print(fours)
print(sqrt2s)

output

tensor([[1., 1.],
        [1., 1.]])
tensor([[2., 2.],
        [2., 2.]])
tensor([[3., 3.],
        [3., 3.]])
tensor([[4., 4.],
        [4., 4.]])
tensor([[1.4142, 1.4142],
        [1.4142, 1.4142]])

Tensors of the same shape can also be combined with each other element-wise: addition, subtraction, multiplication, division, and exponentiation all work tensor-to-tensor.

powers2 = twos ** torch.tensor([[1, 2], [3, 4]])
print(powers2)

fives = ones + fours
print(fives)

dozens = threes * fours
print(dozens)

output

tensor([[ 2.,  4.],
        [ 8., 16.]])
tensor([[5., 5.],
        [5., 5.]])
tensor([[12., 12.],
        [12., 12.]])

When tensors with mismatched shapes are combined in an element-wise operation (and cannot be broadcast), a runtime error is raised, e.g.: RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 1

a = torch.rand(2, 3)
b = torch.rand(3, 2)
print(a * b)   # raises the RuntimeError above
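If an element-wise product between these two tensors really is what you want, one simple fix (a sketch, not part of the tutorial) is to transpose one operand so the shapes match:

a = torch.rand(2, 3)
b = torch.rand(3, 2)
print(a * b.T)   # b.T has shape (2, 3), so the element-wise multiply now works
print(a @ b)     # alternatively, (2, 3) @ (3, 2) is a valid matrix product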

Tensor Broadcasting

Broadcasting in PyTorch works much like broadcasting in NumPy.

rand = torch.rand(2, 4)
doubled = rand * (torch.ones(1, 4) * 2)

print(rand)
print(doubled)

output

tensor([[0.6146, 0.5999, 0.5013, 0.9397],
        [0.8656, 0.5207, 0.6865, 0.3614]])
tensor([[1.2291, 1.1998, 1.0026, 1.8793],
        [1.7312, 1.0413, 1.3730, 0.7228]])

The rules for broadcasting:
Comparing dimensions from last to first, each dimension must be equal, or one of them must be of size 1, or the dimension does not exist in one of the tensors.

a = torch.ones(4, 3, 2)

b = a * torch.rand(3, 2)   # 3rd & 2nd dims identical to a, dim 1 absent
print(b)

c = a * torch.rand(3, 1)   # 3rd dim = 1, 2nd dim identical to a
print(c)

d = a * torch.rand(1, 2)   # 3rd dim identical to a, 2nd dim = 1
print(d)

output

tensor([[[0.6493, 0.2633],
         [0.4762, 0.0548],
         [0.2024, 0.5731]],

        [[0.6493, 0.2633],
         [0.4762, 0.0548],
         [0.2024, 0.5731]],

        [[0.6493, 0.2633],
         [0.4762, 0.0548],
         [0.2024, 0.5731]],

        [[0.6493, 0.2633],
         [0.4762, 0.0548],
         [0.2024, 0.5731]]])
tensor([[[0.7191, 0.7191],
         [0.4067, 0.4067],
         [0.7301, 0.7301]],

        [[0.7191, 0.7191],
         [0.4067, 0.4067],
         [0.7301, 0.7301]],

        [[0.7191, 0.7191],
         [0.4067, 0.4067],
         [0.7301, 0.7301]],

        [[0.7191, 0.7191],
         [0.4067, 0.4067],
         [0.7301, 0.7301]]])
tensor([[[0.6276, 0.7357],
         [0.6276, 0.7357],
         [0.6276, 0.7357]],

        [[0.6276, 0.7357],
         [0.6276, 0.7357],
         [0.6276, 0.7357]],

        [[0.6276, 0.7357],
         [0.6276, 0.7357],
         [0.6276, 0.7357]],

        [[0.6276, 0.7357],
         [0.6276, 0.7357],
         [0.6276, 0.7357]]])

Here are some attempts at broadcasting that will fail:

a = torch.ones(4, 3, 2)

b = a * torch.rand(4, 3)    # dimensions must match last-to-first
c = a * torch.rand(2, 3)    # both 3rd & 2nd dims different
d = a * torch.rand((0,))    # can't broadcast with an empty tensor
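If you actually wanted the first failing case to work, unsqueeze() (covered later in this post) can add a trailing dimension of size 1 so that the shapes broadcast; a small sketch:

a = torch.ones(4, 3, 2)
b = torch.rand(4, 3).unsqueeze(2)   # shape (4, 3, 1) broadcasts against (4, 3, 2)
print((a * b).shape)                # torch.Size([4, 3, 2])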

More Math with Tensors

a = torch.rand(2, 4) * 2 - 1
print(a)

print('Common functions:')
print(torch.abs(a))               # absolute value
print(torch.ceil(a))              # ceiling: the smallest integer >= each element
print(torch.floor(a))             # floor: the largest integer <= each element
print(torch.clamp(a, -0.5, 0.5))  # clamp values into the range [min, max]

# trigonometric functions and their inverses
angles = torch.tensor([0, math.pi / 4, math.pi / 2, 3 * math.pi / 4])
sines = torch.sin(angles)
inverses = torch.asin(sines)
print('\nSine and arcsine:')
print(angles)
print(sines)
print(inverses)

# bitwise operations
print('\nBitwise XOR:')
b = torch.tensor([1, 5, 11])
c = torch.tensor([2, 7, 10])
print(torch.bitwise_xor(b, c))    # element-wise exclusive or

# comparisons
print('\nBroadcasted, element-wise equality comparison:')
d = torch.tensor([[1., 2.], [3., 4.]])
e = torch.ones(1, 2)
print(torch.eq(d, e))

# vector and linear-algebra operations
v1 = torch.tensor([1., 0., 0.])
v2 = torch.tensor([0., 1., 0.])
m1 = torch.rand(2, 2)
m2 = torch.tensor([[3., 0.], [0., 3.]])

print('\nVectors & Matrices:')
print(torch.cross(v2, v1))  # cross product; its magnitude is the area of the
                            # parallelogram spanned by the two vectors
print(m1)
m3 = torch.matmul(m1, m2)   # matrix multiplication
print(m3)
print(torch.svd(m3))        # singular value decomposition

output

tensor([[-0.9238, -0.5724,  0.0791, -0.2629],
        [-0.1986,  0.4439,  0.6434, -0.4776]])
Common functions:
tensor([[0.9238, 0.5724, 0.0791, 0.2629],
        [0.1986, 0.4439, 0.6434, 0.4776]])
tensor([[-0., -0., 1., -0.],
        [-0., 1., 1., -0.]])
tensor([[-1., -1.,  0., -1.],
        [-1.,  0.,  0., -1.]])
tensor([[-0.5000, -0.5000,  0.0791, -0.2629],
        [-0.1986,  0.4439,  0.5000, -0.4776]])

Sine and arcsine:
tensor([0.0000, 0.7854, 1.5708, 2.3562])
tensor([0.0000, 0.7071, 1.0000, 0.7071])
tensor([0.0000, 0.7854, 1.5708, 0.7854])

Bitwise XOR:
tensor([3, 2, 1])

Broadcasted, element-wise equality comparison:
tensor([[ True, False],
        [False, False]])

Vectors & Matrices:
tensor([ 0.,  0., -1.])
tensor([[0.7375, 0.8328],
        [0.8444, 0.2941]])
tensor([[2.2125, 2.4985],
        [2.5332, 0.8822]])
torch.return_types.svd(
U=tensor([[-0.7889, -0.6145],
        [-0.6145,  0.7889]]),
S=tensor([4.1498, 1.0548]),
V=tensor([[-0.7957,  0.6056],
        [-0.6056, -0.7957]]))

Altering Tensors in Place

a = torch.tensor([0, math.pi / 4, math.pi / 2, 3 * math.pi / 4])
print('a:')
print(a)
print(torch.sin(a))   # this operation creates a new tensor in memory
print(a)              # a has not changed

b = torch.tensor([0, math.pi / 4, math.pi / 2, 3 * math.pi / 4])
print('\nb:')
print(b)
print(torch.sin_(b))  # note the underscore
print(b)              # b has changed

output

a:
tensor([0.0000, 0.7854, 1.5708, 2.3562])
tensor([0.0000, 0.7071, 1.0000, 0.7071])
tensor([0.0000, 0.7854, 1.5708, 2.3562])

b:
tensor([0.0000, 0.7854, 1.5708, 2.3562])
tensor([0.0000, 0.7071, 1.0000, 0.7071])
tensor([0.0000, 0.7071, 1.0000, 0.7071])

There are similar in-place versions of the arithmetic operations:

a = torch.ones(2, 2)
b = torch.rand(2, 2)

print('Before:')
print(a)
print(b)

print('\nAfter adding:')
print(a.add_(b))
print(a)
print(b)

print('\nAfter multiplying:')
print(b.mul_(b))
print(b)

output

Before:
tensor([[1., 1.],
        [1., 1.]])
tensor([[0.3788, 0.4567],
        [0.0649, 0.6677]])

After adding:
tensor([[1.3788, 1.4567],
        [1.0649, 1.6677]])
tensor([[1.3788, 1.4567],
        [1.0649, 1.6677]])
tensor([[0.3788, 0.4567],
        [0.0649, 0.6677]])

After multiplying:
tensor([[0.1435, 0.2086],
        [0.0042, 0.4459]])
tensor([[0.1435, 0.2086],
        [0.0042, 0.4459]])

a.add_(b) modifies a in place.
There are also operations that write their result into an existing, already-allocated tensor instead of allocating new memory, via the out= argument:

a = torch.rand(2, 2)
b = torch.rand(2, 2)
c = torch.zeros(2, 2)
old_id = id(c)

print(c)
d = torch.matmul(a, b, out=c)
print(c)

assert c is d           # c and d are the same object
assert id(c) == old_id  # c still occupies the same memory as before

torch.rand(2, 2, out=c) # works for creation too!
print(c)                # c has changed again
assert id(c) == old_id  # still the same object!

output

tensor([[0., 0.],
        [0., 0.]])
tensor([[0.3653, 0.8699],
        [0.2364, 0.3604]])
tensor([[0.0776, 0.4004],
        [0.9877, 0.0352]])
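A small sketch of why the out= argument is useful (hypothetical buffer name): reuse one pre-allocated tensor inside a loop instead of allocating a new result on every iteration.

buf = torch.empty(2, 2)           # allocated once
for _ in range(3):
    a = torch.rand(2, 2)
    b = torch.rand(2, 2)
    torch.matmul(a, b, out=buf)   # the result is written into buf each time
print(buf)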

Copying Tensors

a = torch.ones(2, 2)
b = a
a[0][1] = 561
print(b)

output

tensor([[  1., 561.],
        [  1.,   1.]])

With plain assignment, b is just another label for the same tensor object as a, so modifying a also changes b.
But what if you want a separate copy of the data to work on? That is what the clone() method is for:

a = torch.ones(2, 2)
b = a.clone()

assert b is not a       # different objects in memory...
print(torch.eq(a, b))   # ...but still with the same contents!

a[0][1] = 561           # a changes...
print(b)                # ...but b is still all ones

output

tensor([[True, True],
        [True, True]])
tensor([[1., 1.],
        [1., 1.]])

(I don't fully understand this part yet; will come back to it later.)
There is an important thing to be aware of when using clone(). If your source tensor has autograd enabled, then so will the clone. This will be covered more deeply in the video on autograd, but if you want the light version of the details, continue on.

In many cases, this will be what you want. For example, if your model has multiple computation paths in its forward() method, and both the original tensor and its clone contribute to the model’s output, then to enable model learning you want autograd turned on for both tensors. If your source tensor has autograd enabled (which it generally will if it’s a set of learning weights or derived from a computation involving the weights), then you’ll get the result you want.

On the other hand, if you’re doing a computation where neither the original tensor nor its clone need to track gradients, then as long as the source tensor has autograd turned off, you’re good to go.

There is a third case, though: Imagine you’re performing a computation in your model’s forward() function, where gradients are turned on for everything by default, but you want to pull out some values mid-stream to generate some metrics. In this case, you don’t want the cloned copy of your source tensor to track gradients - performance is improved with autograd’s history tracking turned off. For this, you can use the .detach() method on the source tensor:

a = torch.rand(2, 2, requires_grad=True)
print(a)

b = a.clone()
print(b)

c = a.detach().clone()
print(c)

print(a)

output

tensor([[0.0905, 0.4485],
        [0.8740, 0.2526]], requires_grad=True)
tensor([[0.0905, 0.4485],
        [0.8740, 0.2526]], grad_fn=<CloneBackward0>)
tensor([[0.0905, 0.4485],
        [0.8740, 0.2526]])
tensor([[0.0905, 0.4485],
        [0.8740, 0.2526]], requires_grad=True)

What’s happening here?

We create a with requires_grad=True turned on. We haven’t covered this optional argument yet, but will during the unit on autograd.

When we print a, it informs us that the property requires_grad=True - this means that autograd and computation history tracking are turned on.

We clone a and label it b. When we print b, we can see that it’s tracking its computation history - it has inherited a’s autograd settings, and added to the computation history.

We clone a into c, but we call detach() first.

Printing c, we see no computation history, and no requires_grad=True.

The detach() method detaches the tensor from its computation history. It says, “do whatever comes next as if autograd was off.” It does this without changing a - you can see that when we print a again at the end, it retains its requires_grad=True property.
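A small sketch of the "metrics mid-stream" case just described (hypothetical names, not from the tutorial): pull a detached copy out of a computation for logging without disturbing gradient tracking.

w = torch.rand(2, 2, requires_grad=True)   # stands in for learning weights
y = w * 3                                  # part of a forward computation
metric = y.detach().clone().mean()         # safe to log; carries no autograd history
loss = y.sum()
loss.backward()                            # gradients still flow back to w
print(metric, w.grad)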

Moving to GPU

To do computation on a GPU, the data has to be moved to memory accessible by the GPU.

if torch.cuda.is_available():
    print('We have a GPU')
    gpu_rand = torch.rand(2, 2, device='cuda')
    print(gpu_rand)
else:
    print('Sorry, CPU only.')

output

We have a GPU
tensor([[0.3344, 0.2640],
        [0.2119, 0.0582]], device='cuda:0')
if torch.cuda.is_available():
    my_device = torch.device('cuda')
else:
    my_device = torch.device('cpu')
print('Device:{}'.format(my_device))

x = torch.rand(2, 2, device=my_device)
print(x)

output

Device:cuda
tensor([[0.0024, 0.6778],
        [0.2441, 0.6812]], device='cuda:0')

torch.cuda.device_count() returns the number of GPU devices.
You can use device='cuda:0', device='cuda:1', etc. to place tensors in the memory of a specific GPU.
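A small sketch (hypothetical variable names) of selecting a specific GPU when more than one may be visible, falling back to the CPU otherwise:

if torch.cuda.is_available():
    n_gpus = torch.cuda.device_count()
    target_device = torch.device('cuda:{}'.format(n_gpus - 1))  # e.g. the last visible GPU
else:
    target_device = torch.device('cpu')
print(target_device)
print(torch.rand(2, 2, device=target_device).device)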

You can also use the .to() method to move a tensor that was created in CPU memory onto a GPU device, as follows:

y = torch.rand(2, 2)
y = y.to(my_device)
print(y)

output

tensor([[0.6923, 0.7545],
        [0.7746, 0.2330]], device='cuda:0')

When a computation involves two or more tensors, all of the tensors must be on the same device:

x = torch.rand(2, 2)
y = torch.rand(2, 2, device='gpu')
z = x + y  # exception will be thrown

RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: gpu
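The fix is simply to create or move both tensors onto the same, valid device before combining them; a small sketch that falls back to the CPU when no GPU is present:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.rand(2, 2, device=device)
y = torch.rand(2, 2, device=device)
z = x + y            # both operands live on the same device, so this works
print(z.device)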

Manipulating Tensor Shape

PyTorch models generally operate on batches of data, so you often need to change the number of dimensions of a tensor:

a = torch.rand(3, 226, 226)
b = a.unsqueeze(0)

print(a.shape)
print(b.shape)

output

torch.Size([3, 226, 226])
torch.Size([1, 3, 226, 226])

The unsqueeze() method adds a dimension of extent 1. unsqueeze(0) adds it as a new zeroth dimension, so we now have a batch containing a single element.

So what does the squeeze() operation do?

c = torch.rand(1, 1, 1, 1)
print(c)

a = torch.rand(1, 20)
print(a.shape)
print(a)

b = a.squeeze(0)
print(b.shape)
print(b)

c = torch.rand(2, 2)
print(c.shape)

d = c.squeeze(0)
print(d.shape)

output

tensor([[[[0.2347]]]])
torch.Size([1, 20])
tensor([[0.1899, 0.4067, 0.1519, 0.1506, 0.9585, 0.7756, 0.8973, 0.4929, 0.2367,
         0.8194, 0.4509, 0.2690, 0.8381, 0.8207, 0.6818, 0.5057, 0.9335, 0.9769,
         0.2792, 0.3277]])
torch.Size([20])
tensor([0.1899, 0.4067, 0.1519, 0.1506, 0.9585, 0.7756, 0.8973, 0.4929, 0.2367,
        0.8194, 0.4509, 0.2690, 0.8381, 0.8207, 0.6818, 0.5057, 0.9335, 0.9769,
        0.2792, 0.3277])
torch.Size([2, 2])
torch.Size([2, 2])

Calls to squeeze() and unsqueeze() can only act on dimensions of extent 1, since doing otherwise would change the number of elements in the tensor; squeezing the size-2 dimension above simply had no effect.
unsqueeze() can also be used to simplify broadcasting (see the code below):

a = torch.ones(4, 3, 2)
c = a * torch.rand(3, 1)   # broadcasts over the last dimension
print(c)

a = torch.ones(4, 3, 2)
b = torch.rand(3)
c = b.unsqueeze(1)         # change to a 2-dimensional tensor, adding a new dim at the end
print(c.shape)
print(a * c)

output

tensor([[[0.1891, 0.1891],
         [0.3952, 0.3952],
         [0.9176, 0.9176]],

        [[0.1891, 0.1891],
         [0.3952, 0.3952],
         [0.9176, 0.9176]],

        [[0.1891, 0.1891],
         [0.3952, 0.3952],
         [0.9176, 0.9176]],

        [[0.1891, 0.1891],
         [0.3952, 0.3952],
         [0.9176, 0.9176]]])
torch.Size([3, 1])
tensor([[[0.8960, 0.8960],
         [0.4887, 0.4887],
         [0.8625, 0.8625]],

        [[0.8960, 0.8960],
         [0.4887, 0.4887],
         [0.8625, 0.8625]],

        [[0.8960, 0.8960],
         [0.4887, 0.4887],
         [0.8625, 0.8625]],

        [[0.8960, 0.8960],
         [0.4887, 0.4887],
         [0.8625, 0.8625]]])

squeeze() and unsqueeze() also have in-place versions, squeeze_() and unsqueeze_(), which store the result in the original tensor's memory.

batch_me = torch.rand(3, 226, 226)
print(batch_me.shape)

batch_me.unsqueeze_(0)
print(batch_me.shape)

output

torch.Size([3, 226, 226])
torch.Size([1, 3, 226, 226])

Sometimes you need to change the shape of a tensor more radically. For example, a convolutional layer produces an output tensor of shape (features, height, width), but a following linear layer expects one-dimensional input. reshape() handles this:

output3d = torch.rand(6, 20, 20)
print(output3d.shape)

input1d = output3d.reshape(6 * 20 * 20)
print(input1d.shape)

# reshape() can also be called as a function in the torch module
print(torch.reshape(output3d, (6 * 20 * 20,)).shape)

output

torch.Size([6, 20, 20])
torch.Size([2400])
torch.Size([2400])
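Note that reshape() may return a view that shares memory with the source tensor rather than a copy (it does so whenever the memory layout allows, as with the contiguous tensor here); a small sketch of what that sharing means:

output3d = torch.rand(6, 20, 20)
flat = output3d.reshape(6 * 20 * 20)   # a view over the same memory for this contiguous tensor
flat[0] = 99.0
print(output3d[0, 0, 0])               # tensor(99.): the source tensor sees the change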

NumPy Bridge

NumPy ndarrays can be converted to PyTorch tensors. Note that because NumPy defaults to 64-bit floats, the resulting PyTorch tensor is also 64-bit (torch.float64).

import numpy as np

numpy_array = np.ones((2, 3))
print(numpy_array)

pytorch_tensor = torch.from_numpy(numpy_array)
print(pytorch_tensor)

output

[[1. 1. 1.]
 [1. 1. 1.]]
tensor([[1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)

Conversion also works in the other direction, from a PyTorch tensor to a NumPy ndarray (the dtype is preserved, so the default float32 tensor becomes a float32 array):

pytorch_rand = torch.rand(2, 3)
print(pytorch_rand)

numpy_rand = pytorch_rand.numpy()
print(numpy_rand)

output

tensor([[0.2853, 0.9091, 0.5695],
        [0.7206, 0.4155, 0.0982]])
[[0.2853077  0.90905803 0.5695162 ]
 [0.7206341  0.41554475 0.09820974]]

The converted objects use the same underlying memory as their source objects, which means a change to one is reflected in the other:

numpy_array[1, 1] = 23
print(pytorch_tensor)

pytorch_rand[1, 1] = 17
print(numpy_rand)

output

tensor([[ 1.,  1.,  1.],
        [ 1., 23.,  1.]], dtype=torch.float64)
[[ 0.2853077   0.90905803  0.5695162 ]
 [ 0.7206341  17.          0.09820974]]
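If you want independent copies instead of shared memory, copy explicitly; a small sketch (torch.tensor() copies its input, unlike torch.from_numpy()):

import numpy as np

arr = np.ones((2, 3))
t_independent = torch.tensor(arr)   # copies the data
arr[0, 0] = 99.0
print(t_independent[0, 0])          # tensor(1., dtype=torch.float64): nothing is shared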