PyTorch Tensor: A Detailed Overview


In this PyTorch tutorial, we'll discuss PyTorch tensors, which are the building blocks of this deep learning framework.

Let’s get started!



PyTorch张量 (PyTorch Tensor)

Have you worked with Python numpy before? If yes, then this section is going to be very simple for you! Even if you don’t have experience with numpy, you can seamlessly transition between PyTorch and NumPy!

A Tensor in PyTorch is similar to a numpy array, with the additional flexibility of using a GPU for calculations.

1. 2D PyTorch Tensor

Imagine a tensor as an array of numbers, with a potentially arbitrary number of dimensions. The difference between a Tensor and a nested array in C/C++/Java is that all rows along a given dimension must have the same size; in other words, a tensor is always rectangular.

For example, the following is a valid representation of a 2-dimensional Tensor.


[[1 2 3 4],
 [5 6 7 8]]

Note, however, that the example below is NOT valid, since Tensors cannot be jagged arrays.


[[1 2 3 4],
 [5 6 7]]
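
You can check this yourself: asking PyTorch to build a jagged tensor fails immediately. A minimal sketch (the exact error message varies by version):

import torch
# Rows of unequal length cannot form a tensor;
# recent PyTorch versions raise a ValueError here
torch.tensor([[1, 2, 3, 4], [5, 6, 7]])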

PyTorch Tensors are really convenient for programmers, since they are almost the same as numpy arrays.

There are a couple of differences from numpy methods, though, so it is advised that you also refer to the official documentation for further information.
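
For instance, one difference you will run into early: reductions in PyTorch canonically take a dim argument where NumPy uses axis. A minimal sketch:

import numpy as np
import torch

a = np.ones((3, 2))
t = torch.ones(3, 2)

# NumPy reduces along an "axis"...
print(a.sum(axis=0))   # [3. 3.]
# ...while PyTorch calls the same thing "dim"
print(t.sum(dim=0))    # tensor([3., 3.])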

2. Initializing an Empty PyTorch Tensor

Let’s consider the below example, which initializes an empty Tensor.


import torch 
# Create an uninitialized 3 x 2 tensor
a = torch.empty(3, 2)

An empty tensor does NOT mean that it contains nothing; it just means that memory has been allocated for it without being initialized.


import torch 
# Create an uninitialized 3 x 2 tensor
a = torch.empty(3, 2)
print(a)

# Create a zero initialized float tensor
b = torch.zeros(3, 2, dtype=torch.float32)
print(b)

Output


tensor([[3.4655e-37, 0.0000e+00],
        [4.4842e-44, 0.0000e+00],
        [       nan, 6.1657e-44]])
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])

The first tensor is the result of PyTorch simply allocating memory for the tensor; whatever content was previously in that memory is not erased.

The second tensor is filled with zeros, since PyTorch allocates memory and zero-initializes the tensor elements.

Notice the similarity to numpy.empty() and numpy.zeros(). This is deliberate: PyTorch's tensor API is designed to mirror numpy's, with the added ability to use the GPU.
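
A minimal sketch of the mirrored APIs; note that only the shape-passing convention differs (numpy takes a tuple, torch also accepts separate ints):

import numpy as np
import torch

# The same zero-initialized 3 x 2 structure in both libraries
print(np.zeros((3, 2)))
print(torch.zeros(3, 2))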

3. Finding PyTorch Tensor Size

Let’s create a basic tensor and determine its size.


import torch 
# Create a tensor from data
c = torch.tensor([[3.2 , 1.6, 2], [1.3, 2.5 , 6.9]])
print(c)

Output


tensor([[3.2000, 1.6000, 2.0000],
        [1.3000, 2.5000, 6.9000]])

To get the size of the tensor, we can use tensor.size().


print(c.size())

Output


torch.Size([2, 3])
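
Tensors also expose a shape attribute, which returns the same torch.Size object and mirrors numpy's arr.shape:

# Equivalent to c.size()
print(c.shape)   # torch.Size([2, 3])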


PyTorch Tensor Operations

PyTorch supports tensor operations similar to numpy's.

A summary is given in the code block below.

1. Basic Mathematical Operations on Tensors


import torch 
# Tensor Operations
x = torch.tensor([[2, 3, 4], [5, 6, 7]])
y = torch.tensor([[2, 3, 4], [1.3, 2.6, 3.9]])

# Addition
print(x + y)
# We can also use torch.add()
print(x + y == torch.add(x, y))

# Subtraction
print(x - y)
# We can also use torch.sub()
print(x-y == torch.sub(x, y))

Output


tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])
tensor([[True, True, True],
        [True, True, True]])
tensor([[0.0000, 0.0000, 0.0000],
        [3.7000, 3.4000, 3.1000]])
tensor([[True, True, True],
        [True, True, True]])

We can also assign the result to a tensor. Add the following code snippet to the code above.


# We can assign the output to a tensor
z = torch.zeros(x.shape)
torch.add(x, y, out=z)
print(z)

Output


tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])
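
Multiplication and division follow the same pattern. A brief sketch, reusing the x and y defined above:

# Element-wise multiplication and division
print(x * y)   # same as torch.mul(x, y)
print(x / y)   # same as torch.div(x, y)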

2. In-Place Addition and Subtraction with PyTorch Tensors

PyTorch also supports in-place operations like addition and subtraction, performed by suffixing the method name with an underscore (_). Let's continue with the same variables from the operations summary code above.


# In-place addition
print('Before In-Place Addition:', y)
y.add_(x)
print('After addition:', y)

Output


Before In-Place Addition: tensor([[2.0000, 3.0000, 4.0000],
        [1.3000, 2.6000, 3.9000]])
After addition: tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])

3. Accessing Tensor Indices

We can also use numpy-based indexing in PyTorch.


# Use numpy slices for indexing
print(y[:, 1])

Output


tensor([6.0000, 8.6000])
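
Most other NumPy-style access patterns carry over as well. A small sketch, assuming y still holds the values from the in-place addition above:

# Row indexing
print(y[0])            # tensor([4., 6., 8.])
# Extract a single element as a plain Python number
print(y[1, 2].item())  # roughly 10.9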


Reshape a PyTorch Tensor

Similar to numpy, we can use torch.reshape() to reshape a tensor. We can also use tensor.view() for the same purpose; note that view() requires the tensor's memory layout to be compatible, while reshape() will copy the data if needed.


import torch 
x = torch.randn(5, 3)
# Return a view of x, flattened to
# one dimension
y = x.view(5 * 3)

print('Size of x:', x.size())
print('Size of y:', y.size())

print(x)
print(y)

# Get back the original tensor with reshape()
z = y.reshape(5, 3)
print(z)

Output


Size of x: torch.Size([5, 3])
Size of y: torch.Size([15])

tensor([[ 0.3224,  0.1021, -1.4290],
        [-0.3559,  0.2912, -0.1044],
        [ 0.3652,  2.3112,  1.4784],
        [-0.9630, -0.2499, -1.3288],
        [-0.0667, -0.2910, -0.6420]])

tensor([ 0.3224,  0.1021, -1.4290, -0.3559,  0.2912, -0.1044,  0.3652,  2.3112,
         1.4784, -0.9630, -0.2499, -1.3288, -0.0667, -0.2910, -0.6420])

tensor([[ 0.3224,  0.1021, -1.4290],
        [-0.3559,  0.2912, -0.1044],
        [ 0.3652,  2.3112,  1.4784],
        [-0.9630, -0.2499, -1.3288],
        [-0.0667, -0.2910, -0.6420]])
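
One convenience both view() and reshape() support: pass -1 for a single dimension and PyTorch will infer it from the total number of elements:

# -1 lets PyTorch infer the missing dimension (15 elements / 5 rows = 3 columns)
w = y.view(5, -1)
print(w.size())   # torch.Size([5, 3])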

The list of all Tensor Operations is available in PyTorch’s Documentation.

PyTorch – NumPy Bridge

We can convert PyTorch tensors to numpy arrays and vice-versa pretty easily.

PyTorch is designed in such a way that a Torch Tensor on the CPU and the corresponding numpy array share the same underlying memory. So if you change one of them, the other changes automatically.

To prove this, let's test it using the Tensor.numpy() and torch.from_numpy() methods.

Tensor.numpy() is used to convert a Tensor to a numpy array, and torch.from_numpy() does the reverse.


import torch 
# We also need to import numpy to declare numpy arrays
import numpy as np

a = torch.tensor([[1, 2, 3], [4, 5, 6]])
print('Original Tensor:', a)

b = a.numpy()
print('Tensor to a numpy array:', b)

# In-Place addition (add 2 to every element)
a.add_(2)

print('Tensor after addition:', a)

print('Numpy Array after addition:', b)

Output


Original Tensor: tensor([[1, 2, 3],
        [4, 5, 6]])
Tensor to a numpy array: [[1 2 3]
 [4 5 6]]
Tensor after addition: tensor([[3, 4, 5],
        [6, 7, 8]])
Numpy Array after addition: [[3 4 5]
 [6 7 8]]

Indeed, the numpy array has also changed its value!

Let's do the reverse as well.


import torch
import numpy as np

c = np.array([[4, 5, 6], [7, 8, 9]])
print('Numpy array:', c)

# Convert to a tensor
d = torch.from_numpy(c)
print('Tensor from the array:', d)

# Add 3 to each element in the numpy array
np.add(c, 3, out=c)

print('Numpy array after addition:', c)

print('Tensor after addition:', d)

Output


Numpy array: [[4 5 6]
 [7 8 9]]
Tensor from the array: tensor([[4, 5, 6],
        [7, 8, 9]])
Numpy array after addition: [[ 7  8  9]
 [10 11 12]]
Tensor after addition: tensor([[ 7,  8,  9],
        [10, 11, 12]])

NOTE: If you do not use in-place numpy addition, such as c += 3 or np.add(c, 3, out=c), then the Tensor will not reflect the changes made to the numpy array.

For example, if you try this:


c = np.add(c, 3)

Since you're using =, Python creates a new array object and binds the name c to that new object. The original memory location, which the tensor still points to, is therefore unchanged.
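
A quick sketch of the consequence, assuming the snippet above has been run with the c and d from the earlier example:

print('Numpy array after rebinding:', c)   # values increased by 3
print('Tensor is unchanged:', d)           # still reflects the old buffer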

Use the CUDA GPU with a PyTorch Tensor

By moving a tensor to the GPU, we can have an NVIDIA CUDA GPU perform the computations and gain a speedup.

NOTE: This applies only if you have an NVIDIA GPU with CUDA enabled. If you're not sure what these terms mean, I would advise you to search online.

We can check whether a GPU is available for PyTorch using torch.cuda.is_available().


import torch 
if torch.cuda.is_available():
    print('Your device is supported. We can use the GPU for PyTorch!')
else:
    print("Your GPU is either not supported by PyTorch or you haven't installed the GPU version")

For me, it is available, so just make sure you install CUDA before proceeding further if your laptop supports it.

We can move a tensor from the CPU to the GPU using tensor.to(device), where device is a device object.

This can be torch.device("cuda") or torch.device("cpu"); the plain strings "cuda" and "cpu" also work.


import torch 
x = torch.tensor([1, 2, 3], dtype=torch.long)

if torch.cuda.is_available():
    print('CUDA is available')
    # Create a CUDA Device object
    device = torch.device("cuda")

    # Create a tensor from x and store on the GPU
    y = torch.ones_like(x, device=device)
    
    # Move the tensor from CPU to GPU
    x = x.to(device)

    # This is done on the GPU
    z = x + y
    print(z)

    # Move back to CPU and also change dtype
    print(z.to("cpu", torch.double))
    print(z)
else:
    print('CUDA is not available')

Output


CUDA is available
tensor([2, 3, 4], device='cuda:0')
tensor([2., 3., 4.], dtype=torch.float64)
tensor([2, 3, 4], device='cuda:0')

As you can see, the output shows that our computation is now running on the GPU (note the device='cuda:0')!



Conclusion

In this article, we learned about using Tensors in PyTorch. Feel free to post any questions, suggestions, or corrections in the comment section below!

We’ll be covering more in our upcoming PyTorch tutorials. Stay tuned!



References



Translated from: https://www.journaldev.com/37948/pytorch-tensor
