PyTorch-Tutorials [Official PyTorch Tutorial, Explained in Detail] - 2 Tensors

This article introduces the basics of tensors in PyTorch: how to create and manipulate them and how to understand their attributes. It focuses on the relationship between tensors and NumPy, on arithmetic operations, indexing, and GPU acceleration, and also touches on interoperation with NumPy arrays and the role tensors play in automatic gradient computation.

[In 2021, some saw dust and some saw stars; either way, the page is about to turn. --2021.12.31]

The previous article, PyTorch-Tutorials【pytorch官方教程中英文详解】- 1 Quickstart, was a quick introduction. Now let's take a closer look at an important PyTorch concept: the Tensor.

Official documentation: Tensors — PyTorch Tutorials 1.10.1+cu102 documentation

Tensors are a specialized data structure, very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see Bridge with NumPy). Tensors are also optimized for automatic differentiation (we’ll see more about that later in the Autograd section). If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along!

import torch
import numpy as np

1 Initializing a Tensor

Tensors can be initialized in various ways. Take a look at the following examples:

(a) Directly from data

Tensors can be created directly from data. The data type is automatically inferred.

data = [[1, 2],[3, 4]]
x_data = torch.tensor(data)
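
For example, integer data is inferred as torch.int64, while floating-point data is inferred as torch.float32:

print(x_data.dtype)                    # torch.int64, inferred from the integer data
print(torch.tensor([1.0, 2.0]).dtype)  # torch.float32, inferred from the float data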

(b) From a NumPy array

Tensors can be created from NumPy arrays (and vice versa - see Bridge with NumPy).

np_array = np.array(data)
x_np = torch.from_numpy(np_array)
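
Note that torch.from_numpy shares memory with the source array rather than copying it (torch.tensor(np_array) would copy the data). A quick check of the sharing:

np_array[0, 0] = 99
print(x_np)  # the change made through np_array is visible here, since no copy was made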

(c) From another tensor:

The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.

x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")

x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")

Output:

Ones Tensor:
 tensor([[1, 1],
        [1, 1]])

Random Tensor:
 tensor([[0.4557, 0.7406],
        [0.5935, 0.1859]])

(d) With random or constant values:

shape is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.

shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")

Output:

Random Tensor:
 tensor([[0.8012, 0.4547, 0.4156],
        [0.6645, 0.1763, 0.3860]])

Ones Tensor:
 tensor([[1., 1., 1.],
        [1., 1., 1.]])

Zeros Tensor:
 tensor([[0., 0., 0.],
        [0., 0., 0.]])

2 Attributes of a Tensor

Tensor attributes describe their shape, datatype, and the device on which they are stored.

tensor = torch.rand(3,4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")

Output:

Shape of tensor: torch.Size([3, 4])
Datatype of tensor: torch.float32
Device tensor is stored on: cpu

3 Operations on Tensors

Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing, indexing, slicing), sampling and more are comprehensively described here.

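As a small sampler from that list (a sketch, not exhaustive; all calls below are standard PyTorch functions):

a = torch.arange(6.).reshape(2, 3)
print(a.T)                    # matrix manipulation: transpose
print(torch.linalg.norm(a))   # linear algebra: Frobenius norm
print(torch.multinomial(torch.tensor([0.2, 0.8]), 4, replacement=True))  # sampling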

Each of these operations can be run on the GPU (at typically higher speeds than on a CPU). If you’re using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU.

By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using .to method (after checking for GPU availability). Keep in mind that copying large tensors across devices can be expensive in terms of time and memory!

# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
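
A common device-agnostic variant of the same idea (a sketch) is to choose the device once and move or create tensors on it directly:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor = tensor.to(device)
print(f"Device tensor is stored on: {tensor.device}")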

Try out some of the operations from the list. If you’re familiar with the NumPy API, you’ll find the Tensor API a breeze to use.

(a) Standard numpy-like indexing and slicing:

tensor = torch.ones(4, 4)
print('First row: ', tensor[0])
print('First column: ', tensor[:, 0])
print('Last column:', tensor[..., -1])
tensor[:,1] = 0
print(tensor)

Output:

First row:  tensor([1., 1., 1., 1.])
First column:  tensor([1., 1., 1., 1.])
Last column: tensor([1., 1., 1., 1.])
tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

Joining tensors: You can use torch.cat to concatenate a sequence of tensors along a given dimension. See also torch.stack, another tensor-joining op that is subtly different from torch.cat.

t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)

Output:

tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
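
For contrast, torch.stack joins tensors along a new dimension rather than an existing one, so stacking three 4x4 tensors gives shape (3, 4, 4) instead of the (4, 12) produced by torch.cat above:

t2 = torch.stack([tensor, tensor, tensor])  # default dim=0 adds a new leading dimension
print(t2.shape)  # torch.Size([3, 4, 4])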

(b) Arithmetic operations

# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)

y3 = torch.rand_like(tensor)
torch.matmul(tensor, tensor.T, out=y3)


# This computes the element-wise product. z1, z2, z3 will have the same value
z1 = tensor * tensor
z2 = tensor.mul(tensor)

z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)

(c) Single-element tensors

If you have a one-element tensor, for example by aggregating all values of a tensor into one value, you can convert it to a Python numerical value using item():

agg = tensor.sum()
agg_item = agg.item()
print(agg_item, type(agg_item))

Output:

12.0 <class 'float'>

(d) In-place operations

Operations that store the result into the operand are called in-place. They are denoted by a _ suffix. For example: x.copy_(y), x.t_(), will change x.

print(tensor, "\n")
tensor.add_(5)
print(tensor)

Output:

tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

tensor([[6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.]])

NOTE

In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.

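A minimal sketch of the failure mode: exp saves its output for the backward pass, so modifying that output in place invalidates the recorded history and backward() raises a runtime error.

x = torch.ones(3, requires_grad=True)
y = x.exp()         # autograd saves y, since d(exp(x))/dx = exp(x)
y.add_(1)           # in-place edit bumps y's version counter
y.sum().backward()  # RuntimeError: a tensor needed for gradient computation was modified by an inplace operation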

4 Bridge with NumPy

Tensors on the CPU and NumPy arrays can share their underlying memory locations, and changing one will change the other.

Tensor to NumPy array

t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")

Output:

t: tensor([1., 1., 1., 1., 1.])
n: [1. 1. 1. 1. 1.]

A change in the tensor reflects in the NumPy array.

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")

Output:

t: tensor([2., 2., 2., 2., 2.])
n: [2. 2. 2. 2. 2.]

NumPy array to Tensor

n = np.ones(5)
t = torch.from_numpy(n)

Changes in the NumPy array are reflected in the tensor. (Note that np.ones produces a float64 array, which is why the shared tensor prints with dtype=torch.float64 rather than PyTorch's default float32.)

np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")

Output:

t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
n: [2. 2. 2. 2. 2.]

Note: These are my study notes; if you spot any errors, corrections are welcome! Writing these articles takes effort, so please contact me before reposting.
