PyTorch Study Notes 2: About Tensor

Tensors are a special data structure in PyTorch for storing matrices (and higher-dimensional arrays). They are similar to NumPy arrays, except that tensors can run on GPUs to accelerate computation.
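
As a quick illustration (a minimal sketch; it assumes a CUDA-capable GPU may or may not be present), a tensor can be created on the CPU and moved to the GPU when one is available:

import torch

x = torch.tensor([[1, 2], [3, 4]])
if torch.cuda.is_available():
    x = x.to('cuda')        # move the tensor to the GPU
print(x.device)             # cuda:0 if a GPU is available, otherwise cpu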

1. Tensor Initialization

A tensor can be initialized in the following ways:

1. From data

data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)

2. From numpy array

np_array = np.array(data)
x_data = torch.from_numpy(np_array)

3. From another tensor

x_rand = torch.rand_like(x_data, dtype=torch.float)	# keeps the shape of x_data, overrides its dtype
print(x_rand)

4. Random or constant tensors

rand_tensor = torch.rand(size=(2, 3))
ones_tensor = torch.ones(size=(2, 3))
zeros_tensor = torch.zeros(size=(2, 3))
print(rand_tensor)
print(ones_tensor)
print(zeros_tensor)

Tensor & NumPy

Tensors on the CPU and NumPy arrays physically share the same underlying memory, and they can be converted to each other with tensor.numpy() and torch.from_numpy():

# convert torch to numpy
t = torch.ones(5)
n = t.numpy()
print(t)	# tensor([1., 1., 1., 1., 1.])
print(n)	# [1. 1. 1. 1. 1.]

# torch and numpy array share the same underlying memory
t.add_(3)
print(t)	# tensor([4., 4., 4., 4., 4.])
print(n)	# [4. 4. 4. 4. 4.]

# convert numpy to torch
n = np.array([1, 1, 1])
t = torch.from_numpy(n)
print(t)	# tensor([1, 1, 1], dtype=torch.int32)
print(n)	# [1 1 1]
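
Changes in the NumPy array are reflected in the tensor as well, since they share memory. A small sketch continuing the n and t above (np.add with out=n modifies the array in place; the exact integer dtype shown depends on the platform default):

np.add(n, 1, out=n)
print(t)	# tensor([2, 2, 2], dtype=torch.int32)
print(n)	# [2 2 2]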

2. Tensor Attributes

Tensor attributes describe a tensor's shape, datatype, and the device on which it is stored:

tensor = torch.randn(size=(2, 3), requires_grad=True, device='cuda')
print(f'Shape: {tensor.shape}')
print(f'Datatype: {tensor.dtype}')
print(f'Device tensor stored on: {tensor.device}')

# Shape: torch.Size([2, 3])
# Datatype: torch.float32
# Device tensor stored on: cuda:0

3. Tensor Operations

Over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more are comprehensively described here.

Each of them can be run on the GPU (at typically higher speeds than on a CPU). If you’re using Colab, allocate a GPU by going to Edit > Notebook Settings.

Running tensors on the GPU:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor1 = torch.rand(size=(2, 3))
tensor2 = tensor1.to(device)
print(f'Device tensor stored on: {tensor2.device}')
# Device tensor stored on: cuda:0

1. Standard numpy-like indexing and slicing

Set the column at index=1 to zero:

tensor = torch.ones(3, 3)
tensor[:, 1] = 0
print(tensor)
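
A few more standard indexing examples in the same spirit (a small sketch; these mirror NumPy-style indexing):

tensor = torch.ones(3, 3)
print(tensor[0])         # first row
print(tensor[:, 0])      # first column
print(tensor[..., -1])   # last column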

2. Joining tensors

Multiple tensors can be concatenated with torch.cat([t1, t2, t3], dim=1) (dim=1 concatenates along the second dimension, i.e. by column; dim=0 concatenates along the first dimension, i.e. by row):

tensor = torch.ones(3, 3)
t1 = torch.cat([tensor, tensor], dim=0)
print(t1.shape)		# torch.Size([6, 3])
print(t1)

t2 = torch.cat([tensor, tensor, tensor], dim=1)
print(t2.shape)		# torch.Size([3, 9])

3. Multiplying tensors

Multiplication includes the element-wise product and matrix multiplication:

  • Element-wise product: * or t1.mul(t2)
  • Matrix multiplication: t1.matmul(t2) or t1 @ t2

# element-wise product
t1 = torch.ones(2, 2)
print(t1.mul(t1))
print(t1 * t1)

# matrix multiplication
t1 = torch.ones(size=(2, 3))
print(t1.matmul(t1.T))
print(t1 @ t1.T)

Here, T denotes the matrix transpose.

Note: in NumPy, the element-wise product is *, and matrix multiplication is np.dot(A, B), A.dot(B), or np.matmul(A, B).
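
For comparison, a small NumPy sketch of the two kinds of products (assuming two 2x2 arrays A and B):

import numpy as np

A = np.ones((2, 2))
B = np.ones((2, 2))
print(A * B)             # element-wise product
print(np.dot(A, B))      # matrix multiplication
print(A.dot(B))          # same as np.dot(A, B)
print(np.matmul(A, B))   # same as A @ B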


4. In-place operations

Operations with a _ suffix are in-place operations:

tensor = torch.rand(2, 2)
tensor.add_(1)					# in-place add: every element is increased by 1
print(tensor)
tensor.copy_(torch.rand(2, 2))	# in-place copy: overwrites tensor with new values
print(tensor)

That's all for this note ~

For more, see the PyTorch Study Notes series.


REFERENCE: Tensors
