Common PyTorch tensor APIs


Link: the official PyTorch documentation

1. Creating tensors

1.1 Creating tensors with specific values

Essentially the same as NumPy.

1. # Convert concrete data directly
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
# NumPy counterparts: np.array() / np.asarray()
>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
...              dtype=torch.float64,
...              device=torch.device('cuda:0'))  # creates a double tensor on a CUDA device
tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')
2. # Convert a NumPy array to a tensor (shares memory with the array)
torch.from_numpy(ndarray) → Tensor
>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1,  2,  3])
>>> t[0] = -1
>>> a
array([-1,  2,  3])
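As the example shows, torch.from_numpy shares memory with the source array. The reverse conversion, Tensor.numpy(), shares memory the same way (a quick check; this applies to CPU tensors only):

>>> t = torch.tensor([1, 2, 3])
>>> n = t.numpy()  # shares memory with t (CPU tensors only)
>>> n[0] = -1
>>> t
tensor([-1,  2,  3])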
3.
# All zeros. Slightly more permissive than NumPy here: size can be a variable number of ints or a list/tuple, whereas NumPy's shape must be given as a tuple or list
torch.zeros(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# All ones
torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# Uninitialized values; I have not needed it much in practice
torch.empty(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) → Tensor
# Ones on the diagonal (the identity matrix when square); note the parameter list differs slightly from the above
torch.eye(n, m=None, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# Fill every element with a given value
torch.full(size, fill_value, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

=============================================
size for zeros/ones/empty:
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
==================================================
size for full:
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
>>> torch.zeros(2, 3)
tensor([[ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])
>>> torch.zeros((2, 3))
tensor([[0., 0., 0.],
        [0., 0., 0.]])
>>> torch.eye(4,5)
tensor([[1., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.]])
>>> torch.full((2,3),5)
tensor([[5, 5, 5],
        [5, 5, 5]])
4. torch.zeros_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
====================================================
is equivalent to torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device) # torch.ones_like etc. follow the same pattern and are omitted below
5. # Similar to Python's built-in range
torch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# covers [start, end)
# torch.range() covers [start, end] inclusive, has no NumPy counterpart, and its design is counter-intuitive; it is not covered further
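A quick example:

>>> torch.arange(0, 10, 2)
tensor([0, 2, 4, 6, 8])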
6. # linspace
torch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# Same as NumPy: inclusive on both ends, [start, end]
>>> torch.linspace(1,8,10)
tensor([1.0000, 1.7778, 2.5556, 3.3333, 4.1111, 4.8889, 5.6667, 6.4444, 7.2222,
        8.0000])
7. # logspace
torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
# Same as NumPy: steps points from base**start to base**end
>>> torch.logspace(1,10,10)
tensor([1.0000e+01, 1.0000e+02, 1.0000e+03, 1.0000e+04, 1.0000e+05, 1.0000e+06,
        1.0000e+07, 1.0000e+08, 1.0000e+09, 1.0000e+10])

1.2 Creating tensors with random values

Also closely parallels the creation functions in numpy.random.

1. # Uniform distribution over [0, 1)
torch.rand(*size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor
# Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1)
# size works as above: variadic ints or a list/tuple (NumPy splits this between np.random.rand, which takes variadic ints, and np.random.random, which takes a tuple/list shape)
>>> torch.rand((2,2))
tensor([[0.8793, 0.3901],
        [0.8258, 0.9897]])
>>> torch.rand(2,2)
tensor([[0.9329, 0.3896],
        [0.4102, 0.7895]])
2. # Uniform integers over [low, high); note that size here changes form: it must be a list or tuple
torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
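For example (the values are random, so your output will differ):

>>> torch.randint(0, 10, (2, 3))
tensor([[7, 0, 4],
        [5, 2, 9]])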
3. # Normal distribution, mu=0, sigma=1
torch.randn(*size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor

There is no function named like numpy.random.normal here (torch.normal(mean, std, size) covers that case, though); with randn you can also just scale and shift yourself:

# normal distribution with mean 3 and standard deviation 5
torch.randn((3,3)) * 5 + 3
4. # Returns a random permutation of the integers [0, n)
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor
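For example (the permutation is random):

>>> torch.randperm(5)
tensor([3, 0, 4, 1, 2])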

2. Basic tensor attributes

1. dtype: if the data is not integral, the default is generally torch.float32

2. shape
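A quick look at both:

>>> a = torch.tensor([1.0, 2.0])
>>> a.dtype
torch.float32
>>> a.shape
torch.Size([2])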

3. Common APIs

3.1 torch.is_tensor

1. # Check whether obj is a tensor; same effect as the built-in isinstance(obj, torch.Tensor)
torch.is_tensor(obj)
>>> a = torch.zeros(2,2) + 1
>>> a
tensor([[1., 1.],
        [1., 1.]])
>>> torch.is_tensor(a)
True

3.2 torch.numel

2. # Count how many elements the tensor contains
torch.numel(input) → int
=================
input (Tensor) – the input tensor.
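For example:

>>> torch.numel(torch.zeros(4, 4))
16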

3.3 Concatenation and stacking

3.3.1 torch.cat (concatenate)

3. # Concatenate; essentially identical to np.concatenate. Note the contrast with stack (see the stack examples below): the number of dimensions stays the same
# ("dimensions stay the same" means no (2,4) ---> (2,2,4); the shape changes like (2,4) ---> (4,4))
torch.cat(tensors, dim=0, *, out=None) → Tensor
torch.concat(tensors, dim=0, *, out=None) → Tensor # Alias of torch.cat().
torch.concatenate(tensors, axis=0, out=None) → Tensor # Alias of torch.cat().
>>> a = torch.arange(0,10).reshape(2,5)
>>> a
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]])
>>> b = torch.arange(100,105).reshape(1,5)
>>> b
tensor([[100, 101, 102, 103, 104]])
>>> c = torch.tensor([888,999]).reshape(1,2).T
>>> c
tensor([[888],
        [999]])
>>> torch.cat((a,b),0)
tensor([[  0,   1,   2,   3,   4],
        [  5,   6,   7,   8,   9],
        [100, 101, 102, 103, 104]])
>>> torch.cat((a,c),1)
tensor([[  0,   1,   2,   3,   4, 888],
        [  5,   6,   7,   8,   9, 999]])

3.3.2 torch.stack (stack)

# Concatenates a sequence of tensors along a new dimension.
# All tensors must have the same shape
# The number of dimensions increases
# (here "increases" means not (2,4) ---> (4,4) but (2,4) ---> (2,2,4))
4. torch.stack(tensors, dim=0, *, out=None) → Tensor
>>> a = torch.arange(0,10).reshape(2,5)
>>> a
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]])
>>> b = torch.arange(100,110).reshape(2,5)
>>> b
tensor([[100, 101, 102, 103, 104],
        [105, 106, 107, 108, 109]])
=========================================
>>> torch.stack((a,b),0)
tensor([[[  0,   1,   2,   3,   4],
         [  5,   6,   7,   8,   9]],

        [[100, 101, 102, 103, 104],
         [105, 106, 107, 108, 109]]])
>>> torch.stack((a,b),0).shape
torch.Size([2, 2, 5])
========================================
>>> torch.concatenate((a,b),0)
tensor([[  0,   1,   2,   3,   4],
        [  5,   6,   7,   8,   9],
        [100, 101, 102, 103, 104],
        [105, 106, 107, 108, 109]])
>>> torch.concatenate((a,b),0).shape
torch.Size([4, 5])
=======================================
>>> torch.stack((a,b),1)
tensor([[[  0,   1,   2,   3,   4],
         [100, 101, 102, 103, 104]],

        [[  5,   6,   7,   8,   9],
         [105, 106, 107, 108, 109]]])
>>> torch.stack((a,b),1).shape
torch.Size([2, 2, 5])
========================================
>>> torch.concatenate((a,b),1)
tensor([[  0,   1,   2,   3,   4, 100, 101, 102, 103, 104],
        [  5,   6,   7,   8,   9, 105, 106, 107, 108, 109]])
>>> torch.concatenate((a,b),1).shape
torch.Size([2, 10])

3.3.3 torch.hstack (similar to concatenate)

# Stack tensors in sequence horizontally (column wise).
# along axis 0 or 1; does not add a dimension
5. torch.hstack(tensors, *, out=None) → Tensor

This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors.

Equivalently: for 1-D tensors, torch.cat(tensors, dim=0); for all other tensors, torch.cat(tensors, dim=1).

>>> c = torch.arange(0,10)
>>> c
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> d = torch.arange(100,110)
>>> d
tensor([100, 101, 102, 103, 104, 105, 106, 107, 108, 109])
>>> torch.hstack((c,d))
tensor([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, 100, 101, 102, 103,
        104, 105, 106, 107, 108, 109])
==================================================
>>> a
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]])
>>> b
tensor([[100, 101, 102, 103, 104],
        [105, 106, 107, 108, 109]])
>>> torch.hstack((a,b))
tensor([[  0,   1,   2,   3,   4, 100, 101, 102, 103, 104],
        [  5,   6,   7,   8,   9, 105, 106, 107, 108, 109]])

3.3.4 torch.dstack (rarely seen in real use)

# This is equivalent to concatenation along the third axis after 1-D and 2-D tensors have been reshaped by torch.atleast_3d(). See below.
First expand everything to 3-D, then concatenate((tensor1, tensor2, ...), dim=2)
6. torch.dstack(tensors, *, out=None) → Tensor
===========
>>> a = torch.tensor([1, 2, 3]) # expanded to 3-D this becomes [[[1],[2],[3]]], shape (1, 3, 1)
>>> b = torch.tensor([4, 5, 6])
>>> torch.dstack((a,b))
tensor([[[1, 4],
         [2, 5],
         [3, 6]]])
======== equivalent to =========
>>> a = torch.tensor([1, 2, 3])
>>> a
tensor([1, 2, 3])
>>> a = torch.atleast_3d(a)
>>> a
tensor([[[1],
         [2],
         [3]]])
>>> b = torch.tensor([4, 5, 6])
>>> b = torch.atleast_3d(b)
>>> b
tensor([[[4],
         [5],
         [6]]])
>>> torch.concatenate((a,b),2)
tensor([[[1, 4],
         [2, 5],
         [3, 6]]])
6.5 torch.atleast_3d(*tensors)
# Returns a 3-dimensional view of each input tensor with zero dimensions. Input tensors with three or more dimensions are returned as-is.
=========
>>> x = torch.tensor(0.5)
>>> x
tensor(0.5000)
>>> torch.atleast_3d(x)
tensor([[[0.5000]]])
===
>>> y = torch.arange(4).view(2, 2)
>>> y
tensor([[0, 1],
        [2, 3]])
>>> torch.atleast_3d(y)
tensor([[[0],
         [1]],

        [[2],
         [3]]])

3.3.5 torch.vstack

This is equivalent to concatenation along the first axis after all 1-D tensors have been reshaped by torch.atleast_2d().

Similar in spirit to dstack:

expand the tensors to at least 2-D,

then concatenate((t1, t2, …), dim=0).
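A quick example with 1-D inputs:

>>> torch.vstack((torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])))
tensor([[1, 2, 3],
        [4, 5, 6]])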

3.4 Splitting

3.4.1 torch.chunk

7. # Split into chunks; the split may be uneven, in which case the last chunk comes out smaller
   # np.split(), by contrast, requires the split to be exact
torch.chunk(input, chunks, dim=0) → List of Tensors
================================================
input (Tensor) – the tensor to split
chunks (int) – number of chunks to return
>>> a = torch.randint(0,10,(3,2))
>>> a
tensor([[4, 9],
        [6, 0],
        [9, 3]])
>>> x,y = torch.chunk(a,2,dim=0)
>>> x
tensor([[4, 9],
        [6, 0]])
>>> y
tensor([[9, 3]])

3.4.2 torch.split

8. torch.split(tensor, split_size_or_sections, dim=0)

Splits the tensor into chunks. Each chunk is a view of the original tensor.

If split_size_or_sections is an integer type, then tensor will be split into equally sized chunks (if possible). Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.

If split_size_or_sections is a list, then tensor will be split into len(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections.
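Both modes in one example, mirroring the behavior described above:

>>> a = torch.arange(10).reshape(5, 2)
>>> torch.split(a, 2)
(tensor([[0, 1],
         [2, 3]]),
 tensor([[4, 5],
         [6, 7]]),
 tensor([[8, 9]]))
>>> torch.split(a, [1, 4])
(tensor([[0, 1]]),
 tensor([[2, 3],
         [4, 5],
         [6, 7],
         [8, 9]]))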

3.5 Dimension operations

3.5.1 torch.permute

Typically used for (h, w, c) ===> (c, h, w)

9. # Reorder dimensions; the NumPy counterpart is np.transpose()
torch.permute(input, dims) → Tensor
======
- input (Tensor) – the input tensor.
- dims (tuple of int) – The desired ordering of dimensions
>>> a = torch.arange(0,27).reshape(3,3,3)
>>> a
tensor([[[ 0,  1,  2],  # treat [0,1,2] as one pixel's RGB; [[0,1,2],[3,4,5],[6,7,8]] is the RGB of the first row of pixels
         [ 3,  4,  5],
         [ 6,  7,  8]],

        [[ 9, 10, 11],
         [12, 13, 14],
         [15, 16, 17]],

        [[18, 19, 20],
         [21, 22, 23],
         [24, 25, 26]]])
>>> torch.permute(a,(2,0,1))
tensor([[[ 0,  3,  6],  # [[0,3,6],[9,12,15],[18,21,24]] is the red channel of the whole image
         [ 9, 12, 15],
         [18, 21, 24]],

        [[ 1,  4,  7],
         [10, 13, 16],
         [19, 22, 25]],

        [[ 2,  5,  8],
         [11, 14, 17],
         [20, 23, 26]]])

3.5.2 torch.reshape

10. torch.reshape(input, shape) → Tensor

Same as NumPy.
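For instance, -1 infers a dimension, just as in NumPy:

>>> torch.arange(6).reshape(2, -1)
tensor([[0, 1, 2],
        [3, 4, 5]])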

3.5.3 torch.squeeze

11. torch.squeeze(input, dim=None) → Tensor

Same as NumPy: it removes size-1 dimensions (drops a pair of brackets). With no dim, every size-1 dimension is removed; with a dim, only that dimension is removed, and if that dimension's size is not 1 the call is a no-op (unlike NumPy, where np.squeeze raises an error in that case).

Returns a tensor with all specified dimensions of input of size 1 removed.

For example, if input is of shape: (A×1×B×C×1×D) then the input.squeeze() will be of shape: (A×B×C×D).

When dim is given, a squeeze operation is done only in the given dimension(s). If input is of shape:(A×1×B), squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape(A×B).
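Checking the shapes described above:

>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> torch.squeeze(x).shape
torch.Size([2, 2, 2])
>>> torch.squeeze(x, 0).shape  # dim 0 has size 2, so nothing happens
torch.Size([2, 1, 2, 1, 2])
>>> torch.squeeze(x, 1).shape
torch.Size([2, 2, 1, 2])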

3.5.4 torch.unsqueeze

12. torch.unsqueeze(input, dim) → Tensor

Corresponds to np.expand_dims; used to add a dimension.

Inserts a dimension of size one at the specified position.

Returns a new tensor with a dimension of size one inserted at the specified position.

The returned tensor shares the same underlying data with this tensor.

A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.dim() + 1.
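For example:

>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[1],
        [2],
        [3],
        [4]])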
