PyTorch Basics (with Code)

1. Basic data types:

PyTorch is essentially a GPU-accelerated tensor library; it has no String type, so strings must be represented by encodings (e.g., index, one-hot, or embedding vectors).
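For illustration, a minimal sketch of encoding characters numerically (the vocabulary here is made up, and F.one_hot assumes a reasonably recent PyTorch):

import torch
import torch.nn.functional as F
vocab = ['a', 'b', 'c']                   # hypothetical toy vocabulary
idx = torch.tensor([vocab.index('b')])    # 'b' -> index 1
F.one_hot(idx, num_classes=len(vocab))    # tensor([[0, 1, 0]])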

Tensors of different dimensionality are used for different things.

dim=0: loss values; dim=1: bias / linear input; dim=2: a batch of linear inputs;

dim=3: a batch of RNN inputs; dim=4: images [b, c, h, w].

# Declare import torch at the top of the file; generate a (2, 3) matrix sampled from N(0, 1), then query and verify its type
import torch
a = torch.randn(2,3)
a.type()        # 'torch.FloatTensor'
type(a)         # <class 'torch.Tensor'>

# Data is stored on the CPU by default; a.cuda() moves it to the GPU
isinstance(a,torch.FloatTensor)        # True
isinstance(a,torch.cuda.FloatTensor)   # False until a = a.cuda()

# Tensors of different dimensionality have different uses.
# dim=0: scalar. 1.3 is 0-dimensional, but [1.3] is 1-dimensional. Scalars are used to represent loss.
torch.tensor(1.)            # tensor(1.)
torch.tensor(1.3)           # tensor(1.3000), a 0-dim scalar
b = torch.tensor(2.2)
b.shape                     # torch.Size([])
len(b.shape)                # 0
b.size()                    # torch.Size([])
torch.tensor([2.2])         # tensor([2.2000]), a 1-dim vector
torch.FloatTensor(2)        # uninitialized FloatTensor of length 2

# dim=1: vector. torch.tensor takes the data content directly; torch.from_numpy(a) imports from a NumPy array a, but note the dtype differs (float64 is preserved, vs. the float32 default). Used for bias / linear input.
import numpy as np
a = np.ones(2)
torch.tensor([2.2])         # pass data directly -> FloatTensor
torch.FloatTensor(2)        # pass a size -> uninitialized tensor of length 2
torch.from_numpy(a)         # NumPy -> Tensor, dtype float64 (DoubleTensor)
a = torch.randn(2,3)

# dim=2: a batch of linear inputs; keep dim, size/shape, and tensor contents distinct
# dim=3: a batch of RNN inputs
# dim=4: images [b, c, h, w], e.g. 2 three-channel 28*28 images
a
Out[47]: 
tensor([[ 2.4067, -1.0651, -1.2691],
        [ 1.1589,  0.0887, -1.4517]])

a.shape
Out[48]: torch.Size([2, 3])

a.size(0)
Out[49]: 2

a.size(1)
Out[50]: 3

a.shape[1]
Out[51]: 3

a = torch.randn(1,2,3)

a
Out[53]: 
tensor([[[ 1.2273,  0.3963, -0.4747],
         [ 0.7900,  0.7536,  1.0597]]])

a.size(0)
Out[54]: 1

a.shape
Out[55]: torch.Size([1, 2, 3])

list(a.shape)
Out[56]: [1, 2, 3]

a[0]
Out[57]: 
tensor([[ 1.2273,  0.3963, -0.4747],
        [ 0.7900,  0.7536,  1.0597]])

a = torch.rand(2,3,28,28)

# Query the shape / number of elements / number of dimensions
a.shape
Out[60]: torch.Size([2, 3, 28, 28])

Data types: pay attention to where data lives. PyTorch involves frequent conversions among Variable, Tensor, and NumPy arrays, as well as transfers between CUDA and CPU.
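A minimal sketch of the common conversions (the CUDA calls assume a GPU is available):

import torch
import numpy as np
a = torch.randn(2, 3)
n = a.numpy()                    # Tensor -> NumPy (shares memory on CPU)
t = torch.from_numpy(n)          # NumPy -> Tensor (also shares memory)
if torch.cuda.is_available():
    g = a.cuda()                 # CPU -> GPU
    back = g.cpu()               # GPU -> CPU (required before .numpy())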

2. Creating tensors: from NumPy arrays or Python lists

Random: torch.rand(2,3), torch.randint(1, 10, (3,)) (the size must be a tuple), torch.rand_like(a)
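For instance (shapes and ranges are just for illustration), continuing the session above:

torch.rand(2, 3)                 # uniform samples on [0, 1)
torch.randint(1, 10, (3,))       # integers in [1, 10); size passed as a tuple
a = torch.rand(2, 3)
torch.rand_like(a)               # uniform samples with the same shape as a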

torch.arange(0,10)
Out[67]: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

torch.arange(0,10,2)
Out[68]: tensor([0, 2, 4, 6, 8])

torch.range(0,10)
__main__:1: UserWarning: torch.range is deprecated in favor of torch.arange and will be removed in 0.5. Note that arange generates values in [start; end), not [start; end].
Out[69]: tensor([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

torch.linspace(0,10,steps=10)
Out[70]: 
tensor([ 0.0000,  1.1111,  2.2222,  3.3333,  4.4444,  5.5556,  6.6667,  7.7778,
         8.8889, 10.0000])

torch.linspace(0,10,steps=11)
Out[71]: tensor([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

torch.logspace(0,1,steps=10)
Out[72]: 
tensor([ 1.0000,  1.2915,  1.6681,  2.1544,  2.7826,  3.5938,  4.6416,  5.9948,
         7.7426, 10.0000])

torch.logspace(0,-1,steps=10)
Out[73]: 
tensor([1.0000, 0.7743, 0.5995, 0.4642, 0.3594, 0.2783, 0.2154, 0.1668, 0.1292,
        0.1000])

torch.randperm(10)
Out[74]: tensor([6, 5, 3, 7, 1, 0, 4, 2, 9, 8])

Initialization: torch.empty() allocates uninitialized memory. The default type is FloatTensor; reinforcement learning typically uses DoubleTensor.

Normal distribution: torch.randn(3,3), torch.normal(); constant fill: torch.full([2,3], 7)

Random permutation: torch.randperm is the counterpart of shuffle in TensorFlow.
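A quick sketch of these creation routines, continuing the session above:

torch.full([2, 3], 7)                                  # 2x3 tensor filled with 7
torch.normal(mean=torch.zeros(3), std=torch.ones(3))   # element-wise N(0, 1) samples
idx = torch.randperm(4)                                # random permutation of 0..3
a = torch.rand(4, 3)
a[idx]                                                 # shuffle rows, like TF shuffle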

3. Indexing and slicing

a[:2,1:,:,:] — `:2` takes from the start up to index 2, excluding 2; `:` takes everything; `1:` takes from index 1 to the end; `0:28:2` samples every second element, with the start and end specified and the end excluded.

a.index_select(0,torch.tensor([0,2])).shape

torch.masked_select(): mask = x.ge(0.5) builds the mask (elements >= 0.5).

a = torch.rand(4,3,28,28)

a[0].shape
Out[76]: torch.Size([3, 28, 28])

a[0,0].shape
Out[77]: torch.Size([28, 28])

a[0,0,2,4].shape
Out[78]: torch.Size([])

a[0,0,2,4].shape
Out[79]: torch.Size([])

a.shape
Out[80]: torch.Size([4, 3, 28, 28])

a[:2].shape
Out[81]: torch.Size([2, 3, 28, 28])

a[:2,:1,:,:].shape
Out[82]: torch.Size([2, 1, 28, 28])

a[:2,1:,:,:].shape
Out[83]: torch.Size([2, 2, 28, 28])

a[:2,-1:,:,:].shape
Out[84]: torch.Size([2, 1, 28, 28])

a[:,:,0:28:2,0:28:2].shape
Out[85]: torch.Size([4, 3, 14, 14])

a.index_select(0,torch.tensor([0,2])).shape
Out[86]: torch.Size([2, 3, 28, 28])

a.index_select(2,torch.tensor([0,2])).shape
Out[87]: torch.Size([4, 3, 2, 28])

a.index_select(2,torch.arange(8)).shape
Out[88]: torch.Size([4, 3, 8, 28])

a.index_select(2,torch.arange(28)).shape
Out[89]: torch.Size([4, 3, 28, 28])

a[...].shape
Out[90]: torch.Size([4, 3, 28, 28])

a[:,1,...].shape
Out[91]: torch.Size([4, 28, 28])

a[...,:2].shape
Out[92]: torch.Size([4, 3, 28, 2])

4. Dimension transforms

view/reshape change the shape; squeeze/unsqueeze remove/insert dimensions of size 1

# masked_select example (continuation of section 3)
x = torch.randn(2,3)

x
Out[94]: 
tensor([[ 0.5830, -1.7913,  0.6734],
        [-0.2519,  0.8256, -0.5367]])

mask = x.ge(0.65)

mask
Out[96]: 
tensor([[0, 0, 1],
        [0, 1, 0]], dtype=torch.uint8)

torch.masked_select(x,mask)
Out[97]: tensor([0.6734, 0.8256])

torch.masked_select(x,mask).shape
Out[98]: torch.Size([2])

a = torch.rand(4,1,28,28)

a.shape
Out[100]: torch.Size([4, 1, 28, 28])

a.view(4,1*28*28)
Out[101]: 
tensor([[0.6230, 0.2053, 0.6135,  ..., 0.8385, 0.7093, 0.6174],
        [0.3168, 0.2403, 0.6323,  ..., 0.3382, 0.6624, 0.4497],
        [0.2216, 0.1348, 0.4973,  ..., 0.2912, 0.3401, 0.3038],
        [0.3945, 0.2593, 0.6659,  ..., 0.7032, 0.7229, 0.7585]])

a.view(4,1*28*28).shape
Out[102]: torch.Size([4, 784])

a.view(4*1*28,28).shape
Out[103]: torch.Size([112, 28])

a.view(4*1,28,28).shape
Out[104]: torch.Size([4, 28, 28])

a.shape
Out[105]: torch.Size([4, 1, 28, 28])

a.unsqueeze(0).shape
Out[106]: torch.Size([1, 4, 1, 28, 28])

a.unsqueeze(-1).shape
Out[107]: torch.Size([4, 1, 28, 28, 1])

a.unsqueeze(4).shape
Out[108]: torch.Size([4, 1, 28, 28, 1])

a.unsqueeze(-4).shape
Out[109]: torch.Size([4, 1, 1, 28, 28])

a.unsqueeze(-5).shape
Out[110]: torch.Size([1, 4, 1, 28, 28])

b = torch.tensor([1.3,2.2])

b.unsqueeze(-1)
Out[112]: 
tensor([[1.3000],
        [2.2000]])

b.unsqueeze(-1).shape
Out[113]: torch.Size([2, 1])

b.unsqueeze(0)
Out[114]: tensor([[1.3000, 2.2000]])

b.unsqueeze(0).shape
Out[115]: torch.Size([1, 2])

a.shape
Out[116]: torch.Size([4, 1, 28, 28])

a.squeeze().shape
Out[117]: torch.Size([4, 28, 28])

a.squeeze(0).shape
Out[118]: torch.Size([4, 1, 28, 28])

a.squeeze(1).shape
Out[119]: torch.Size([4, 28, 28])

a.shape
Out[120]: torch.Size([4, 1, 28, 28])

a.expand(4,32,28,28).shape
Out[121]: torch.Size([4, 32, 28, 28])

a.expand(4,-1,28,28).shape
Out[122]: torch.Size([4, 1, 28, 28])

a.shape
Out[123]: torch.Size([4, 1, 28, 28])

a.repeat(4,1,28,28).shape
Out[124]: torch.Size([16, 1, 784, 784])

transpose/permute swap or reorder dimensions; expand/repeat enlarge dimensions (expand is a zero-copy view, repeat copies data)
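transpose/permute are not shown in the transcript above; a minimal sketch, continuing the session:

a = torch.rand(4, 3, 28, 32)
a.transpose(1, 3).shape             # swap dims 1 and 3 -> torch.Size([4, 32, 28, 3])
a.permute(0, 2, 3, 1).shape         # [b,c,h,w] -> [b,h,w,c] -> torch.Size([4, 28, 32, 3])
# transpose/permute return non-contiguous views; call .contiguous() before .view()
a.permute(0, 2, 3, 1).contiguous().view(4, -1).shape   # torch.Size([4, 2688])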

5. Broadcast (automatic expansion):

Broadcasting expands dimensions without copying data: shapes are aligned from the trailing dimension, size-1 dimensions are stretched to match, and missing leading dimensions are inserted automatically.
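For example, adding a per-channel bias to a batch of feature maps relies on broadcasting:

a = torch.rand(4, 32, 14, 14)
bias = torch.rand(32, 1, 1)
(a + bias).shape                    # bias broadcasts over batch, h, w
# torch.Size([4, 32, 14, 14])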

6. Concatenation and splitting:

Concatenate: cat (all non-concatenated dimensions must match), stack (creates a new dimension; shapes must be identical)

Split: split (by length), chunk (by number of chunks)
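chunk does not appear in the transcript below; a short sketch alongside split:

c = torch.rand(2, 32, 8)
aa, bb = c.chunk(2, dim=0)          # split into 2 chunks along dim 0
aa.shape, bb.shape                  # (torch.Size([1, 32, 8]), torch.Size([1, 32, 8]))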

7. Math operations

Add/subtract/multiply/divide: add / +, sub / -, element-wise mul / *, matrix multiply matmul / mm (2-D only) / @, div / /

Powers: pow(a, 2/3/4/0.5 ...); a**2 is the square, a**0.5 the square root; exp(), log()
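A compact sketch of these operations, continuing the session:

a = torch.ones(2, 2) * 3
a * a                               # element-wise: every entry is 9
a @ a                               # matrix multiply: every entry is 18
a.pow(2)                            # same as a**2
a**0.5                              # element-wise square root
torch.exp(torch.ones(2, 2))         # e^1, about 2.7183
torch.log(torch.exp(torch.ones(2, 2)))   # back to 1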

Floor / ceil / truncate toward zero / fractional part / round:

a.floor()  a.ceil()  a.trunc()  a.frac()  a.round()
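For example, with a = torch.tensor(3.1415):

a = torch.tensor(3.1415)
a.floor(), a.ceil(), a.trunc(), a.frac(), a.round()
# (tensor(3.), tensor(4.), tensor(3.), tensor(0.1415), tensor(3.))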

clamp is used for gradient clipping; w.grad.norm(2) computes the 2-norm.
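A minimal sketch of clamp-based clipping (the threshold 10 is arbitrary):

grad = torch.rand(2, 3) * 15
grad.clamp(10)                      # clamp(min=10): raise every element to at least 10
grad.clamp(0, 10)                   # clamp(min, max): clip all elements into [0, 10]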

8. Statistics

Norms of vectors and matrices: norm

Mean mean, sum sum, product prod, maximum max, minimum min
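For example, on a small tensor (continuing the session):

a = torch.arange(8).float().view(2, 4)
a.norm(2)                           # 2-norm over all elements
a.norm(1, dim=1)                    # 1-norm of each row -> tensor([ 6., 22.])
a.mean(), a.sum(), a.prod()
a.max(), a.min()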

argmax / argmin return indices; the tensor is flattened by default, so pass a dimension (e.g., dim=1) to index along that dimension.

k-th smallest: kthvalue; topk gives the top-5 / top-1 values used for accuracy.

Comparisons: torch.eq(a,b), >, >=, <, <=, !=, ==
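A short sketch (values are random, so outputs vary):

a = torch.randn(4, 10)
a.argmax()                          # index into the flattened tensor
a.argmax(dim=1)                     # per-row argmax, shape [4]
a.kthvalue(3, dim=1)                # 3rd smallest per row -> (values, indices)
torch.eq(a, a)                      # element-wise equality, all True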

9. Advanced operations

where: torch.where(condition, x, y) -> Tensor

gather: torch.gather(input, dim, index, out=None) -> Tensor

a = torch.rand(4,3,28,28)

b = torch.rand(7,3,28,28)

torch.cat([a,b],dim=0).shape
Out[174]: torch.Size([11, 3, 28, 28])


a = torch.rand(4,28)

b = torch.rand(4,28)

torch.stack([a,b],dim=0).shape
Out[179]: torch.Size([2, 4, 28])

b = torch.rand(32,8)

a = torch.rand(32,8)

a.shape
Out[187]: torch.Size([32, 8])

c = torch.stack([a,b],dim=2)

c.shape
Out[189]: torch.Size([32, 8, 2])

c = torch.stack([a,b],dim=0)

c.shape
Out[191]: torch.Size([2, 32, 8])

aa,bb = c.split([1,1],dim = 0)

aa.shape,bb.shape
Out[193]: (torch.Size([1, 32, 8]), torch.Size([1, 32, 8]))

aa,bb = c.split(1,dim = 0)

aa.shape,bb.shape
Out[195]: (torch.Size([1, 32, 8]), torch.Size([1, 32, 8]))

a = torch.rand(2,3)

a
Out[198]: 
tensor([[0.8050, 0.2007, 0.8133],
        [0.9259, 0.9294, 0.5924]])

cond = torch.rand(2,3)

a=torch.zeros(2,3)

b=torch.ones(2,3)

torch.where(cond>0.8,a,b)
Out[202]: 
tensor([[1., 0., 0.],
        [1., 1., 1.]])

cond
Out[203]: 
tensor([[0.7988, 0.9074, 0.9060],
        [0.1050, 0.2191, 0.3774]])

a=torch.zeros(2,3)

a
Out[205]: 
tensor([[0., 0., 0.],
        [0., 0., 0.]])

b=torch.ones(2,3)

b
Out[207]: 
tensor([[1., 1., 1.],
        [1., 1., 1.]])

torch.where(cond>0.8,a,b)
Out[208]: 
tensor([[1., 0., 0.],
        [1., 1., 1.]])

# prob is assumed to be a (4, 10) score tensor, e.g. prob = torch.randn(4, 10)
idx = prob.topk(dim = 1,k = 3)

idx
Out[212]: 
torch.return_types.topk(
values=tensor([[2.2290, 1.6039, 0.6530],
        [0.7446, 0.5598, 0.5175],
        [1.4924, 0.4901, 0.2325],
        [0.7950, 0.5144, 0.2997]]),
indices=tensor([[8, 5, 4],
        [8, 7, 0],
        [5, 3, 1],
        [2, 0, 3]]))

idx = idx[1]

idx
Out[214]: 
tensor([[8, 5, 4],
        [8, 7, 0],
        [5, 3, 1],
        [2, 0, 3]])
		
		
label = torch.arange(10)+100

label
Out[217]: tensor([100, 101, 102, 103, 104, 105, 106, 107, 108, 109])

torch.gather(label.expand(4,10),dim=1,index=idx.long())
Out[218]: 
tensor([[108, 105, 104],
        [108, 107, 100],
        [105, 103, 101],
        [102, 100, 103]])