PyTorch notes

Slicing

import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# advanced indexing: row indices pair elementwise with column indices,
# picking a[1][1] = 5 and a[0][0] = 1
print(a[[[1], [0]], [[1], [0]]])
tensor([[5],
        [1]])

broadcast

a = torch.tensor([[1], [2], [3]])   # shape (3, 1)
b = torch.tensor([1, 2, 3])         # shape (3,)
# b broadcasts to (1, 3); both operands expand to (3, 3)
print(a + b)
tensor([[2, 3, 4],
        [3, 4, 5],
        [4, 5, 6]])

scatter

scatter_(dim, index, src) → Tensor

Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension == dim.

For a 3-D tensor, self is updated as:

self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
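
A minimal runnable sketch of the dim == 0 case (tensor values chosen for illustration):

import torch

src = torch.arange(1, 7, dtype=torch.float32).reshape(2, 3)  # [[1,2,3],[4,5,6]]
index = torch.tensor([[0, 1, 2], [2, 0, 1]])
out = torch.zeros(3, 3)
out.scatter_(0, index, src)   # out[index[i][j]][j] = src[i][j]
print(out)
# tensor([[1., 5., 0.],
#         [0., 2., 6.],
#         [4., 0., 3.]])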

gather

torch.gather(input, dim, index, out=None) → Tensor
Gathers values along the axis dim at the positions specified by the index tensor.
For a 3-D tensor, the output is defined as:

out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2
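
A minimal sketch of the dim == 1 case (tensor values chosen for illustration):

import torch

a = torch.tensor([[1, 2], [3, 4]])
index = torch.tensor([[0, 0], [1, 0]])
print(torch.gather(a, 1, index))   # out[i][j] = a[i][index[i][j]]
# tensor([[1, 1],
#         [4, 3]])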

Conv2d

Input: (N, C_in, H_in, W_in)

Output: (N, C_out, H_out, W_out)
Initialization

torch.nn.Conv2d(in_channels: int,
                out_channels: int,
                kernel_size: Union[T, Tuple[T, T]],
                stride: Union[T, Tuple[T, T]] = 1,
                padding: Union[T, Tuple[T, T]] = 0,
                dilation: Union[T, Tuple[T, T]] = 1,
                groups: int = 1, bias: bool = True,
                padding_mode: str = 'zeros')

Output

Hout = floor((Hin + 2×padding[0] − dilation[0]×(kernel_size[0]−1) − 1) / stride[0] + 1)
Wout = floor((Win + 2×padding[1] − dilation[1]×(kernel_size[1]−1) − 1) / stride[1] + 1)
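
A quick shape check of the formula above (layer parameters chosen for illustration):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 3, 32, 32)
# Hout = floor((32 + 2*1 - 1*(3-1) - 1) / 2 + 1) = 16
print(conv(x).shape)   # torch.Size([1, 8, 16, 16])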

ConvTranspose2d

Initialization

torch.nn.ConvTranspose2d(in_channels: int,
                         out_channels: int,
                         kernel_size: Union[T, Tuple[T, T]],
                         stride: Union[T, Tuple[T, T]] = 1,
                         padding: Union[T, Tuple[T, T]] = 0,
                         output_padding: Union[T, Tuple[T, T]] = 0,
                         groups: int = 1, bias: bool = True,
                         dilation: int = 1,
                         padding_mode: str = 'zeros')

Input/Output

Input: (N, C_in, H_in, W_in)

Output: (N, C_out, H_out, W_out)

Hout=(Hin−1)×stride[0]−2×padding[0]+dilation[0]×(kernel_size[0]−1)+output_padding[0]+1
Wout=(Win−1)×stride[1]−2×padding[1]+dilation[1]×(kernel_size[1]−1)+output_padding[1]+1
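
A quick shape check (parameters chosen to invert the Conv2d example above):

import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(8, 3, kernel_size=3, stride=2, padding=1, output_padding=1)
x = torch.randn(1, 8, 16, 16)
# Hout = (16-1)*2 - 2*1 + 1*(3-1) + 1 + 1 = 32
print(deconv(x).shape)   # torch.Size([1, 3, 32, 32])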

BatchNorm2d


torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1,
                     affine=True, track_running_stats=True)

    Input: (N, C, H, W)

    Output: (N, C, H, W) (same shape as input)
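
A minimal sketch; each channel is normalized with its batch statistics, then scaled and shifted by the learnable affine parameters:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=8)
x = torch.randn(4, 8, 16, 16)
y = bn(x)   # per channel: (x - mean) / sqrt(var + eps) * weight + bias
print(y.shape)                  # torch.Size([4, 8, 16, 16]), same as input
print(y.mean(dim=(0, 2, 3)))    # per-channel means are ~0 after normalization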

Conv1d

torch.nn.Conv1d(in_channels, out_channels, kernel_size, 
stride=1, padding=0, dilation=1, groups=1, bias=True, 
padding_mode='zeros', device=None, dtype=None)

out(N_i, C_out_j) = bias(C_out_j) + Σ_{k=0}^{C_in−1} weight(C_out_j, k) ⋆ input(N_i, k)

Input: (N, C_in, L_in)
Output: (N, C_out, L_out)
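
A quick shape check (the parameters follow the same pattern as Conv2d, over one spatial dimension):

import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
x = torch.randn(20, 16, 50)
# Lout = floor((50 - (3-1) - 1) / 2 + 1) = 24
print(conv(x).shape)   # torch.Size([20, 33, 24])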

AdaptiveMaxPool1d

Like AdaptiveMaxPool2d, AdaptiveMaxPool1d automatically derives the kernel_size (and stride) from output_size, then applies max pooling.
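
A minimal sketch; only output_size is given, the kernel and stride are chosen internally:

import torch
import torch.nn as nn

pool = nn.AdaptiveMaxPool1d(output_size=5)
x = torch.randn(1, 64, 8)
print(pool(x).shape)   # torch.Size([1, 64, 5])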

grid_sample

torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')

input: the input tensor, of shape [N, C, H_in, W_in]
grid: a flow field of shape [N, H_out, W_out, 2]; the last dimension specifies, for each output location (H_out_i, W_out_i), the neighborhood of input to sample from. The values are normalized to [-1, 1].

input: (N, C, H_in, W_in)
grid: (N, H_out, W_out, 2)
output: (N, C, H_out, W_out)
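
A minimal sketch using an identity flow field built with F.affine_grid, so the sampled output equals the input:

import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])   # identity affine transform
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)  # (N, H_out, W_out, 2) in [-1, 1]
y = F.grid_sample(x, grid, mode='bilinear', padding_mode='zeros', align_corners=False)
print(torch.allclose(x, y))   # True: the identity flow reproduces the input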

torch.randint

torch.randint(2, [3, 3])
Returns a 3x3 tensor whose elements are 0 or 1.
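
Both call forms, sketched:

import torch

print(torch.randint(2, (3, 3)))     # high only: values drawn from {0, 1}
print(torch.randint(3, 10, (2,)))   # low and high: values drawn from [3, 10)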

einsum

Sum of all elements of a matrix

einsum('ij->', a)   # empty output subscript: every index is summed out
einsum('ijk->', c)

Trace of a matrix

einsum('ii', a)

Diagonal of a matrix as a vector

einsum('ii->i', a)

Outer product of two vectors

einsum('i,j->ij', x, y)

Matrix multiplication

einsum('ij,jk->ik', a, b)

Element-wise multiplication of two matrices

einsum('ij,ij->ij', a, d)

Transpose (swap the last two axes)

einsum('ijk->ikj', c)
einsum('...jk->...kj', c)  # the two forms are equivalent

Bilinear transformation

A = torch.randn(3,5,4)
l = torch.randn(2,5)
r = torch.randn(2,4)
torch.einsum('bn,anm,bm->ba', l, A, r)

The key idea of einsum: dimensions labeled with the same subscript are matched one-to-one across operands; any subscript that does not appear on the right of -> is summed over.
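
A few of the recipes above, runnable (values chosen for illustration):

import torch

a = torch.arange(9.).reshape(3, 3)
print(torch.einsum('ij->', a))    # sum of all elements: tensor(36.)
print(torch.einsum('ii', a))      # trace: 0 + 4 + 8 = tensor(12.)
print(torch.einsum('ii->i', a))   # diagonal: tensor([0., 4., 8.])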

argpartition

np.argpartition(a, 4)
a is an array. This call places the index of the 5th-smallest element of a at position 4 of the result; the earlier positions hold indices of elements no larger than it, and the later positions hold indices of elements no smaller than it.
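
A minimal sketch (array values chosen for illustration):

import numpy as np

a = np.array([9, 4, 7, 3, 8, 1, 5])
idx = np.argpartition(a, 4)
print(a[idx[4]])    # 7, the 5th-smallest value
print(a[idx[:4]])   # the four smaller values, in arbitrary order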

pad

For example, with input.size() = [3, 3, 4, 2] and pad = (1, 1), the last dimension is padded by 1 at the front and 1 at the back, giving a result of size [3, 3, 4, 4].
With input.size() = [3, 3, 4, 2] and pad = (0, 1, 2, 1, 3, 3), the last three dimensions are padded: the last by 0 at the front and 1 at the back, the third by 2 at the front and 1 at the back, and the second by 3 at the front and 3 at the back, giving a result of size [3, 9, 7, 3].
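
Both examples, checked with torch.nn.functional.pad:

import torch
import torch.nn.functional as F

x = torch.randn(3, 3, 4, 2)
print(F.pad(x, (1, 1)).shape)               # torch.Size([3, 3, 4, 4])
print(F.pad(x, (0, 1, 2, 1, 3, 3)).shape)   # torch.Size([3, 9, 7, 3])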
