PyTorch Tutorial 6 (the torch.nn.functional module)

1. Convolution Functions

torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 1D, 2D, or 3D convolution over an input signal composed of several input planes.

Parameters:

- input – input tensor of shape (minibatch x in_channels x iT x iH x iW)
- weight – filter tensor of shape (out_channels x in_channels x kT x kH x kW)
- bias – optional bias tensor of shape (out_channels)
- stride – the stride of the convolving kernel; can be a single number or a tuple (sh x sw). Default: 1
- padding – implicit zero padding on the input; can be a single number or a tuple. Default: 0
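As a minimal sketch of the 2D case, the shapes below are chosen for illustration: a weight tensor built by hand in the (out_channels, in_channels, kH, kW) layout is applied directly with conv2d.

```python
import torch
import torch.nn.functional as F

# A batch of 1 image with 3 input channels and 8x8 spatial size
inputs = torch.randn(1, 3, 8, 8)
# 16 output channels, 3 input channels, 3x3 kernel
weight = torch.randn(16, 3, 3, 3)
bias = torch.randn(16)

# With a 3x3 kernel, stride=1 and padding=1 preserve the spatial size
out = F.conv2d(inputs, weight, bias, stride=1, padding=1)
assert out.shape == (1, 16, 8, 8)
```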

torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1)

torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1)

torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1)

Applies a 1D, 2D, or 3D transposed convolution, sometimes also called a "deconvolution", over an input image composed of several input planes.

Parameters:

- input – input tensor of shape (minibatch x in_channels x iT x iH x iW)
- weight – filter tensor of shape (in_channels x out_channels x kH x kW)
- bias – optional bias tensor of shape (out_channels)
- stride – the stride of the convolving kernel; can be a single number or a tuple (sh x sw). Default: 1
- padding – implicit zero padding on the input; can be a single number or a tuple (padh x padw). Default: 0
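A sketch of conv_transpose2d with illustrative shapes; note the weight layout is reversed relative to conv2d, (in_channels, out_channels, kH, kW), and stride=2 roughly doubles the spatial resolution:

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(1, 16, 4, 4)
# Transposed-convolution weight layout: (in_channels, out_channels, kH, kW)
weight = torch.randn(16, 8, 2, 2)

# Output size per dim: (H - 1) * stride - 2 * padding + kH = 3*2 + 2 = 8
out = F.conv_transpose2d(inputs, weight, stride=2)
assert out.shape == (1, 8, 8, 8)
```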

2. Pooling Functions

torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

torch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)

torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)

torch.nn.functional.max_pool3d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)

torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)

torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)

torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)

torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)

torch.nn.functional.adaptive_max_pool1d(input, output_size, return_indices=False)

torch.nn.functional.adaptive_avg_pool2d(input, output_size)
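A short sketch tying three of the pooling variants together: max_pool2d can return the argmax indices, which is exactly what max_unpool2d needs to place values back at the original resolution, while the adaptive variants take a target output size instead of a kernel size.

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(1, 3, 8, 8)

# 2x2 max pooling halves each spatial dimension
pooled, indices = F.max_pool2d(inputs, kernel_size=2, return_indices=True)
assert pooled.shape == (1, 3, 4, 4)

# The saved indices let max_unpool2d restore the original resolution
# (non-maximal positions are filled with zeros)
restored = F.max_unpool2d(pooled, indices, kernel_size=2)
assert restored.shape == (1, 3, 8, 8)

# Adaptive pooling specifies the output size rather than the kernel size
adaptive = F.adaptive_avg_pool2d(inputs, output_size=(2, 2))
assert adaptive.shape == (1, 3, 2, 2)
```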

3. Non-linear Activation Functions

torch.nn.functional.threshold(input, threshold, value, inplace=False)

torch.nn.functional.relu(input, inplace=False)

torch.nn.functional.hardtanh(input, min_val=-1.0, max_val=1.0, inplace=False)

torch.nn.functional.relu6(input, inplace=False)

torch.nn.functional.elu(input, alpha=1.0, inplace=False)

torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False)

torch.nn.functional.prelu(input, weight)

torch.nn.functional.rrelu(input, lower=0.125, upper=0.3333333333333333, training=False, inplace=False)

torch.nn.functional.logsigmoid(input)

torch.nn.functional.hardshrink(input, lambd=0.5)

torch.nn.functional.tanhshrink(input)

torch.nn.functional.softsign(input)

torch.nn.functional.softplus(input, beta=1, threshold=20)

torch.nn.functional.softmin(input)

torch.nn.functional.softmax(input)

torch.nn.functional.softshrink(input, lambd=0.5)

torch.nn.functional.log_softmax(input)

torch.nn.functional.tanh(input)

torch.nn.functional.sigmoid(input)
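A few of these activations on a small hand-picked tensor, as a sketch. One caveat: the signatures above are from an older release; in current PyTorch, softmax and log_softmax expect an explicit dim= argument, which is used below.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

# relu zeroes the negative entries
assert torch.equal(F.relu(x), torch.tensor([0.0, 0.0, 0.0, 0.5, 2.0]))

# hardtanh clamps values to [min_val, max_val]
assert torch.equal(F.hardtanh(x, min_val=-1.0, max_val=1.0),
                   torch.tensor([-1.0, -0.5, 0.0, 0.5, 1.0]))

# softmax produces a probability distribution along the given dim
probs = F.softmax(x, dim=0)
assert abs(probs.sum().item() - 1.0) < 1e-6
```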

4. Normalization Functions

torch.nn.functional.batch_norm(input,running_mean,running_var,weight=None,bias=None,training=False,momentum=0.1, eps=1e-05)
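A sketch of calling batch_norm directly with externally managed running statistics (shapes are illustrative); with training=True the batch statistics are used and the running estimates are updated in place with the given momentum:

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(4, 3, 8, 8)  # (N, C, H, W)
running_mean = torch.zeros(3)     # one estimate per channel
running_var = torch.ones(3)

# training=True: normalize with batch statistics and update the
# running estimates in place using the given momentum
out = F.batch_norm(inputs, running_mean, running_var,
                   training=True, momentum=0.1, eps=1e-5)
assert out.shape == inputs.shape
```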

5. Linear Functions

torch.nn.functional.linear(input, weight, bias=None)
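As a sketch, linear computes x @ weight.T + bias; note the (out_features, in_features) weight layout:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)       # batch of 4 samples with 10 features
weight = torch.randn(5, 10)  # (out_features, in_features)
bias = torch.zeros(5)

out = F.linear(x, weight, bias)
assert out.shape == (4, 5)
assert torch.allclose(out, x @ weight.t() + bias)
```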

6. Dropout Functions

torch.nn.functional.dropout(input, p=0.5, training=False, inplace=False)
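A sketch showing the two modes: with training=True each element is zeroed with probability p and survivors are scaled by 1/(1-p) so the expected value is unchanged; with training=False dropout is the identity.

```python
import torch
import torch.nn.functional as F

x = torch.ones(1000)

# Training mode: elements are either dropped (0) or scaled by 1/(1-0.5) = 2
y = F.dropout(x, p=0.5, training=True)
assert ((y == 0.0) | (y == 2.0)).all()

# Eval mode: the input passes through unchanged
assert torch.equal(F.dropout(x, p=0.5, training=False), x)
```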

7. Distance Functions

torch.nn.functional.pairwise_distance(x1, x2, p=2, eps=1e-06)
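A sketch of pairwise_distance, which computes the row-wise p-norm distance between two batches of vectors (a small eps is added internally for numerical stability, so the result is approximate):

```python
import torch
import torch.nn.functional as F

x1 = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
x2 = torch.tensor([[3.0, 4.0], [1.0, 1.0]])

# Euclidean (p=2) distance per row: a classic 3-4-5 triangle, then zero
d = F.pairwise_distance(x1, x2, p=2)
assert torch.allclose(d[0], torch.tensor(5.0), atol=1e-3)
assert d[1].item() < 1e-4
```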

8. Loss Functions

torch.nn.functional.nll_loss(input, target, weight=None, size_average=True)

torch.nn.functional.cross_entropy(input, target, weight=None, size_average=True)

torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=True)

torch.nn.functional.smooth_l1_loss(input, target, size_average=True)
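A sketch relating the classification losses: cross_entropy is equivalent to log_softmax followed by nll_loss, and binary_cross_entropy expects probabilities already in [0, 1]. (The size_average argument listed above is from an older API; current PyTorch uses reduction= instead, and the defaults below average over the batch either way.)

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)            # 4 samples, 3 classes
target = torch.tensor([0, 2, 1, 0])   # class index per sample

# cross_entropy == log_softmax + nll_loss
ce = F.cross_entropy(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
assert torch.allclose(ce, nll)

# binary_cross_entropy takes probabilities, not logits
probs = torch.sigmoid(torch.randn(4))
bce = F.binary_cross_entropy(probs, torch.tensor([1.0, 0.0, 1.0, 0.0]))
assert bce.item() >= 0.0
```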

9. Vision Functions

torch.nn.functional.pixel_shuffle(input, upscale_factor)

Rearranges a tensor of shape [*, C*r^2, H, W] into a tensor of shape [*, C, H*r, W*r].

Parameters:

- input (Variable) – the input tensor
- upscale_factor (int) – factor by which to increase the spatial resolution
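A sketch with upscale_factor r = 2: the C*r^2 = 16 input channels are folded into space, trading channels for resolution.

```python
import torch
import torch.nn.functional as F

r = 2  # upscale factor
inputs = torch.randn(1, 4 * r * r, 3, 3)  # C*r^2 = 16 channels

# (1, 16, 3, 3) -> (1, 4, 6, 6): channel blocks become spatial detail
out = F.pixel_shuffle(inputs, upscale_factor=r)
assert out.shape == (1, 4, 6, 6)
```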


torch.nn.functional.pad(input, pad, mode='constant', value=0)

Pads a tensor.

Currently only 2D and 3D padding is supported. When the input is a 4D tensor, pad should be a 4-element tuple (pad_l, pad_r, pad_t, pad_b); when the input is a 5D tensor, pad should be a 6-element tuple (pad_left, pad_right, pad_top, pad_bottom, pad_front, pad_back).
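A sketch of the 4D case: the pad tuple applies to the last dimensions first, so (left, right) pads the width and (top, bottom) pads the height.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 4, 4)  # 4D input -> 4-element pad tuple

# (pad_left, pad_right, pad_top, pad_bottom)
y = F.pad(x, (1, 1, 2, 2), mode='constant', value=0)
assert y.shape == (1, 3, 8, 6)  # H: 4+2+2 = 8, W: 4+1+1 = 6
```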

