An introduction to class torch.autograd.Function

Reference: torch.autograd.Function
Reference: Extending PyTorch
Reference: subclassing torch.autograd.Function to define custom operations and their backward (gradient) formulas


Original documentation (translated):

Function
torch.autograd.Function
Records operation history and defines formulas for differentiating ops.
Every operation performed on Tensors creates a new Function object that performs the computation and records that it happened. The history is retained in the form of a DAG of Functions, with edges denoting data dependencies (input <- output). Then, when backward is called, the graph is processed in topological order by calling the backward() method of each Function object and passing the returned gradients on to the next Functions.
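To make this DAG concrete, here is a minimal sketch using only standard torch operations; the `grad_fn` attribute of each result tensor points at the Function node that produced it, and `next_functions` holds the DAG edges:

```python
import torch

# Each op on a requires-grad tensor records a Function node in the graph.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x.exp()   # records a node for the exp op
z = y.sum()   # records a node for the sum op

# grad_fn is the Function that produced this tensor; next_functions
# are the edges of the DAG (pointing from output toward input).
print(type(z.grad_fn).__name__)        # e.g. SumBackward0
print(z.grad_fn.next_functions[0][0])  # the node recorded by exp()

# backward() walks this DAG in topological order and accumulates
# gradients into the leaf tensors.
z.backward()
print(x.grad)  # d(sum(exp(x)))/dx = exp(x)
```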
Normally, the only way users interact with Functions is by creating subclasses and defining new operations. This is the recommended way of extending torch.autograd.
Each Function object is meant to be used only once (in the forward pass).

Example:

>>> from torch.autograd import Function
>>>
>>> class Exp(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, i):
>>>         # Compute exp(i) and save the result for the backward pass.
>>>         result = i.exp()
>>>         ctx.save_for_backward(result)
>>>         return result
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         # d/dx exp(x) = exp(x), so reuse the saved forward result.
>>>         result, = ctx.saved_tensors
>>>         return grad_output * result
>>>
>>> # Use it by calling the apply method, never by calling forward() directly:
>>> output = Exp.apply(input)
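A quick way to sanity-check a custom Function like the one above is torch.autograd.gradcheck, which compares the analytical backward() against numerical finite-difference gradients (it requires double-precision inputs):

```python
import torch
from torch.autograd import Function

class Exp(Function):
    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result

# gradcheck perturbs each input element numerically and compares the
# result with the gradients produced by backward(); double precision
# keeps the finite-difference error small.
x = torch.randn(3, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(Exp.apply, (x,)))  # True if backward is correct
```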
static backward(ctx, *grad_outputs)
	Defines a formula for differentiating the operation.
	This function is to be overridden by all subclasses.
	It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
	The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs a gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.
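The input/output correspondence and ctx.needs_input_grad can be illustrated with a hypothetical two-input multiply Function (a sketch for illustration, not the built-in multiply implementation):

```python
import torch
from torch.autograd import Function

class Mul(Function):
    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a * b

    @staticmethod
    def backward(ctx, grad_output):
        # forward() took two inputs, so backward() must return two values.
        a, b = ctx.saved_tensors
        grad_a = grad_b = None
        # Skip the work (and return None) for inputs that need no gradient.
        if ctx.needs_input_grad[0]:
            grad_a = grad_output * b
        if ctx.needs_input_grad[1]:
            grad_b = grad_output * a
        return grad_a, grad_b

a = torch.tensor(3.0, requires_grad=True)
b = torch.tensor(4.0)  # requires_grad is False, so needs_input_grad[1] is False
Mul.apply(a, b).backward()
print(a.grad)  # d(a*b)/da = b = 4.0
```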
	

static forward(ctx, *args, **kwargs)
	Performs the operation.
	This function is to be overridden by all subclasses.
	It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
	The context can be used to store tensors that can then be retrieved during the backward pass.
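Since forward() may take non-tensor arguments, non-tensor state can be stashed directly on ctx as an ordinary attribute, while tensors go through save_for_backward. A sketch with a hypothetical scaling op:

```python
import torch
from torch.autograd import Function

class Scale(Function):
    @staticmethod
    def forward(ctx, x, factor):
        # 'factor' is a plain Python float: store it on ctx directly.
        # Tensors must be saved with ctx.save_for_backward() instead.
        ctx.factor = factor
        return x * factor

    @staticmethod
    def backward(ctx, grad_output):
        # One return value per forward() input; None for the non-tensor arg.
        return grad_output * ctx.factor, None

x = torch.tensor([1.0, 2.0], requires_grad=True)
Scale.apply(x, 2.5).sum().backward()
print(x.grad)  # d(2.5*x)/dx = 2.5 for every element
```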
