How does PyTorch compute derivatives? How to compute partial derivatives with PyTorch?

This article discusses a problem encountered when computing partial derivatives with PyTorch. A neural network is trained to fit the function Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5, but the values of dYdx1 and dYdx2 obtained from it are incorrect. The likely cause is that the network has not fit the original function accurately, so its internal representation actually models a different function. Applying autograd to the original function directly yields the exact partial derivatives.

I want to use PyTorch to get the partial derivatives between output and input. Suppose I have a function Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5, and I train a network to replace this function, then I use autograd to calculate dYdx1, dYdx2:

import torch

net = torch.load('net_723.pkl')

# Build the tensor as float directly: an integer tensor cannot have
# requires_grad=True, and calling .type(...) afterwards would return a
# non-leaf tensor whose gradient is not retained.
x = torch.tensor([[1., -1.]], requires_grad=True)

y = net(x)

grad_c = torch.autograd.grad(y, x, create_graph=True, retain_graph=True)[0]

Then I get a wrong derivative as:

>>>tensor([[ 7.5583, -5.3173]])

but when I apply autograd to the function directly, I get the right answer:

Y = 5*x[0,0]**4 + 3*x[0,1]**3 + 7*x[0,0]**2 + 9*x[0,1] - 5

grad_c = torch.autograd.grad(Y,x,create_graph=True,retain_graph=True)[0]

>>>tensor([[ 34., 18.]])

Why does this happen?

Solution

A neural network is a universal function approximator. What that means is that, given enough computational resources, training time, nodes, etc., you can approximate any function.

Without any further information on how you trained your network in the first example, I would suspect that your network simply does not fit the underlying function properly, meaning that its internal representation actually models a different function!
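One quick way to diagnose this, before looking at gradients at all, is to compare the network's output against the true function value at the query point. A minimal pure-Python sketch (the function `f` below simply restates the polynomial from the question; the trained `net` is not needed to compute the reference value):

```python
def f(x1, x2):
    # Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5
    return 5 * x1**4 + 3 * x2**3 + 7 * x1**2 + 9 * x2 - 5

# True value at the query point (1, -1): 5 - 3 + 7 - 9 - 5
y_true = f(1.0, -1.0)
print(y_true)  # -5.0
```

If `net(x)` at the same point is far from -5, the network has not fit the function there, and its gradients cannot be expected to match either.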

For the second code snippet, automatic differentiation does give you the exact partial derivative. It does so via a different method; see another one of my answers on SO on the topic of AutoDiff/Autograd specifically.
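The analytic partials are dY/dx1 = 20*x1^3 + 14*x1 and dY/dx2 = 9*x2^2 + 9, which at (1, -1) evaluate to 34 and 18, matching what autograd returns for the second snippet. As an independent check that does not rely on autograd at all, a central finite-difference sketch (pure Python, restating the polynomial from the question):

```python
def f(x1, x2):
    return 5 * x1**4 + 3 * x2**3 + 7 * x1**2 + 9 * x2 - 5

h = 1e-5
x1, x2 = 1.0, -1.0

# Central differences: accurate to O(h^2), so these approximate
# the exact partials 34 and 18 very closely.
dYdx1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
dYdx2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
print(dYdx1, dYdx2)  # approximately 34 and 18
```

Unlike finite differences, autograd computes these derivatives exactly (up to floating-point rounding) by propagating derivative rules through the computation graph, which is why the second snippet's result is trustworthy.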
