PyTorch Neural Networks
Sigmoid: torch.sigmoid(a) squashes values into (0, 1); the gradient shrinks toward 0 as the input saturates.
ReLU: torch.relu(a) passes positive inputs through unchanged, so the gradient stays a constant 1 there (and is 0 for negative inputs).
The functional API provides the same activations:
from torch.nn import functional as F
F.relu(a)
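A quick check of both activations and the ReLU gradient (a minimal sketch; the input values are arbitrary):

import torch
from torch.nn import functional as F

a = torch.tensor([-1.0, 0.5, 2.0], requires_grad=True)
torch.sigmoid(a)        # => tensor([0.2689, 0.6225, 0.8808], ...) -- each value in (0, 1)
out = F.relu(a)         # => tensor([0.0000, 0.5000, 2.0000], ...)
out.sum().backward()    # sum() makes a scalar, so backward() needs no extra arguments
a.grad                  # => tensor([0., 1., 1.]) -- gradient is exactly 1 where a > 0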
Typical losses: Mean Squared Error (MSE) and Cross-Entropy Loss.
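MSE is walked through below; cross-entropy is not, so here is a minimal sketch (the logits and labels are made-up values):

import torch
from torch.nn import functional as F

logits = torch.randn(4, 10)              # 4 samples, 10 classes
labels = torch.tensor([1, 0, 4, 9])      # ground-truth class indices
loss = F.cross_entropy(logits, labels)   # applies log_softmax then nll_loss internally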
MSE
torch.autograd.grad(loss, [w]) returns the gradients of a scalar output with respect to the listed tensors.
A tensor must be marked as needing gradient information first, via the in-place method tensor.requires_grad_() (a tensor method, not torch.requires_grad_).
x = torch.ones(1)
w = torch.full([1], 2.0)              # float tensor: only floating-point tensors can require gradients
mse = F.mse_loss(torch.ones(1), x*w)  # graph built before w requires grad -- gradients cannot flow yet
w.requires_grad_()
mse = F.mse_loss(torch.ones(1), x*w)  # rebuild the graph now that w requires grad
torch.autograd.grad(mse, [w])         # => (tensor([2.]),)

The alternative API, backward(), writes the gradient into each leaf tensor's .grad attribute. The graph was freed by the grad() call above, so rebuild it first:

mse = F.mse_loss(torch.ones(1), x*w)
mse.backward()
w.grad                                # => tensor([2.])
Note that loss.backward() accumulates gradients into .grad rather than overwriting them.
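Continuing the example above, two backward() calls double the stored gradient, which is why training loops zero gradients every step:

w.grad.zero_()                        # clear the stale gradient first
mse = F.mse_loss(torch.ones(1), x*w)
mse.backward()
mse = F.mse_loss(torch.ones(1), x*w)
mse.backward()
w.grad                                # => tensor([4.]) -- the two calls accumulated 2. + 2.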
What softmax does: values that were already large end up with an even larger share of the output probability, while small values are compressed into a small range.
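A quick numeric illustration (the input values are arbitrary):

F.softmax(torch.tensor([1.0, 2.0, 3.0]), dim=0)
# => tensor([0.0900, 0.2447, 0.6652]) -- the largest input takes most of the probability mass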
a = torch.rand(3)
a.requires_grad_()           # => tensor([...], requires_grad=True)
p = F.softmax(a, dim=0)
p.backward()                 # ERROR: p is a vector; backward() without arguments needs a scalar
p = F.softmax(a, dim=0)      # rebuild and differentiate one output at a time instead
torch.autograd.grad(p[1], [a], retain_graph=True)   # retain_graph keeps the graph for further grad() calls
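Looping over the outputs gives the full softmax Jacobian; a minimal sketch continuing from a and p above:

rows = [torch.autograd.grad(p[i], [a], retain_graph=True)[0] for i in range(3)]
jacobian = torch.stack(rows)   # 3x3 matrix dp_i/da_j: p_i*(1-p_i) on the diagonal, -p_i*p_j off it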
for idx, data in enumerate(train_loader):
    xs, ys = data            # unpack one batch of inputs and labels
    pred1 = model1(xs)       # run the same batch through two models
    pred2 = model2(xs)
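A complete training step for one of the models might look like this (a minimal sketch: the criterion and optimizer choices are assumptions, not from the original):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                          # assumed classification loss
optimizer = torch.optim.SGD(model1.parameters(), lr=0.01)  # assumed optimizer and learning rate

for idx, data in enumerate(train_loader):
    xs, ys = data
    optimizer.zero_grad()        # clear gradients accumulated from the previous step
    pred1 = model1(xs)
    loss = criterion(pred1, ys)
    loss.backward()              # populate .grad on every parameter of model1
    optimizer.step()             # apply the gradient update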