PyTorch Basic Operations, Part 2



Notes from 龙良曲's lectures

PyTorch Study Notes

GPU

刘二大人

model = Net()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs, target = data
inputs, target = inputs.to(device), target.to(device)

龙良曲

device = torch.device('cuda:0')
net = MLP().to(device)
criteon = nn.CrossEntropyLoss().to(device)
data, target = data.to(device), target.to(device)
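Both versions follow the same device-agnostic pattern: create a device, move the model (and loss, if it has state) onto it, then move every batch onto the same device. A minimal runnable sketch, using a toy linear model as a stand-in for `Net()`/`MLP()`:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model standing in for Net() / MLP()
model = nn.Linear(4, 2).to(device)
criteon = nn.CrossEntropyLoss().to(device)

# Each batch must live on the same device as the model
data = torch.randn(8, 4).to(device)
target = torch.randint(0, 2, (8,)).to(device)

loss = criteon(model(data), target)
print(loss.item())
```

The key point is that tensors and modules must be on the same device before they interact; mixing CPU and GPU tensors raises a runtime error.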

Softmax

$S(y_i) = \frac{e^{y_i}}{\sum_{j} e^{y_j}}$
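As a concrete check of the formula on a small example vector:

```python
import math

y = [1.0, 2.0, 3.0]
exps = [math.exp(v) for v in y]
total = sum(exps)
S = [e / total for e in exps]  # S(y_i) = e^{y_i} / sum_j e^{y_j}
print(S)
```

By construction the outputs are positive and sum to 1, which is why softmax is used to turn logits into a probability distribution.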

Derivative

$\frac{\partial P_i}{\partial a_j} = p_i(1 - p_j) \quad \text{if } i = j$

$\frac{\partial P_i}{\partial a_j} = -p_i p_j \quad \text{if } i \neq j$

import torch
import torch.nn.functional as F

torch.manual_seed(123)
a = torch.rand(3)
a.requires_grad_()
p = F.softmax(a, dim=0)
# retain_graph=True keeps the graph alive so grad can be called again
print(torch.autograd.grad(p[0], a, retain_graph=True))
print(torch.autograd.grad(p[1], a, retain_graph=True))
print(torch.autograd.grad(p[2], a))
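The gradients printed above can be checked against the analytic formulas: row $i$ of the softmax Jacobian is $p_i(1 - p_i)$ on the diagonal and $-p_i p_j$ off it. A sketch of that comparison:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(123)
a = torch.rand(3, requires_grad=True)
p = F.softmax(a, dim=0)

max_err = 0.0
for i in range(3):
    (g,) = torch.autograd.grad(p[i], a, retain_graph=True)
    # analytic: dp_i/da_j = p_i(1 - p_j) if i == j else -p_i * p_j
    analytic = torch.stack([p[i] * (1 - p[j]) if i == j else -p[i] * p[j]
                            for j in range(3)]).detach()
    max_err = max(max_err, (g - analytic).abs().max().item())
print(max_err)  # should be ~0
```

If the two disagreed beyond floating-point noise, either the formula or the autograd call would be wrong; here they match.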

nn.ReLU vs. F.relu()

nn.ReLU — class-style API
F.relu — function-style API

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 784)
layer1 = nn.Linear(784, 200)
x = layer1(x)
x = F.relu(x, inplace=True)  # function-style
layer = nn.ReLU()
x = layer(x)                 # class-style
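The two APIs compute the same function; the class form is convenient inside `nn.Sequential`, while the functional form is handy inside `forward()`. A quick equivalence check:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
h = torch.randn(1, 200)

out_fn = F.relu(h)        # function-style
out_cls = nn.ReLU()(h)    # class-style
print(torch.equal(out_fn, out_cls))
```

Note that `inplace=True` overwrites the input tensor to save memory; the non-inplace form used here returns a new tensor, which is safer when the input is needed later.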

MLP

inherit from nn.Module
initialize layers in __init__
implement forward()

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()

        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.model(x)
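To sanity-check the shapes, the class can be exercised on a random batch; the definition is repeated here so the snippet runs standalone:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200), nn.ReLU(inplace=True),
            nn.Linear(200, 200), nn.ReLU(inplace=True),
            nn.Linear(200, 10), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.model(x)

net = MLP()
x = torch.randn(4, 784)   # batch of 4 flattened 28x28 images
logits = net(x)
print(logits.shape)       # torch.Size([4, 10])
```

Each sample maps 784 input features to 10 class scores, one per MNIST digit.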

Accuracy

test once per epoch

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)                  # flatten 28x28 images
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.argmax(dim=1)                    # index of the max logit
        correct += pred.eq(target).sum().item()

    test_loss /= len(test_loader.dataset)
    correct_rate = correct / len(test_loader.dataset)
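The accuracy computation itself can be exercised on dummy logits and labels, without a data loader; a small sketch:

```python
import torch

# Dummy batch: 4 samples, 10 classes
logits = torch.tensor([[2.0, 0.1] + [0.0] * 8,
                       [0.1, 3.0] + [0.0] * 8,
                       [0.5, 0.2] + [0.0] * 8,
                       [0.0, 1.0] + [0.0] * 8])
target = torch.tensor([0, 1, 1, 1])  # sample 2 is deliberately misclassified

pred = logits.argmax(dim=1)              # predicted class per sample
correct = pred.eq(target).sum().item()   # number of correct predictions
accuracy = correct / len(target)
print(accuracy)  # 0.75
```

Three of the four predictions match the targets, giving 75% accuracy; the real test loop accumulates the same `correct` counter over all batches.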