Using PyTorch classification networks


1. Basic framework

https://github.com/chenyuntc/pytorch-best-practice
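That repo's main idea is to keep all hyperparameters in a plain config class whose attributes can be overridden at the command line. A minimal sketch of the pattern (the class name, attribute names, and defaults here are illustrative, not copied from the repo):

```python
import warnings

class DefaultConfig:
    """Hyperparameters as plain class attributes; override via parse(**kwargs)."""
    model = "ResNet34"
    batch_size = 32
    lr = 0.001
    max_epoch = 10

    def parse(self, **kwargs):
        # Update only known attributes; warn on typos instead of silently adding them.
        for k, v in kwargs.items():
            if not hasattr(self, k):
                warnings.warn("unknown option: %s" % k)
                continue
            setattr(self, k, v)
        return self

opt = DefaultConfig().parse(lr=0.01, batch_size=64)
print(opt.lr, opt.batch_size)  # overridden values; opt.model keeps its default
```

Because the options live on one object, every script in the project can read `opt.lr` instead of re-parsing argv.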

2. Logging with TNT

https://github.com/pytorch/tnt
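torchnet provides "meters" for tracking training statistics, e.g. a running mean of the loss across batches. A pure-Python sketch of that meter interface (method names follow tnt's `AverageValueMeter`; tnt's real meter also tracks the standard deviation, which this sketch omits):

```python
class AverageValueMeter:
    """Tracks the running mean of values added with add(); mimics tnt's meter API."""
    def __init__(self):
        self.reset()

    def reset(self):
        # Called at the start of every epoch to clear the statistics.
        self.sum = 0.0
        self.n = 0

    def add(self, value, n=1):
        self.sum += value * n
        self.n += n

    def value(self):
        # tnt returns (mean, std); this sketch keeps only the mean.
        return self.sum / self.n if self.n else float("nan")

meter = AverageValueMeter()
for loss in (0.9, 0.7, 0.5):   # e.g. per-batch loss values
    meter.add(loss)
print(meter.value())  # running mean of the three losses
```

In a training loop you would call `meter.reset()` at the top of each epoch, `meter.add(loss.item())` after each batch, and log `meter.value()` at the end.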

3. Loss functions

>>> import torch
>>> import torch.nn as nn
>>> from torch.autograd import Variable  # Variable is merged into Tensor in PyTorch >= 0.4
>>> loss = nn.CrossEntropyLoss()
>>> input = Variable(torch.randn(3, 5), requires_grad=True)  # (N=3, C=5) raw scores (logits)
>>> target = Variable(torch.LongTensor(3).random_(5))        # N class indices in [0, 5)
>>> output = loss(input, target)
>>> output.backward()
>>> output
Variable containing:
 1.5625
[torch.FloatTensor of size 1]
>>> target.shape
torch.Size([3])
>>> target
Variable containing:
 1
 2
 0
[torch.LongTensor of size 3]
>>> input
Variable containing:
-1.6205 -0.8016  0.2011 -0.2135 -0.5659
-1.1081 -0.3051  1.7567  0.7664  0.9951
-0.1458  1.0557  0.5628 -0.2360 -2.0300
[torch.FloatTensor of size 3x5]
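The scalar above is just the mean negative log-softmax probability of each row's target class. A dependency-free sketch of the same computation, using the numerically stable log-sum-exp shift:

```python
import math

def cross_entropy(logits, targets):
    """Mean of -log softmax(logits[i])[targets[i]] over the batch."""
    total = 0.0
    for row, t in zip(logits, targets):
        m = max(row)                                     # shift for numerical stability
        log_sum_exp = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_sum_exp - row[t]                    # equals -log softmax(row)[t]
    return total / len(targets)

# Sanity check: uniform logits over C classes give a loss of log(C).
print(cross_entropy([[0.0, 0.0, 0.0, 0.0, 0.0]], [2]))  # ~log(5) ≈ 1.609
```

This also makes clear why `input` must be raw logits, not probabilities: `nn.CrossEntropyLoss` applies the log-softmax itself.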

CrossEntropyLoss parameters and usage

Args:
    weight (Tensor, optional): a manual rescaling weight given to each class.
       If given, has to be a Tensor of size "C"
    size_average (bool, optional): By default, the losses are averaged over observations for each minibatch.
       However, if the field size_average is set to ``False``, the losses are
       instead summed for each minibatch. Ignored if reduce is ``False``.
    ignore_index (int, optional): Specifies a target value that is ignored
        and does not contribute to the input gradient. When size_average is
        ``True``, the loss is averaged over non-ignored targets.
    reduce (bool, optional): By default, the losses are averaged or summed over
        observations for each minibatch depending on size_average. When reduce
        is ``False``, returns a loss per batch element instead and ignores
        size_average. Default: ``True``

Shape:
    - Input: :math:`(N, C)` where `C = number of classes`
    - Target: :math:`(N)` where each value is `0 <= targets[i] <= C-1`
    - Output: scalar. If reduce is ``False``, then :math:`(N)` instead.

Examples::

    >>> loss = nn.CrossEntropyLoss()
    >>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
    >>> target = autograd.Variable(torch.LongTensor(3).random_(5))
    >>> output = loss(input, target)
    >>> output.backward()
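The `size_average` and `reduce` flags only change how the per-sample losses are combined into the return value. In terms of the unreduced losses, the three behaviors look like this (pure-Python sketch; `per_sample` stands in for the per-element loss values):

```python
per_sample = [1.2, 0.3, 0.9]  # illustrative unreduced losses, one per batch element

mean_loss = sum(per_sample) / len(per_sample)  # default: size_average=True, reduce=True
sum_loss = sum(per_sample)                     # size_average=False, reduce=True
unreduced = per_sample                         # reduce=False: one loss per element

print(mean_loss, sum_loss, unreduced)
```

(In PyTorch >= 0.4 these two flags were folded into a single `reduction` argument taking `'mean'`, `'sum'`, or `'none'`.)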

4. Official examples

https://github.com/pytorch/vision
