sparse_softmax_cross_entropy_with_logits

This article takes a close look at TensorFlow's `sparse_softmax_cross_entropy_with_logits` function: how it works, how to call it, and the role it plays in loss computation, with a worked example to help you apply it during model training.

1. tf.nn.sparse_softmax_cross_entropy_with_logits


TensorFlow help info:

# y_ : one-hot labels, y_.shape = [batch_size, num_classes]; tf.argmax(y_, 1).shape = [batch_size]
# y  : predicted logits, y.shape = [batch_size, num_classes]
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(y_, 1), logits=y)
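
If the labels are already stored as integer class indices rather than one-hot vectors, the `tf.argmax` conversion above is unnecessary. Below is a minimal sketch with made-up values, assuming the TensorFlow 1.x session API:

    import tensorflow as tf

    # Hypothetical batch of 3 samples and 4 classes.
    logits = tf.constant([[2.0, 1.0, 0.1, 0.3],
                          [0.5, 2.5, 0.2, 0.1],
                          [1.2, 0.3, 3.1, 0.4]])    # shape [3, 4], unscaled scores
    labels = tf.constant([0, 1, 2], dtype=tf.int64)  # shape [3], indices in [0, 4)

    # No tf.argmax needed: labels are already class indices, not one-hot vectors.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    with tf.Session() as sess:
        per_example, mean_loss = sess.run([loss, tf.reduce_mean(loss)])
        print(per_example.shape)   # (3,) -- same shape as labels
        print(mean_loss)           # scalar usually passed to an optimizer

The returned loss has one entry per example; reducing it with `tf.reduce_mean` gives the scalar that is normally fed to an optimizer.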

Help on function sparse_softmax_cross_entropy_with_logits in module tensorflow.python.ops.nn_ops:

sparse_softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)
    Computes sparse softmax cross entropy between `logits` and `labels`.
    
    Measures the probability error in discrete classification tasks in which the
    classes are mutually exclusive (each entry is in exactly one class).  For
    example, each CIFAR-10 image is labeled with one and only one label: an image
    can be a dog or a truck, but not both.
    
    **NOTE:**  For this operation, the probability of a given label is considered
    exclusive.  That is, soft classes are not allowed, and the `labels` vector
    must provide a single specific index for the true class for each row of
    `logits` (each minibatch entry).  For soft softmax classification with
    a probability distribution for each entry, see
    `softmax_cross_entropy_with_logits`.
    
    **WARNING:** This op expects unscaled logits, since it performs a `softmax`
    on `logits` internally for efficiency.  Do not call this op with the
    output of `softmax`, as it will produce incorrect results.
    
    A common use case is to have logits of shape `[batch_size, num_classes]` and
    labels of shape `[batch_size]`. But higher dimensions are supported.
    Args:
      _sentinel: Used to prevent positional parameters. Internal, do not use.
      labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of
        `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`
        must be an index in `[0, num_classes)`. Other values will raise an
        exception when this op is run on CPU, and return `NaN` for corresponding
        loss and gradient rows on GPU.
      logits: Unscaled log probabilities of shape
        `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float32` or `float64`.
      name: A name for the operation (optional).
    
    Returns:
      A `Tensor` of the same shape as `labels` and of the same type as `logits`
      with the softmax cross entropy loss.
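
The note about soft classes and the warning about unscaled logits can be checked directly: a one-hot version of the same labels fed to `softmax_cross_entropy_with_logits` gives the same per-example loss, while passing already-softmaxed values as `logits` does not. A small comparison sketch with hypothetical values, again assuming the TF 1.x session API:

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])            # shape [1, 3], raw scores
    sparse_labels = tf.constant([0], dtype=tf.int32)    # class index
    onehot_labels = tf.one_hot(sparse_labels, depth=3)  # [[1., 0., 0.]]

    sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=sparse_labels, logits=logits)
    dense_loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=onehot_labels, logits=logits)

    # Violates the WARNING above: feeding softmax output instead of raw logits.
    wrong_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=sparse_labels, logits=tf.nn.softmax(logits))

    with tf.Session() as sess:
        print(sess.run([sparse_loss, dense_loss, wrong_loss]))
        # sparse_loss and dense_loss agree; wrong_loss is a different (incorrect) value.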
