Commonly Used Loss Functions in TensorFlow

Loss functions in TensorFlow's tf.nn module:
tf.nn.softmax_cross_entropy_with_logits_v2()
Computes the softmax cross-entropy -sum(labels * log(softmax(logits))) between a label distribution and raw logits; the op applies softmax internally, so the logits must be unscaled scores.
import tensorflow as tf

labels = [[0.2, 0.3, 0.5],
          [0.1, 0.6, 0.3]]
logits = [[2.0, 0.5, 1.0],
          [0.1, 1.0, 3.0]]
logits_scaled = tf.nn.softmax(logits)

result1 = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)  # correct usage
result2 = -tf.reduce_sum(labels * tf.log(logits_scaled), 1)  # computed directly from the cross-entropy definition; matches result1
result3 = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits_scaled)  # wrong usage: the input must be raw logits, not softmax-scaled probabilities

with tf.Session() as sess:
    print(sess.run(result1))
    print(sess.run(result2))
    print(sess.run(result3))
'''
[1.4143689 1.6642545]
[1.4143689 1.6642545]
[1.1718578 1.1757141]
'''
result3 differs from the others because the op applies softmax to its logits argument internally, so feeding already-normalized probabilities applies softmax twice.

tf.nn.sparse_softmax_cross_entropy_with_logits()
import tensorflow as tf

'''
    labels: labels[i] must be an index in [0, num_classes), or -1. If it is -1,
    the corresponding loss is 0 and logits[i] is ignored.
    logits: unscaled log-probabilities
'''
labels = [0, 1, 9]  # labels for three samples: ground truth is class 0, class 1, and class 9 respectively
logits = [[20.90, 10.01, 1.01, 2.01, 3.01, 5.01, 6.01, 4.01, 7.01, 9.02],  # scores (not probabilities); the 1st entry is largest
          [17.2, 28.91, 3.21, 3.21, 7.01, 9.01, 10.01, 5.21, 9.01, 7.71],  # scores (not probabilities); the 2nd entry is largest
          [10.0, 0.0, 40.0, 50.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100.0]]         # scores (not probabilities); the 10th entry is largest

result = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(result))
    '''
    [2.6940936e-05 8.2254073e-06 0.0000000e+00]
    '''
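
As a cross-check, the sparse labels can be converted to one-hot vectors and fed to the dense op from the previous example. A minimal sketch, appended to the snippet above (it reuses its labels and logits):

# Continuation of the snippet above: one-hot cross-check.
onehot_labels = tf.one_hot(labels, depth=10)
result_dense = tf.nn.softmax_cross_entropy_with_logits_v2(labels=onehot_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(result_dense))  # matches the sparse result above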

tf.nn.sigmoid_cross_entropy_with_logits()
Computes element-wise sigmoid cross-entropy. Unlike softmax cross-entropy, the classes are independent rather than mutually exclusive (multi-label classification); labels and logits must have the same shape.
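
A minimal usage sketch in the same Session style as above; the label and logit values are made up for illustration:

import tensorflow as tf

# Multi-label targets: each of the three classes is an independent yes/no decision.
labels = [[1.0, 0.0, 1.0],
          [0.0, 1.0, 0.0]]
logits = [[2.0, -1.0, 0.5],
          [-0.3, 1.5, -2.0]]

# Element-wise loss with the same shape as the inputs.
result = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(result))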

tf.nn.weighted_cross_entropy_with_logits()
Like sigmoid cross-entropy, but multiplies the positive-class term by pos_weight, which lets you trade recall against precision on imbalanced data.
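
A minimal sketch with illustrative values; note that in TF 1.x the first argument is named targets (later releases rename it to labels):

import tensorflow as tf

targets = [[1.0, 0.0],
           [0.0, 1.0]]
logits = [[0.5, -1.0],
          [-0.3, 2.0]]

# pos_weight > 1 penalizes false negatives more heavily than false positives.
result = tf.nn.weighted_cross_entropy_with_logits(targets=targets, logits=logits,
                                                  pos_weight=3.0)

with tf.Session() as sess:
    print(sess.run(result))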

tf.nn.sampled_softmax_loss()
Computes a sampled softmax loss: instead of normalizing over all classes, it scores only a small random sample of negative classes per step, making training with very large output vocabularies tractable. It is meant for training only; evaluate with the full softmax.
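
A minimal sketch with hypothetical sizes (num_classes, dim, batch_size and the random inputs are all stand-ins):

import tensorflow as tf

num_classes = 10000  # e.g. a large vocabulary
dim = 128
batch_size = 32

weights = tf.get_variable("out_w", [num_classes, dim])  # output weight matrix
biases = tf.get_variable("out_b", [num_classes])
inputs = tf.random_normal([batch_size, dim])            # stand-in for hidden activations
labels = tf.random_uniform([batch_size, 1], maxval=num_classes, dtype=tf.int64)

# Only num_sampled negative classes are scored per step, not all 10000.
loss = tf.nn.sampled_softmax_loss(weights=weights, biases=biases, labels=labels,
                                  inputs=inputs, num_sampled=64, num_classes=num_classes)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss))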

tf.nn.scale_regularization_loss()
Divides the sum of the given regularization losses by the number of replicas, so that under a distribution strategy the regularization term is counted exactly once across replicas.
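
A minimal sketch, assuming a TensorFlow build that ships this op (it was added alongside the distribution-strategy APIs); the weight tensor is a stand-in:

import tensorflow as tf

weights = tf.ones([3, 3])    # stand-in for a model's weight tensor
l2 = tf.nn.l2_loss(weights)  # raw L2 regularization term

# With no distribution strategy there is a single replica, so the scaling is a no-op here.
reg_loss = tf.nn.scale_regularization_loss(l2)

with tf.Session() as sess:
    print(sess.run(reg_loss))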

Loss functions (and related helpers) in TensorFlow's tf.losses module; a usage sketch follows the list:

absolute_difference(…): Adds an Absolute Difference loss to the training procedure.

add_loss(…): Adds an externally defined loss to the collection of losses.

compute_weighted_loss(…): Computes the weighted loss.

cosine_distance(…): Adds a cosine-distance loss to the training procedure. (deprecated arguments)

get_losses(…): Gets the list of losses from the loss_collection.

get_regularization_loss(…): Gets the total regularization loss.

get_regularization_losses(…): Gets the list of regularization losses.

get_total_loss(…): Returns a tensor whose value represents the total loss.

hinge_loss(…): Adds a hinge loss to the training procedure.

huber_loss(…): Adds a Huber Loss term to the training procedure.

log_loss(…): Adds a Log Loss term to the training procedure.

mean_pairwise_squared_error(…): Adds a pairwise-errors-squared loss to the training procedure.

mean_squared_error(…): Adds a Sum-of-Squares loss to the training procedure.

sigmoid_cross_entropy(…): Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits.

softmax_cross_entropy(…): Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.

sparse_softmax_cross_entropy(…): Cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits.
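
As referenced above, a minimal sketch of how these functions interact: each tf.losses op registers its value in a loss collection, and get_total_loss() sums that collection (the numbers are illustrative):

import tensorflow as tf

predictions = tf.constant([[0.9], [2.1], [3.2]])
targets = tf.constant([[1.0], [2.0], [3.0]])

# Registers a mean-squared-error term in the default loss collection.
mse = tf.losses.mean_squared_error(labels=targets, predictions=predictions)

# Sums every collected loss, plus regularization losses by default.
total = tf.losses.get_total_loss()

with tf.Session() as sess:
    print(sess.run([mse, total]))  # equal here, since the MSE is the only loss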
