# TensorFlow Loss Functions and Custom Loss Functions (Part 2)

(1) TensorFlow's four built-in loss functions
(2) Other loss functions
(3) Custom loss functions

TensorLayer wraps a large amount of ready-made code and, being an open-source project, also publishes many code snippets. In this post we look at what loss functions are available beyond the four built into TensorFlow.

• Mean squared error (MSE) loss

```python
def mean_squared_error(output, target, is_mean=False, name="mean_squared_error"):
    """Return the TensorFlow expression of mean-squared-error (L2) of two batches of data.

    Parameters
    ----------
    output : Tensor
        2D, 3D or 4D tensor, i.e. [batch_size, n_feature], [batch_size, height, width]
        or [batch_size, height, width, channel].
    target : Tensor
        The target distribution, in the same format as output.
    is_mean : boolean
        Whether to compute the mean or the sum over each example.
        - If True, use tf.reduce_mean to compute the loss between one target and the prediction.
        - If False, use tf.reduce_sum (default).

    References
    ----------
    - `Wiki Mean Squared Error <https://en.wikipedia.org/wiki/Mean_squared_error>`__

    """
    with tf.name_scope(name):
        if output.get_shape().ndims == 2:  # [batch_size, n_feature]
            if is_mean:
                mse = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, target), 1))
            else:
                mse = tf.reduce_mean(tf.reduce_sum(tf.squared_difference(output, target), 1))
        elif output.get_shape().ndims == 3:  # [batch_size, w, h]
            if is_mean:
                mse = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, target), [1, 2]))
            else:
                mse = tf.reduce_mean(tf.reduce_sum(tf.squared_difference(output, target), [1, 2]))
        elif output.get_shape().ndims == 4:  # [batch_size, w, h, c]
            if is_mean:
                mse = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, target), [1, 2, 3]))
            else:
                mse = tf.reduce_mean(tf.reduce_sum(tf.squared_difference(output, target), [1, 2, 3]))
        else:
            raise Exception("Unknown dimension")
        return mse
```
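The heart of this function is the pair of reductions: sum (or mean) the squared differences over each example's feature axes, then average over the batch. The following NumPy sketch (the arrays are hypothetical toy data, not from the post) shows how `is_mean` changes the result for a `[batch_size, n_feature]` input:

```python
import numpy as np

# Hypothetical toy data: batch of 2 examples, 3 features each.
output = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
target = np.array([[1.0, 2.0, 2.0],
                   [4.0, 4.0, 6.0]])

sq_diff = (output - target) ** 2  # element-wise squared difference

# is_mean=False: sum squared errors per example, then average over the batch.
loss_sum = np.mean(np.sum(sq_diff, axis=1))    # (1 + 1) / 2 = 1.0

# is_mean=True: mean squared error per example, then average over the batch.
loss_mean = np.mean(np.mean(sq_diff, axis=1))  # (1/3 + 1/3) / 2 = 1/3
```

With `is_mean=True` the loss is invariant to the number of features, which makes learning rates easier to transfer between input sizes.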

• Dice coefficient loss

The Dice coefficient is one of the common ways to evaluate segmentation quality, and it can likewise serve as a loss function measuring the gap between a segmentation result and its label. Again, here is TensorLayer's implementation:

```python
def dice_coe(output, target, loss_type='jaccard', axis=(1, 2, 3), smooth=1e-5):
    """Soft Dice (Sørensen or Jaccard) coefficient for comparing the similarity
    of two batches of data, usually used for binary image segmentation,
    i.e. the labels are binary. The coefficient is between 0 and 1; 1 means a perfect match.

    Parameters
    ----------
    output : Tensor
        A distribution with shape: [batch_size, ....] (any dimensions).
    target : Tensor
        The target distribution, in the same format as output.
    loss_type : str
        ``jaccard`` or ``sorensen``, default is ``jaccard``.
    axis : tuple of int
        All dimensions to be reduced, default (1, 2, 3).
    smooth : float
        This small value is added to the numerator and denominator.
        - If both output and target are empty, it makes sure the dice is 1.
        - If either output or target is empty (all pixels are background),
          dice = smooth / (small_value + smooth); if smooth is very small, the dice
          is close to 0 (even when the image values are below the threshold), so in
          that case a higher smooth gives a higher dice.

    Examples
    --------
    >>> outputs = tl.act.pixel_wise_softmax(network.outputs)
    >>> dice_loss = 1 - tl.cost.dice_coe(outputs, y_)

    References
    ----------
    - `Wiki-Dice <https://en.wikipedia.org/wiki/Sørensen–Dice_coefficient>`__

    """
    inse = tf.reduce_sum(output * target, axis=axis)
    if loss_type == 'jaccard':
        l = tf.reduce_sum(output * output, axis=axis)
        r = tf.reduce_sum(target * target, axis=axis)
    elif loss_type == 'sorensen':
        l = tf.reduce_sum(output, axis=axis)
        r = tf.reduce_sum(target, axis=axis)
    else:
        raise Exception("Unknown loss_type")
    dice = (2. * inse + smooth) / (l + r + smooth)
    dice = tf.reduce_mean(dice)
    return dice
```
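To make the arithmetic concrete, here is a minimal NumPy re-implementation of the same reduction for a single flattened example (`pred` and `label` are hypothetical toy masks, not from the post). For hard binary masks, the Jaccard and Sørensen variants coincide, since x*x == x when x is 0 or 1:

```python
import numpy as np

def soft_dice(output, target, loss_type="jaccard", smooth=1e-5):
    """Soft Dice coefficient for one example, mirroring the TF reduction above."""
    inse = np.sum(output * target)               # intersection term
    if loss_type == "jaccard":
        l, r = np.sum(output * output), np.sum(target * target)
    elif loss_type == "sorensen":
        l, r = np.sum(output), np.sum(target)
    else:
        raise ValueError("Unknown loss_type")
    return (2.0 * inse + smooth) / (l + r + smooth)

pred  = np.array([1.0, 1.0, 0.0, 0.0])  # hypothetical binary prediction
label = np.array([1.0, 0.0, 0.0, 0.0])  # hypothetical ground-truth mask

dice = soft_dice(pred, label)           # 2*1 / (2 + 1) ≈ 0.667
dice_loss = 1.0 - dice                  # minimise this during training
```

Because the Dice coefficient rewards overlap relative to the total mask size, the derived loss `1 - dice` is far less sensitive to class imbalance than plain cross-entropy when the foreground occupies only a few pixels.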
