# KL Divergence in PyTorch


## KL Divergence

The KL divergence $D_{KL}$ measures the degree of difference between two probability distributions.

For discrete distributions:

$$D_{KL} = \sum_x P(x)\log\frac{P(x)}{Q(x)}$$

For continuous distributions:

$$D_{KL} = \int_x P(x)\log\frac{P(x)}{Q(x)}\,dx$$
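The discrete summation formula can be evaluated directly in PyTorch. This is a minimal sketch; the two example distributions `p` and `q` are made-up values chosen only for illustration.

```python
import torch

# Two illustrative discrete distributions over three outcomes
p = torch.tensor([0.4, 0.4, 0.2])  # P(x)
q = torch.tensor([0.3, 0.5, 0.2])  # Q(x)

# D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))
kl = torch.sum(p * torch.log(p / q))
print(kl.item())
```

Note that $D_{KL}$ is asymmetric: swapping `p` and `q` generally gives a different value, and the result is always non-negative.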

## PyTorch Implementation

`torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)`

See KLDivLoss for details.

• Parameters

input – Tensor of arbitrary shape

target – Tensor of the same shape as input

size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: `'none'` | `'batchmean'` | `'sum'` | `'mean'`. `'none'`: no reduction will be applied. `'batchmean'`: the sum of the output will be divided by the batch size. `'sum'`: the output will be summed. `'mean'`: the output will be divided by the number of elements in the output. Default: `'mean'`

log_target (bool) – A flag indicating whether target is passed in the log space. It is recommended to pass certain distributions (like softmax) in the log space to avoid numerical issues caused by explicit log. Default: False

`input` and `target` are tensors of the same shape, typically of size `number × feature`: the empirical distribution over the features is computed from `number` samples.
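A common pitfall with `F.kl_div` is the argument convention: `input` must be log-probabilities (log Q), while `target` is plain probabilities (P) when `log_target=False`. The sketch below checks this against the summation formula; the distributions are made-up illustrative values.

```python
import torch
import torch.nn.functional as F

# Shapes follow the number × feature convention: 1 sample, 3 features
p = torch.tensor([[0.4, 0.4, 0.2]])  # target distribution P
q = torch.tensor([[0.3, 0.5, 0.2]])  # approximating distribution Q

# F.kl_div(input=log Q, target=P) computes D_KL(P || Q)
loss = F.kl_div(q.log(), p, reduction='sum')

# Same value from the explicit summation formula
manual = torch.sum(p * torch.log(p / q))
print(torch.allclose(loss, manual))  # True
```

Passing raw probabilities as `input` (forgetting the `.log()`) is a silent bug: the call still runs but no longer computes a KL divergence.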

The `size_average` and `reduce` parameters are deprecated; use `reduction` instead.
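The `reduction` modes differ in what they divide the summed loss by, which matters for batched inputs. A small sketch with randomly generated softmax distributions (illustrative only):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Batch of 4 samples, 5 features each (number × feature)
p = torch.softmax(torch.randn(4, 5), dim=1)       # target probabilities
logq = torch.log_softmax(torch.randn(4, 5), dim=1)  # input log-probabilities

total = F.kl_div(logq, p, reduction='sum')

# 'batchmean' divides the sum by the batch size (4),
# 'mean' divides by the total number of elements (4 * 5 = 20)
print(torch.allclose(F.kl_div(logq, p, reduction='batchmean'), total / 4))
print(torch.allclose(F.kl_div(logq, p, reduction='mean'), total / 20))
```

Since the mathematical definition sums over the feature dimension, `'batchmean'` is the mode that yields the per-sample KL divergence averaged over the batch.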

