# KL Divergence: PyTorch Implementation

## KL Divergence

$D_{KL}$ measures the degree of difference between two probability distributions $P$ and $Q$.

For discrete distributions:

$$D_{KL}(P \| Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}$$

For continuous distributions:

$$D_{KL}(P \| Q) = \int_x P(x)\log\frac{P(x)}{Q(x)}\,dx$$
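The discrete formula can be computed directly. A minimal sketch, using made-up example distributions:

```python
import math

# Two illustrative discrete distributions over the same support
P = [0.4, 0.4, 0.2]
Q = [0.3, 0.5, 0.2]

# D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))
kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
print(kl)
```

Note that $D_{KL}$ is not symmetric: swapping `P` and `Q` generally gives a different value.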

## PyTorch Implementation

`torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)`

See KLDivLoss for details.

Parameters:

- **input** – Tensor of arbitrary shape.
- **target** – Tensor of the same shape as input.
- **size_average** (*bool, optional*) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
- **reduce** (*bool, optional*) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
- **reduction** (*string, optional*) – Specifies the reduction to apply to the output: `'none'` | `'batchmean'` | `'sum'` | `'mean'`. `'none'`: no reduction will be applied. `'batchmean'`: the sum of the output will be divided by the batch size. `'sum'`: the output will be summed. `'mean'`: the output will be divided by the number of elements in the output. Default: `'mean'`
- **log_target** (*bool*) – A flag indicating whether `target` is passed in the log space. It is recommended to pass certain distributions (like softmax) in the log space to avoid numerical issues caused by an explicit log. Default: `False`

`input` and `target` are tensors of the same shape, typically of size number × feature, i.e., an empirical distribution over the features computed from a number of samples.

The `size_average` and `reduce` parameters are deprecated; use `reduction` instead.
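Putting the above together, a minimal sketch of calling `F.kl_div` (the example distributions are made up): with the default `log_target=False`, `input` is expected to hold log-probabilities while `target` holds probabilities, and the argument order is `input` = log Q, `target` = P, which yields $D_{KL}(P \| Q)$:

```python
import torch
import torch.nn.functional as F

p = torch.tensor([[0.4, 0.4, 0.2]])  # target distribution P (probabilities)
q = torch.tensor([[0.3, 0.5, 0.2]])  # model distribution Q (probabilities)

# input must be log-probabilities; target stays in probability space
kl = F.kl_div(q.log(), p, reduction='sum')

# Manual check against the definition: sum of P * log(P / Q)
manual = (p * (p / q).log()).sum()
print(kl.item(), manual.item())
```

Note that with `reduction='mean'` the result is divided by the total number of elements rather than by the batch size, so it does not match the mathematical definition of KL divergence; for a per-sample average over a batch, `reduction='batchmean'` is the appropriate choice.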
