1. Original paper: "Wasserstein GAN"
The Wasserstein distance was first applied to generative models in this paper.
2. How it works:
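For reference, a standard statement of the definition used in the paper (not part of the original notes): the Wasserstein-1 (earth mover's) distance between the real distribution $P_r$ and the generated distribution $P_g$ is

$$
W(P_r, P_g) = \inf_{\gamma \in \Pi(P_r, P_g)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big],
$$

where $\Pi(P_r, P_g)$ is the set of all joint distributions (couplings) $\gamma(x, y)$ whose marginals are $P_r$ and $P_g$. Each $\gamma$ is a transport plan, and $W$ is the cost of the cheapest plan. The Sinkhorn code in the code section below computes an entropy-regularized approximation of exactly this optimal-transport cost.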
3. Advantages
- Interpretability of the Wasserstein distance:
  - KL divergence and JS divergence are information-theoretic measures of the difference between two probability distributions. Their mathematical definitions are fairly abstract and hard to picture.
  - The Wasserstein distance, by contrast, has a clear intuitive reading: it is the minimum cost (e.g., transport cost) of moving the probability mass of one distribution so that it becomes the other.
- Robustness of the Wasserstein distance:
  - KL divergence and JS divergence can be numerically unstable for discrete distributions, especially when the two distributions have little or no overlap: KL blows up (or is undefined) and JS saturates at the constant log 2, so neither gives a useful gradient.
  - The Wasserstein distance is more robust in comparison, and behaves better on high-dimensional data or on distributions whose supports barely overlap (see the numerical sketch after this list).
- Breadth of application of the Wasserstein distance:
  - In machine learning, image processing, natural language processing, and other fields, the Wasserstein distance is widely used to measure the similarity of two distributions, e.g. in designing the loss of generative adversarial networks (GANs) and in distribution matching for text data.
  - KL divergence and JS divergence can be too inflexible in some settings, whereas the Wasserstein distance adapts better to complex distribution structures.
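A small numerical sketch of the robustness point (assuming numpy and scipy are available; note that scipy.spatial.distance.jensenshannon returns the JS distance, i.e. the square root of the JS divergence):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

# p puts all its mass at 0, q puts all its mass at theta; both are
# represented as histograms over the two-point support {0, theta}.
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])

for theta in [1.0, 2.0, 5.0, 10.0]:
    # JS compares probability mass pointwise and ignores the geometry of
    # the support, so it cannot tell theta=1 from theta=10: it stays at
    # sqrt(log 2) ~ 0.8326 for every disjoint pair.
    js = jensenshannon(p, q)
    # The Wasserstein-1 distance keeps tracking how far apart the two
    # distributions really are: here it equals theta.
    w1 = wasserstein_distance([0.0], [theta])
    print(f"theta={theta:5.1f}  JS={js:.4f}  W1={w1:.4f}")
```

This saturation is exactly why a JS-based GAN loss gives vanishing gradients when the generated and real distributions do not overlap, while the Wasserstein distance still provides a usable training signal.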
Code
```python
import torch
import torch.nn as nn


# Adapted from https://github.com/gpeyre/SinkhornAutoDiff
class SinkhornDistance(nn.Module):
    r"""
    Given two empirical measures each with :math:`P_1` locations
    :math:`x\in\mathbb{R}^{D_1}` and :math:`P_2` locations :math:`y\in\mathbb{R}^{D_2}`,
    outputs an approximation of the regularized OT cost for point clouds.

    Args:
        eps (float): regularization coefficient
        max_iter (int): maximum number of Sinkhorn iterations
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'mean' | 'sum'. 'none': no reduction will be applied,
            'mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Default: 'none'

    Shape:
        - Input: :math:`(N, P_1, D_1)`, :math:`(N, P_2, D_2)`
        - Output: :math:`(N)` or :math:`()`, depending on `reduction`
    """
    def __init__(self, eps, max_iter, reduction='none'):
        super(SinkhornDistance, self).__init__()
        self.eps = eps
        self.max_iter = max_iter
        self.reduction = reduction
    def forward(self, x, y):
        # The Sinkhorn algorithm takes as input three variables:
        # the cost matrix C and the two marginals mu and nu.
        C = self._cost_matrix(x, y)  # Wasserstein cost function
        x_points = x.shape[-2]
        y_points = y.shape[-2]
        if x.dim() == 2:
            batch_size = 1
        else:
            batch_size = x.shape[0]

        # both marginals are fixed with equal weights
        mu = torch.empty(batch_size, x_points, dtype=torch.float,
                         requires_grad=False).fill_(1.0 / x_points).squeeze()
        nu = torch.empty(batch_size, y_points, dtype=torch.float,
                         requires_grad=False).fill_(1.0 / y_points).squeeze()

        u = torch.zeros_like(mu)
        v = torch.zeros_like(nu)
        # To check if algorithm terminates because of threshold
        # or max iterations reached
        actual_nits = 0
        # Stopping criterion
        thresh = 1e-1

        # Sinkhorn iterations
        for i in range(self.max_iter):
            u1 = u  # useful to check the update
            u = self.eps * (torch.log(mu + 1e-8) - torch.logsumexp(self.M(C, u, v), dim=-1)) + u
            v = self.eps * (torch.log(nu + 1e-8) - torch.logsumexp(self.M(C, u, v).transpose(-2, -1), dim=-1)) + v
            err = (u - u1).abs().sum(-1).mean()
            actual_nits += 1
            if err.item() < thresh:
                break

        U, V = u, v
        # Transport plan pi = diag(a)*K*diag(b)
        pi = torch.exp(self.M(C, U, V))
        # Sinkhorn distance
        cost = torch.sum(pi * C, dim=(-2, -1))

        if self.reduction == 'mean':
            cost = cost.mean()
        elif self.reduction == 'sum':
            cost = cost.sum()

        return cost, pi, C
    def M(self, C, u, v):
        r"""Modified cost for logarithmic updates:
        $M_{ij} = (-c_{ij} + u_i + v_j) / \epsilon$
        """
        return (-C + u.unsqueeze(-1) + v.unsqueeze(-2)) / self.eps
    @staticmethod
    def _cost_matrix(x, y, p=2):
        "Returns the matrix of $|x_i-y_j|^p$."
        x_col = x.unsqueeze(-2)
        y_lin = y.unsqueeze(-3)
        C = torch.sum((torch.abs(x_col - y_lin)) ** p, -1)
        return C
    @staticmethod
    def ave(u, u1, tau):
        "Barycenter subroutine, used by kinetic acceleration through extrapolation."
        return tau * u + (1 - tau) * u1
```
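A minimal usage sketch (the point-cloud shapes and the eps/max_iter values below are illustrative choices, not from the original code):

```python
import torch

# Hypothetical example: one batch of two 2-D point clouds
# with 100 and 200 points respectively.
x = torch.randn(1, 100, 2)  # (N, P_1, D_1)
y = torch.randn(1, 200, 2)  # (N, P_2, D_2)

# eps and max_iter are illustrative: a smaller eps approximates the
# unregularized Wasserstein cost more closely but needs more iterations.
sinkhorn = SinkhornDistance(eps=0.1, max_iter=100)
cost, pi, C = sinkhorn(x, y)

print(cost.shape)  # (1,): one regularized OT cost per batch element
print(pi.shape)    # (1, 100, 200): the transport plan between the clouds
```

Because the Sinkhorn iterations are plain differentiable tensor operations, the returned cost can be backpropagated through (e.g. when x is produced by a generator), which is what makes this approximation usable as a GAN training loss.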