Quoting from Wikipedia:
“A model of an unknown probability distribution p, may be proposed based on a training sample that was drawn from p. Given a proposed probability model q, one may evaluate q by asking how well it predicts a separate test sample x1, x2, ..., xN also drawn from p. The perplexity of the model q is defined as

PP(q) = b^( -(1/N) Σ_{i=1}^{N} log_b q(x_i) )

where b is customarily 2. Better models q of the unknown distribution p will tend to assign higher probabilities q(xi) to the test events. Thus, they have lower perplexity: they are less surprised by the test sample.”
Here p is the unknown probability distribution and q is the probability model obtained from training. In general, the better the trained model, the higher the probabilities q(x_i) it tends to assign to the test events, so its perplexity is lower and it generalizes better to the test set.
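To make the definition concrete, here is a minimal Python sketch; the function `perplexity` and the two probability lists are hypothetical values chosen only for illustration, not taken from the source.

```python
import math

def perplexity(test_probs, b=2.0):
    """Perplexity of a model over a test sample, given the probability
    q(x_i) the model assigns to each test event x_i:
    PP = b ** ( -(1/N) * sum_i log_b q(x_i) )."""
    n = len(test_probs)
    log_sum = sum(math.log(p, b) for p in test_probs)
    return b ** (-log_sum / n)

# Hypothetical probabilities two models assign to five test events.
good_model = [0.30, 0.25, 0.40, 0.35, 0.20]   # higher q(x_i)
weak_model = [0.05, 0.10, 0.08, 0.02, 0.06]   # lower q(x_i)

print(perplexity(good_model))  # smaller value: less "surprised" by the test sample
print(perplexity(weak_model))  # larger value: more "surprised"
```

The model that assigns higher probabilities to the test events ends up with the lower perplexity, matching the quoted statement.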
“The exponent may also be regarded as a cross-entropy,
P = 2^H(p,q)
where p denotes the empirical distribution of the test sample (i.e., p(x) = n/N if x appeared n times in the test sample of size N).”
Perplexity can therefore be viewed as the exponential of the cross-entropy; both quantities measure the discrepancy between two probability distributions. Here p is the empirical distribution of the test set and q is the trained probability model: the closer the two distributions are, the smaller the cross-entropy, and the smaller the perplexity.
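A small sketch of this equivalence, assuming a toy vocabulary and a hypothetical model q (the names `cross_entropy`, `q`, and `test_sample` are made up for illustration): it computes perplexity once as 2^H(p, q) with p the empirical distribution of the test sample, and once from the per-event definition, and the two agree.

```python
import math
from collections import Counter

def cross_entropy(test_sample, q, b=2.0):
    """H(p, q) = -sum_x p(x) * log_b q(x), where p is the empirical
    distribution of the test sample (p(x) = n_x / N)."""
    N = len(test_sample)
    counts = Counter(test_sample)
    return -sum((n / N) * math.log(q[x], b) for x, n in counts.items())

# Hypothetical model q over a small vocabulary (probabilities sum to 1).
q = {"a": 0.5, "b": 0.3, "c": 0.2}
test_sample = ["a", "a", "b", "c", "a", "b"]

H = cross_entropy(test_sample, q)
print(2 ** H)  # perplexity as 2^H(p, q)

# The same number via the per-event definition b^(-(1/N) * sum_i log_b q(x_i)).
N = len(test_sample)
print(2 ** (-sum(math.log(q[x], 2) for x in test_sample) / N))
```

Both prints give the same value, since averaging log q(x_i) over the test events is exactly the cross-entropy of the empirical distribution against q.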