Entropy and the Bayesian Definition (2018-07-02 07:29:09)


1. Interpretation under Bayesian inference: $D_{KL}(P \Vert Q)$ is a measure of the information gained when one revises one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P. In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P. To find the distribution Q that is closest to P, we can minimize the KL divergence and compute an information projection (see the first sketch after this list).
2. Imagine a coder designed for a source that generates symbols according to a probability distribution Q. What happens if the source instead generates symbols drawn from a different probability distribution, P? If the coder had been designed for P (instead of for Q), it would need $H(P)$ bits per symbol. But in this case, our coder was designed for Q, so it ends up generating $H(P,Q)$ bits per symbol. (This is the "cross entropy" between P and Q.) The difference between $H(P,Q)$ and $H(P)$ is exactly $D_{KL}(P \Vert Q)$: the expected number of extra bits per symbol paid for coding against Q when the true distribution is P (checked numerically in the second sketch below).
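
For the first interpretation, here is a minimal sketch of computing $D_{KL}(P \Vert Q) = \sum_i P(i)\log_2\frac{P(i)}{Q(i)}$ for discrete distributions. The arrays `p` and `q` are illustrative examples, not from the original post:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in bits, for discrete distributions on the same support.

    Assumes q[i] > 0 wherever p[i] > 0; terms with p[i] == 0 contribute 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Illustrative distributions: P is the "true" one, Q an approximation of it.
p = [0.5, 0.25, 0.25]
q = [0.4, 0.4, 0.2]

print(kl_divergence(p, q))  # > 0: information is lost approximating P by Q
print(kl_divergence(p, p))  # 0.0: nothing is lost approximating P by itself
```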
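
For the second interpretation, the identity $H(P,Q) - H(P) = D_{KL}(P \Vert Q)$ can be checked numerically. A sketch under the same assumptions (discrete distributions, base-2 logarithms so lengths are in bits):

```python
import numpy as np

def entropy(p):
    """H(P) in bits: average code length of a code optimized for P."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(p, q):
    """H(P, Q) in bits: average code length when a code designed for Q
    is used on symbols actually drawn from P."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log2(q[mask])))

p = [0.5, 0.25, 0.25]
q = [0.4, 0.4, 0.2]

extra_bits = cross_entropy(p, q) - entropy(p)
print(extra_bits)  # equals kl_divergence(p, q) from the previous sketch
```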