Study Notes
Thomas zcy
Bayesian Neural Network Recent Papers
Topics covered: Methods (Variational Inference (VI), Markov Chain Monte Carlo, MCMC + VI, Ensemble Sampling (ES), Particle Optimization, Laplace Approximation, Expectation Propagation (EP), Others); Theory (Gaussian Process, Dropout); Issues; Other.
Original · 2020-11-05 15:12:36 · 1517 views · 0 comments
Variational Inference - ELBO
Variational inference is equivalent to minimizing the KL divergence:
$$KL\left(q_{\boldsymbol{\theta}}(\boldsymbol{\omega}) \,\|\, p(\boldsymbol{\omega} \mid \mathcal{D})\right)=\int q_{\boldsymbol{\theta}}(\boldsymbol{\omega}) \log \frac{q_{\boldsymbol{\theta}}(\boldsymbol{\omega})}{p(\boldsymbol{\omega} \mid \mathcal{D})}\, d\boldsymbol{\omega}$$
Original · 2020-11-04 20:10:58 · 3268 views · 0 comments
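For a discrete variational distribution, the integral above becomes a sum, which is easy to sketch in plain Python (the function name and toy distributions below are illustrative, not from the post):

```python
import math

def kl_divergence(q, p):
    """KL(q || p) for discrete distributions given as probability lists.

    Terms with q_i == 0 contribute nothing (0 * log 0 is taken as 0).
    """
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Toy example: KL is zero iff the distributions match, positive otherwise.
q = [0.4, 0.6]
p = [0.5, 0.5]
print(kl_divergence(q, p))  # small positive value
print(kl_divergence(p, p))  # 0.0
```

In variational inference the posterior $p(\omega \mid \mathcal{D})$ is intractable, so in practice one maximizes the ELBO instead of evaluating this KL directly.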
Confusion Matrix
How can we measure the effectiveness of our model? The better the effectiveness, the better the performance, and that is exactly what we want. This is where the confusion matrix comes into the limelight. The confusion matrix is a perf…
Original · 2020-11-01 14:16:19 · 171 views · 0 comments
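A minimal sketch of building a confusion matrix from labels, with rows as true classes and columns as predictions (this row/column convention and the helper name are my assumptions, not from the post):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """Count (true, predicted) pairs: rows = true class, cols = predicted class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Toy binary example.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
m = confusion_matrix(y_true, y_pred, 2)
# m == [[1, 1], [1, 2]]: 1 TN, 1 FP, 1 FN, 2 TP (treating class 1 as positive)
accuracy = sum(m[i][i] for i in range(2)) / len(y_true)  # diagonal / total = 0.6
```

Metrics such as precision and recall fall out of the same matrix by normalizing over columns or rows respectively.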
Cross Entropy & KL Divergence
Cross entropy: information theory studies how to quantify the information contained in data. The most important measure is entropy, usually written $H$. The entropy of a distribution is:
$$H=-\sum_{i=1}^{N} p\left(x_{i}\right) \cdot \log p\left(x_{i}\right)$$
Example: H((1, 0, 0), (0.5, 0.2, 0.3)) = -log 0.5 ≈ 0.30…
Original · 2020-11-01 11:37:47 · 423 views · 0 comments
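The worked example above can be reproduced directly; note the 0.30 figure implies a base-10 logarithm. A short sketch (the function name is mine, and the base parameter is an assumption made to match the quoted value):

```python
import math

def cross_entropy(p, q, base=10):
    """H(p, q) = -sum_i p_i * log(q_i); base 10 to match the 0.30 figure above."""
    return -sum(pi * math.log(qi, base) for pi, qi in zip(p, q) if pi > 0)

# The example from the post: only the first term survives since p = (1, 0, 0).
h = cross_entropy([1, 0, 0], [0.5, 0.2, 0.3])
print(round(h, 3))  # 0.301, i.e. -log10(0.5)
```

With the natural log the same example gives about 0.693, and with base 2 exactly 1 bit; only the base changes, not the structure of the formula.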