CMU 11-785 L22 Revisiting EM algorithm and generative models

Key points

  • EM: An iterative technique to estimate probability models for data with missing components or information
    • By iteratively “completing” the data and reestimating parameters
  • PCA: Is actually a generative model for Gaussian data
    • Data lie close to a linear manifold, with orthogonal noise
    • A linear autoencoder! (see the sketch after this list)
  • Factor Analysis: Also a generative model for Gaussian data
    • Data lie close to a linear manifold
    • Like PCA, but without directional constraints on the noise (not necessarily orthogonal)
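To make the “PCA is a linear autoencoder” view concrete, here is a minimal NumPy sketch (the 2-D toy data and all variable names are illustrative, not from the lecture): it projects centered data onto the top principal direction (the encoder), maps back (the decoder), and checks that the residual noise is orthogonal to the linear manifold.

```python
import numpy as np

# Toy data: points near a 1-D linear manifold in 2-D, plus small noise
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))                        # latent coordinate
X = t @ np.array([[2.0, 1.0]]) + 0.1 * rng.normal(size=(200, 2))
X = X - X.mean(axis=0)                               # center the data

# PCA via SVD: principal directions are the right singular vectors
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:1].T                                         # top direction, shape (2, 1)

# "Linear autoencoder": encode = project onto W, decode = map back
Z = X @ W                                            # encoder output (latents)
X_hat = Z @ W.T                                      # decoder output (reconstruction)

# PCA's generative view: residual noise is orthogonal to the manifold
residual = X - X_hat
print(np.abs(residual @ W).max())                    # ~0 up to floating point
```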

Generative models

Learning a generative model

  • You are given some set of observed data $X = \{x\}$
  • You choose a model $P(x; \theta)$ for the distribution of $x$
    • $\theta$ are the parameters of the model
  • Estimate $\theta$ such that $P(x; \theta)$ best “fits” the observations $X = \{x\}$
  • How to define “best fits”?
    • Maximum likelihood! (a minimal sketch follows this list)
    • Assumption: The data you have observed are very typical of the process
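To make “best fits = maximum likelihood” concrete, here is a minimal sketch of fitting a 1-D Gaussian model $P(x; \theta)$ with $\theta = (\mu, \sigma^2)$; the closed-form ML estimates below maximize $\sum_x \log P(x; \theta)$, and the synthetic data are purely illustrative.

```python
import numpy as np

# Observed data X = {x}; here drawn from a Gaussian whose parameters
# we pretend not to know
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=2.0, size=1000)

# Maximum-likelihood estimates for a Gaussian P(x; mu, sigma^2):
# these are the closed-form maximizers of the log-likelihood
mu_hat = X.mean()
var_hat = ((X - mu_hat) ** 2).mean()   # MLE uses 1/N, not 1/(N-1)

# The maximized log-likelihood itself
log_lik = -0.5 * len(X) * np.log(2 * np.pi * var_hat) \
          - ((X - mu_hat) ** 2).sum() / (2 * var_hat)
print(mu_hat, var_hat, log_lik)        # mu_hat ~ 3, var_hat ~ 4
```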

EM algorithm

  • Tackles the problem of missing data or information in model estimation
  • Let $o$ be the observed data and $h$ the missing (hidden) data

$$\log P(o) = \log \sum_{h} P(h, o) = \log \sum_{h} Q(h)\,\frac{P(h, o)}{Q(h)}$$

  • The logarithm is a concave function, so by Jensen's inequality

$$\log \sum_{h} Q(h)\,\frac{P(h, o)}{Q(h)} \;\geq\; \sum_{h} Q(h) \log \frac{P(h, o)}{Q(h)}$$

  • This lower bound holds for any distribution $Q(h)$ over the hidden variables
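Iterating this bound is exactly EM: the E-step sets $Q(h) = P(h \mid o; \theta)$, which makes the bound tight at the current $\theta$ (this is the “completing the data” step), and the M-step re-estimates $\theta$ by maximizing $\sum_h Q(h) \log P(h, o; \theta)$. Here is a minimal sketch for a two-component 1-D Gaussian mixture, where the unobserved component label plays the role of $h$; the toy data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Observed data o: a two-component Gaussian mixture; the component
# labels h are the missing information
o = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial parameters theta = (mixture weights, means, variances)
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: Q(h) = P(h | o; theta), the posterior over missing labels
    dens = pi * np.exp(-0.5 * (o[:, None] - mu) ** 2 / var) \
           / np.sqrt(2 * np.pi * var)                 # shape (N, 2)
    Q = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate theta from the "completed" data
    Nk = Q.sum(axis=0)
    pi = Nk / len(o)
    mu = (Q * o[:, None]).sum(axis=0) / Nk
    var = (Q * (o[:, None] - mu) ** 2).sum(axis=0) / Nk

print(pi, mu, var)   # should approach roughly (0.3, 0.7), (-2, 3), (1, 1)
```

Each iteration can only increase (never decrease) $\log P(o)$, which is why EM converges to a local optimum of the likelihood.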
