An EM Algorithm Example (Python)

A worked example is the quickest way to grasp the basic idea of the EM algorithm; for the full derivation, see the link at the end of this post. Figure a is a warm-up; figure b is the EM example itself.
This is a coin-tossing experiment: H denotes heads, T denotes tails, and the parameter θ is the probability of heads. There are two biased coins, A and B. The experiment consists of 5 groups; in each group one coin is chosen at random and tossed 10 times in a row. If we knew which coin was tossed in each group, estimating θ would be trivial: the maximum-likelihood estimate for coin A is simply (heads observed in A's groups) / (total tosses in A's groups), and likewise for coin B, as shown in the figure below:
[Figure a: parameter estimates when the coin identities are known]

What if we do not know which coin was tossed in each group? Then we need the EM algorithm, whose basic steps are:
1. Give $\theta_A$ and $\theta_B$ initial values;
2. (E-step) Estimate the probability that each group of tosses came from coin A (the probability it came from coin B is 1 minus that), then compute, for each group, the expected number of heads attributable to coin A and to coin B (see the formula after this list);
3. (M-step) Use the expected counts from step 2 to recompute $\theta_A$ and $\theta_B$;
4. Stop when a set number of iterations is reached or the algorithm has converged to the desired precision; otherwise return to step 2.
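
Concretely, for a group with $h$ heads out of $n = 10$ tosses, the E-step weight for coin A follows from Bayes' rule with equal prior probability on the two coins; the binomial coefficients cancel in the ratio:

$$w_A = \frac{\theta_A^{h}(1-\theta_A)^{n-h}}{\theta_A^{h}(1-\theta_A)^{n-h} + \theta_B^{h}(1-\theta_B)^{n-h}}, \qquad w_B = 1 - w_A$$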
[Figure b: the EM iterations for the two-coin example]
Computation walkthrough: start from the initial values $\theta_A^{(0)} = 0.6$ and $\theta_B^{(0)} = 0.5$.
With these initial values, the probability of tossing the first group's 5 heads and 5 tails is $p_A = C_{10}^{5} \cdot 0.6^5 \cdot 0.4^5$ under coin A and $p_B = C_{10}^{5} \cdot 0.5^5 \cdot 0.5^5$ under coin B, so $p_A/(p_A+p_B) = 0.449$. The 0.45 in the figure is 0.449 rounded; it is the probability that the first group was tossed with coin A. Then $0.449 \times 5\text{H} = 2.2\text{H}$ and $0.449 \times 5\text{T} = 2.2\text{T}$: in the first group, the expected numbers of heads and tails attributable to coin A are both 2.2. The remaining entries follow the same pattern. Finally, summing the expected counts over all five groups gives $\theta_A^{(1)} = 0.71$ and $\theta_B^{(1)} = 0.58$. Repeating this process iteratively, the algorithm eventually converges to the desired precision.
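
As a quick sanity check of the first group's numbers, here is a minimal snippet (the binomial coefficient $C_{10}^{5}$ cancels in the ratio, so it is omitted):

```python
theta_A, theta_B = 0.6, 0.5
heads, tails = 5, 5

p_A = theta_A**heads * (1 - theta_A)**tails  # likelihood under coin A (up to C(10,5))
p_B = theta_B**heads * (1 - theta_B)**tails  # likelihood under coin B (up to C(10,5))

w_A = p_A / (p_A + p_B)
print(round(w_A, 3))          # 0.449, the probability the first group used coin A
print(round(w_A * heads, 1))  # 2.2, expected heads credited to coin A
```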
This blog post derives the EM algorithm in detail:

https://blog.csdn.net/zhihua_oba/article/details/73776553

Python Implementation

# coding=utf-8
import numpy as np
from scipy import stats
import time
start = time.perf_counter()

def em_single(priors, observations):
    """
    A single iteration of the EM algorithm.

    Arguments
    ---------
    priors : [theta_A, theta_B]
    observations : m x n matrix, one row of 0/1 tosses per group

    Returns
    -------
    new_priors : [new_theta_A, new_theta_B]
    """
    counts = {'A': {'H': 0, 'T': 0}, 'B': {'H': 0, 'T': 0}}
    theta_A = priors[0]
    theta_B = priors[1]
    # E step
    for observation in observations:
        len_observation = len(observation)
        num_heads = observation.sum()
        num_tails = len_observation - num_heads
        # Binomial pmf: likelihood of this head count under each coin
        contribution_A = stats.binom.pmf(num_heads, len_observation, theta_A)
        contribution_B = stats.binom.pmf(num_heads, len_observation, theta_B)

        weight_A = contribution_A / (contribution_A + contribution_B)
        weight_B = contribution_B / (contribution_A + contribution_B)
        # Accumulate the expected head/tail counts credited to coins A and B under the current parameters
        counts['A']['H'] += weight_A * num_heads
        counts['A']['T'] += weight_A * num_tails
        counts['B']['H'] += weight_B * num_heads
        counts['B']['T'] += weight_B * num_tails

    # M step
    new_theta_A = counts['A']['H'] / (counts['A']['H'] + counts['A']['T'])
    new_theta_B = counts['B']['H'] / (counts['B']['H'] + counts['B']['T'])
    return [new_theta_A,new_theta_B]


def em(observations, prior, tol=1e-6, iterations=10000):
    """
    Run the EM algorithm to convergence.

    :param observations: observed data (m x n matrix of tosses)
    :param prior: initial parameters [theta_A, theta_B]
    :param tol: threshold on the change in theta_A for declaring convergence
    :param iterations: maximum number of iterations
    :return: locally optimal parameters and the number of iterations used
    """
    iteration = 0
    new_prior = prior
    while iteration < iterations:
        new_prior = em_single(prior, observations)
        delta_change = abs(prior[0] - new_prior[0])
        if delta_change < tol:
            break
        prior = new_prior
        iteration += 1
    return [new_prior, iteration]

# Coin-toss results: 1 = heads, 0 = tails; one row per group of 10 tosses
observations = np.array([[1, 0, 0, 0, 1, 1, 0, 1, 0, 1],
                         [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
                         [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],
                         [1, 0, 1, 0, 0, 0, 1, 1, 0, 0],
                         [0, 1, 1, 1, 0, 1, 1, 1, 0, 1]])

print(em(observations, [0.6, 0.5]))
end = time.perf_counter()
print('Running time: %f seconds'%(end-start))
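
Note that the stopping test above only watches the change in theta_A. A minimal sketch of a stricter (hypothetical) variant, em_both, which declares convergence only when both parameters have stabilized:

```python
def em_both(observations, prior, tol=1e-6, iterations=10000):
    """Same loop as em(), but requires both thetas to stop moving."""
    iteration = 0
    new_prior = prior
    while iteration < iterations:
        new_prior = em_single(prior, observations)
        # Converged only when neither parameter moved more than tol
        if max(abs(prior[0] - new_prior[0]),
               abs(prior[1] - new_prior[1])) < tol:
            break
        prior = new_prior
        iteration += 1
    return [new_prior, iteration]
```

With the data above and the initial guess [0.6, 0.5], either version should converge to roughly $\theta_A \approx 0.80$ and $\theta_B \approx 0.52$.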
Below is a simple Python code example of the EM algorithm for estimating the parameters of a Gaussian mixture model:

```python
import numpy as np

# Initialize the Gaussian mixture model parameters
def init_params(data, k):
    n = data.shape[0]
    d = data.shape[1]
    # Initialize the mixing weights
    weights = np.ones(k) / k
    # Initialize the means
    means = np.random.randn(k, d)
    # Initialize the covariance matrices
    covs = np.zeros((k, d, d))
    for i in range(k):
        covs[i] = np.eye(d)
    return weights, means, covs

# E step
def e_step(data, weights, means, covs):
    k = weights.shape[0]
    n = data.shape[0]
    # Initialize the posterior probability (responsibility) matrix
    gamma = np.zeros((n, k))
    # Compute the posterior probabilities
    for i in range(n):
        for j in range(k):
            gamma[i, j] = weights[j] * normal_pdf(data[i], means[j], covs[j])
        gamma[i] /= np.sum(gamma[i])
    return gamma

# M step
def m_step(data, gamma):
    k = gamma.shape[1]
    n = data.shape[0]
    d = data.shape[1]
    # Update the mixing weights
    weights = np.sum(gamma, axis=0) / n
    # Update the means
    means = np.zeros((k, d))
    for j in range(k):
        for i in range(n):
            means[j] += gamma[i, j] * data[i]
        means[j] /= np.sum(gamma[:, j])
    # Update the covariance matrices
    covs = np.zeros((k, d, d))
    for j in range(k):
        for i in range(n):
            diff = data[i] - means[j]
            covs[j] += gamma[i, j] * np.outer(diff, diff)
        covs[j] /= np.sum(gamma[:, j])
    return weights, means, covs

# Evaluate the multivariate Gaussian density
def normal_pdf(x, mean, cov):
    d = x.shape[0]
    coeff = 1.0 / np.sqrt((2 * np.pi)**d * np.linalg.det(cov))
    diff = x - mean
    exponent = -0.5 * np.dot(np.dot(diff.T, np.linalg.inv(cov)), diff)
    return coeff * np.exp(exponent)

# EM algorithm
def em_algorithm(data, k, max_iter):
    weights, means, covs = init_params(data, k)
    for i in range(max_iter):
        gamma = e_step(data, weights, means, covs)
        weights, means, covs = m_step(data, gamma)
    return weights, means, covs
```

Here `data` is the input data, `k` is the number of Gaussian components, and `max_iter` is the maximum number of iterations. `init_params` initializes the mixture parameters randomly; `e_step` computes the posterior responsibility matrix; `m_step` uses that matrix to update the mixture parameters; `normal_pdf` evaluates the multivariate Gaussian density; and `em_algorithm` ties these together to estimate the parameters of the Gaussian mixture model.
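For completeness, a hypothetical usage sketch with synthetic data (the clusters, seed, and point counts below are illustrative assumptions, not from the original post):

```python
np.random.seed(0)
# Two synthetic 2-D clusters of 100 points each
data = np.vstack([np.random.randn(100, 2),
                  np.random.randn(100, 2) + np.array([4.0, 4.0])])

weights, means, covs = em_algorithm(data, k=2, max_iter=50)
print("weights:", weights)  # mixing proportions
print("means:\n", means)    # component means; quality depends on the random init
```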