scipy.stats.multivariate_normal: the Gaussian (multivariate normal) distribution

Reference: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.multivariate_normal.html

scipy.stats.multivariate_normal

 

Parameters:

x : array_like

Quantiles, with the last axis of x denoting the components.

mean : array_like, optional

Mean of the distribution (default zero)

cov : array_like, optional

Covariance matrix of the distribution (default one)

Alternatively, the object may be called (as a function) to fix the mean and covariance parameters, returning a “frozen” multivariate normal random variable:

rv = multivariate_normal(mean=None, cov=1)

  • Frozen object with the same methods but holding the given mean and covariance fixed.
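As a quick sketch of the frozen form: the mean and covariance values below are arbitrary illustrative choices, but the methods shown (`pdf`, `rvs`) are the standard ones on the frozen object.

```python
from scipy.stats import multivariate_normal

# Freeze a 2-D Gaussian with a fixed mean and covariance
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.0], [0.0, 1.0]])

# The frozen object exposes the same methods, with the parameters held fixed
p = rv.pdf([0.0, 0.0])                     # density at the mean, 1/(2*pi)
samples = rv.rvs(size=5, random_state=0)   # 5 random draws, shape (5, 2)
print(p, samples.shape)
```

Freezing is convenient when the same distribution is evaluated or sampled repeatedly, since the parameters no longer have to be passed on every call.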

The probability density function for multivariate_normal is

f(x) = \frac{1}{\sqrt{(2 \pi)^k \det \Sigma}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right),

where \mu is the mean, \Sigma the covariance matrix, and k is the dimension of the space where x takes values.
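The density formula above can be checked numerically against `multivariate_normal.pdf`. The helper `gaussian_pdf` and the particular mean/covariance values below are illustrative choices, not part of the scipy API:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, sigma):
    """Evaluate the multivariate normal density formula directly."""
    k = len(mu)
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff) / norm

mu = np.array([1.0, -1.0])
sigma = np.array([[2.0, 0.3], [0.3, 1.0]])  # symmetric positive definite
x = np.array([0.5, 0.0])

manual = gaussian_pdf(x, mu, sigma)
scipy_val = multivariate_normal.pdf(x, mean=mu, cov=sigma)
print(np.isclose(manual, scipy_val))  # True
```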

 

Example:

``` python
from scipy.stats import multivariate_normal
import numpy as np
import matplotlib.pyplot as plt

# 10 evenly spaced quantiles in [0, 5)
x = np.linspace(0, 5, 10, endpoint=False)

# Evaluate the 1-D density with mean 2.5 and variance 0.5
y = multivariate_normal.pdf(x, mean=2.5, cov=0.5)

plt.plot(x, y)
plt.show()
```

Gaussian mixture clustering is a clustering algorithm based on probability density functions. It assumes that each sample in the dataset is drawn from a mixture of several Gaussian distributions, so the goal of the algorithm is to find each Gaussian's mean and covariance, along with each component's weight over the whole dataset. In practice, Gaussian mixture clustering is widely used in image segmentation, speech recognition, financial analysis, and other fields. Below is an example implementation of Gaussian mixture clustering in Python with NumPy, using the iris dataset:

``` python
import numpy as np
from scipy.stats import multivariate_normal

class GaussianMixtureModel:
    def __init__(self, n_components, n_iterations):
        self.n_components = n_components
        self.n_iterations = n_iterations

    def fit(self, X):
        # Initialize parameters
        n_samples, n_features = X.shape
        self.weights = np.ones(self.n_components) / self.n_components
        self.means = X[np.random.choice(n_samples, self.n_components, replace=False)]
        self.covariances = np.array([np.eye(n_features)] * self.n_components)

        # Expectation-maximization algorithm
        for i in range(self.n_iterations):
            # E-step: compute responsibilities
            pdfs = np.zeros((n_samples, self.n_components))
            for j in range(self.n_components):
                pdfs[:, j] = self.weights[j] * multivariate_normal.pdf(X, self.means[j], self.covariances[j])
            self.responsibilities = pdfs / np.sum(pdfs, axis=1, keepdims=True)

            # M-step: update parameters
            self.weights = np.mean(self.responsibilities, axis=0)
            self.means = np.dot(self.responsibilities.T, X) / np.sum(self.responsibilities, axis=0)[:, np.newaxis]
            for j in range(self.n_components):
                diff = X - self.means[j]
                self.covariances[j] = np.dot(self.responsibilities[:, j] * diff.T, diff) / np.sum(self.responsibilities[:, j])

    def predict(self, X):
        pdfs = np.zeros((X.shape[0], self.n_components))
        for j in range(self.n_components):
            pdfs[:, j] = self.weights[j] * multivariate_normal.pdf(X, self.means[j], self.covariances[j])
        return np.argmax(pdfs, axis=1)

# Load iris dataset
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)

# Fit Gaussian mixture model
gmm = GaussianMixtureModel(n_components=3, n_iterations=100)
gmm.fit(X)

# Predict clusters
y_pred = gmm.predict(X)

# Print accuracy
from sklearn.metrics import accuracy_score
print("Accuracy:", accuracy_score(y, y_pred))
```
In the code above, we first define a GaussianMixtureModel class with two parameters: n_components, the number of Gaussian components in the mixture, and n_iterations, the maximum number of iterations of the expectation-maximization algorithm. In the fit method, we first initialize the model parameters: the weights, means, and covariance matrices. We then iterate the expectation-maximization algorithm, where the E-step computes the probability that each sample belongs to each Gaussian component and the M-step updates the model parameters. After training and predicting on the iris dataset with the code above, the clustering accuracy can be computed with the accuracy_score function from sklearn.metrics.
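One caveat about the accuracy check: since the EM initialization picks random samples as starting means, the learned components can come out in any order, so comparing raw cluster indices to the true class labels with accuracy_score can understate the result. A common fix is to first match cluster labels to true labels via Hungarian assignment on the confusion matrix; the `clustering_accuracy` helper below is a hypothetical sketch of that idea, not part of scikit-learn:

``` python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix, accuracy_score

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one relabelings of the clusters."""
    cm = confusion_matrix(y_true, y_pred)
    # Maximize the total count on the matched diagonal
    rows, cols = linear_sum_assignment(-cm)
    mapping = {c: r for r, c in zip(rows, cols)}
    y_mapped = np.array([mapping[c] for c in y_pred])
    return accuracy_score(y_true, y_mapped)

# Clusters found in a permuted order still score perfectly
print(clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 2, 2, 0, 0]))  # 1.0
```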
