Machine Learning: The Naive Bayes Algorithm

Probability theory prerequisites

Conditional probability: let A and B be two events with $P(A)>0$. Then $P(B|A)=\frac{P(AB)}{P(A)}$ is called the conditional probability of B given that A has occurred.

Independence of events: if the two events A and B are mutually independent, then $P(AB)=P(A)P(B)$.

Bayes' formula: let $B_1,B_2,\cdots,B_n$ be a complete group of events (a partition of the sample space), each with nonzero probability. Then for any event A with $P(A)>0$:

$$P(B_j|A)=\frac{P(B_j)P(A|B_j)}{\sum\limits_{i=1}^n P(B_i)P(A|B_i)},\quad j=1,2,\cdots,n$$
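A quick worked example (numbers invented purely for illustration): with priors $P(B_1)=0.3$, $P(B_2)=0.7$ and likelihoods $P(A|B_1)=0.8$, $P(A|B_2)=0.1$,

$$P(B_1|A)=\frac{0.3\times 0.8}{0.3\times 0.8+0.7\times 0.1}=\frac{0.24}{0.31}\approx 0.774$$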
The naive Bayes algorithm assumes that all features are mutually independent. Given a feature vector $\mathbf{x}$, it uses the probability formulas above to compute $P(c|\mathbf{x})$ and selects the class label with the largest probability.

Based on the attribute conditional independence assumption:

$$P(c|\mathbf{x})=\frac{P(c)P(\mathbf{x}|c)}{P(\mathbf{x})}=\frac{P(c)}{P(\mathbf{x})}\prod\limits_{i=1}^d P(x_i|c)$$

where $d$ is the number of attributes and $x_i$ is the value of $\mathbf{x}$ on the $i$-th attribute.

Since $P(\mathbf{x})$ is the same for every class, we obtain the naive Bayes decision rule:

$$h_{nb}(\mathbf{x})=\arg\max_{c\in\mathcal{Y}} P(c)\prod\limits_{i=1}^d P(x_i|c)$$

where $N$ denotes the total number of class labels and $\mathcal{Y}=\{c_1,c_2,\cdots,c_N\}$.

  • By the law of large numbers, when the training set contains sufficiently many independent and identically distributed samples, $P(c)$ can be estimated by the frequency of each class. Let $D_c$ denote the set of class-$c$ samples in the training set $D$; then $P(c)=\frac{|D_c|}{|D|}$.
  • For a discrete attribute, let $D_{c,x_i}$ denote the subset of $D_c$ whose $i$-th attribute takes the value $x_i$. The conditional probability $P(x_i|c)$ can then be estimated as $P(x_i|c)=\frac{|D_{c,x_i}|}{|D_c|}$.
  • For a continuous attribute, we use a probability density function instead. Assume $p(x_i|c) \sim N(\mu_{c,i},\sigma^2_{c,i})$, where $\mu_{c,i}$ and $\sigma^2_{c,i}$ denote the mean and variance of the $i$-th attribute over the class-$c$ samples; then $p(x_i|c)=\frac{1}{\sqrt{2\pi}\sigma_{c,i}}\exp\left(-\frac{(x_i-\mu_{c,i})^2}{2\sigma^2_{c,i}}\right)$. A worked example of these estimates follows the list.
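As a concrete example (assuming the standard 14-instance weather data used later in this post, where 5 samples have class N and 3 of those have Outlook = Sunny):

$$P(\text{N})=\frac{5}{14}\approx 0.357,\qquad P(\text{Sunny}\mid\text{N})=\frac{3}{5}=0.6$$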

Note that as soon as one factor in the product is 0 (i.e., some attribute value never co-occurs with some class in the training set), the whole product becomes 0, which breaks the decision rule. Laplacian correction is therefore used to keep the information carried by the other attributes from being "wiped out" by attribute values unseen in the training set:

$$\hat P(c)=\frac{|D_c|+1}{|D|+N},\qquad \hat P(x_i|c)=\frac{|D_{c,x_i}|+1}{|D_c|+N_i}$$

where $N$ is the number of possible classes in $D$ and $N_i$ is the number of possible values of the $i$-th attribute.
As a result:

$$\hat P(c)>0 \quad\text{and}\quad \sum_{i=1}^N\hat P(c_i)=1$$

and likewise for $\hat P(x_i|c)$.
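Continuing the weather example above (same assumed counts), with $N=2$ classes and $N_i=3$ possible values of Outlook:

$$\hat P(\text{N})=\frac{5+1}{14+2}=0.375,\qquad \hat P(\text{Sunny}\mid\text{N})=\frac{3+1}{5+3}=0.5$$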

Since the repeated multiplication easily causes floating-point underflow, the computation is usually carried out with logarithms. Putting everything together:

$$h_{nb}(\mathbf{x})=\arg\max_{c\in\mathcal{Y}}\log\hat P(c)+\sum\limits_{i=1}^d\log\hat P(x_i|c)$$
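A minimal sketch of why the log trick is needed (the numbers are illustrative, not from the original post): multiplying a few hundred typical probabilities underflows a double to exactly 0.0, while the corresponding log sum remains perfectly representable.

	public class UnderflowDemo {
		public static void main(String[] args) {
			double tempProduct = 1.0;
			double tempLogSum = 0.0;
			// 500 factors of 0.01 give 10^{-1000}, far below Double.MIN_VALUE (about 4.9e-324).
			for (int i = 0; i < 500; i++) {
				tempProduct *= 0.01;
				tempLogSum += Math.log(0.01);
			} // Of for i
			System.out.println("product = " + tempProduct); // Prints 0.0 due to underflow.
			System.out.println("log sum = " + tempLogSum); // Prints about -2302.6.
		}// Of main
	}// Of UnderflowDemo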
For continuous attribute values, substituting the Gaussian density and dropping the class-independent constant $-\frac{1}{2}\log(2\pi)$ gives:

$$h_{nb}(\mathbf{x})=\arg\max_{1\le i\le N}\log\hat P(c_i)+\sum\limits_{j=1}^d\left(-\log\sigma_{ij}-\frac{(x_j-\mu_{ij})^2}{2\sigma^2_{ij}}\right)$$
The code:

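The snippets below build on the Weka library's Instances and Instance classes; for completeness, the imports the class relies on would look like this (the exact Weka version and the surrounding package layout are assumptions):

	import java.io.FileReader;
	import java.util.Arrays;

	import weka.core.Instance;
	import weka.core.Instances;
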
	/**
	 * An inner class to store parameters.
	 */
	private class GaussianParameters {
		double mu;
		double sigma;

		public GaussianParameters(double paraMu, double paraSigma) {
			mu = paraMu;
			sigma = paraSigma;
		}// Of the constructor

		public String toString() {
			return "(" + mu + "," + sigma + ")";
		}// Of toString
	}// Of GaussianParameters

	/**
	 * The data.
	 */
	Instances dataset;

	/**
	 * The number of classes.
	 */
	int numClasses;

	/**
	 * The number of instances.
	 */
	int numInstances;

	/**
	 * The number of conditional attributes.
	 */
	int numConditions;

	/**
	 * The predictions, including queried and predicted labels.
	 */
	int[] predicts;

	/**
	 * Class distribution.
	 */
	double[] classDistribution;

	/**
	 * Class distribution with Laplacian smooth.
	 */
	double[] classDistributionLaplacian;

	/**
	 * The counts for all classes over all attributes on all values, used to
	 * calculate the conditional probabilities.
	 */
	double[][][] conditionalCounts;

	/**
	 * The conditional probabilities with Laplacian smooth.
	 */
	double[][][] conditionalProbabilitiesLaplacian;

	/**
	 * The Gaussian parameters.
	 */
	GaussianParameters[][] gaussianParameters;

	/**
	 * Data type.
	 */
	int dataType;

	/**
	 * Nominal.
	 */
	public static final int NOMINAL = 0;

	/**
	 * Numerical.
	 */
	public static final int NUMERICAL = 1;

The samples are read in the constructor:

	/**
	 *********************
	 * The constructor.
	 *
	 * @param paraFilename The given file.
	 *********************
	 */
	public NaiveBayes(String paraFilename) {
		dataset = null;
		try {
			FileReader fileReader = new FileReader(paraFilename);
			dataset = new Instances(fileReader);
			fileReader.close();
		} catch (Exception ee) {
			System.out.println("Cannot open the file: " + paraFilename + "\r\n" + ee);
			System.exit(0);
		} // Of try
		dataset.setClassIndex(dataset.numAttributes() - 1);
		numConditions = dataset.numAttributes() - 1;
		numInstances = dataset.numInstances();
		numClasses = dataset.attribute(numConditions).numValues();
	}// Of the constructor.

A setter for dataType:

	/**
	 ********************
	 * Set the data type.
	 *********************
	 */
	public void setDataType(int paraDataType) {
		dataType = paraDataType;
	}// Of setDataType

Computing $P(c)$ and $\hat P(c)$:
The array tempCounts records the total count for each class label, which corresponds to $|D_c|$ in the formulas above.

	/**
	 ********************
	 * Calculate the class distribution with Laplacian smooth.
	 *********************
	 */
	public void calculateClassDistribution() {
		classDistribution = new double[numClasses];
		classDistributionLaplacian = new double[numClasses];

		double[] tempCounts = new double[numClasses];
		for (int i = 0; i < numInstances; i++) {
			int tempClassValue = (int) dataset.instance(i).classValue();
			tempCounts[tempClassValue]++;
		} // Of for i

		for (int i = 0; i < numClasses; i++) {
			classDistribution[i] = tempCounts[i] / numInstances;
			classDistributionLaplacian[i] = (tempCounts[i] + 1) / (numInstances + numClasses);
		} // Of for i

		System.out.println("Class distribution: " + Arrays.toString(classDistribution));
		System.out.println("Class distribution Laplacian: " + Arrays.toString(classDistributionLaplacian));
	}// Of calculateClassDistribution
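As a sanity check (assuming the standard 14-instance weather data used in the example below, with 9 samples of class P and 5 of class N, in that value order), the plain estimates would be $9/14\approx 0.643$ and $5/14\approx 0.357$, while the Laplacian ones would be $(9+1)/(14+2)=0.625$ and $(5+1)/(14+2)=0.375$.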

For discrete (nominal) attributes:
Computing $\hat P(x_i|c)$.
Understanding conditionalCounts[][][] (the key point, and the tricky part):
The first dimension i is the class, the second dimension j is the attribute, and the third dimension k is a value of that attribute, so conditionalCounts[i][j][k] is the number of class-i samples whose attribute j takes the value k.
Take the weather data as an example:
(figure: the weather data set)
conditionalCounts['N']['Outlook']['Sunny'] = 3

	/**
	 ********************
	 * Calculate the conditional probabilities with Laplacian smooth. Only scan the
	 * data set once.
	 *********************
	 */
	public void calculateConditionalProbabilities() {
		conditionalCounts = new double[numClasses][numConditions][];
		conditionalProbabilitiesLaplacian = new double[numClasses][numConditions][];

		// Allocate space.
		for (int i = 0; i < numClasses; i++) {
			for (int j = 0; j < numConditions; j++) {
				int tempNumValues = (int) dataset.attribute(j).numValues();
				conditionalCounts[i][j] = new double[tempNumValues];
				conditionalProbabilitiesLaplacian[i][j] = new double[tempNumValues];
			} // Of for j
		} // Of for i

		// Count the numbers
		int[] tempClassCounts = new int[numClasses];
		for (int i = 0; i < numInstances; i++) {
			int tempClass = (int) dataset.instance(i).classValue();
			tempClassCounts[tempClass]++;
			for (int j = 0; j < numConditions; j++) {
				int tempValue = (int) dataset.instance(i).value(j);
				conditionalCounts[tempClass][j][tempValue]++;
			} // Of for j
		} // Of for i

		// Now for the real probability with Laplacian
		for (int i = 0; i < numClasses; i++) {
			for (int j = 0; j < numConditions; j++) {
				int tempNumValues = (int) dataset.attribute(j).numValues();
				for (int k = 0; k < tempNumValues; k++) {
					conditionalProbabilitiesLaplacian[i][j][k] = (conditionalCounts[i][j][k] + 1)
							/ (tempClassCounts[i] + tempNumValues);
				} // Of for k
			} // Of for j
		} // Of for i
		System.out.println("Conditional probabilities: " + Arrays.deepToString(conditionalCounts));
	}// Of calculationConditionalProbabilities

Classification:
Using $\hat P(c)$ and $\hat P(x_i|c)$ to evaluate $h_{nb}(\mathbf{x})=\arg\max_{c\in\mathcal{Y}}\log\hat P(c)+\sum\limits_{i=1}^d\log\hat P(x_i|c)$:

	/**
	 ********************
	 * Classify an instance with nominal data.
	 *********************
	 */
	public int classifyNominal(Instance paraInstance) {
		// Find the biggest one. Start from negative infinity so that any
		// log-probability, however small, can win.
		double tempBiggest = Double.NEGATIVE_INFINITY;
		int resultBestIndex = 0;
		for (int i = 0; i < numClasses; i++) {
			double tempPseudoProbability = Math.log(classDistributionLaplacian[i]);
			for (int j = 0; j < numConditions; j++) {
				int tempAttributeValue = (int) paraInstance.value(j);
				tempPseudoProbability += Math.log(conditionalProbabilitiesLaplacian[i][j][tempAttributeValue]);
			} // Of for j

			if (tempBiggest < tempPseudoProbability) {
				tempBiggest = tempPseudoProbability;
				resultBestIndex = i;
			} // Of if
		} // Of for i
		return resultBestIndex;
	}// Of classifyNominal
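A design note: thanks to the Laplacian smoothing, every entry of classDistributionLaplacian and conditionalProbabilitiesLaplacian is strictly positive, so Math.log never receives 0 here; with the raw frequency estimates, a single unseen value-class pair would drive the whole score to negative infinity.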

For continuous (numerical) attributes:
Each attribute under each class has its own pair of Gaussian parameters, hence:
gaussianParameters = new GaussianParameters[numClasses][numConditions];
Since $\mu$ is the mean, we first sum the values of a given attribute over the samples carrying the given class label, then divide by the number of such samples to obtain $\mu$, and finally use $\mu$ to compute $\sigma$.

	/**
	 ********************
	 * Calculate the Gaussian parameters (mu and sigma) for each class-attribute pair.
	 *********************
	 */
	public void calculateGaussianParameters() {
		gaussianParameters = new GaussianParameters[numClasses][numConditions];

		double[] tempValuesArray = new double[numInstances];
		int tempNumValues = 0;
		double tempSum = 0;

		for (int i = 0; i < numClasses; i++) {
			for (int j = 0; j < numConditions; j++) {
				tempSum = 0;

				// Obtain values for this class.
				tempNumValues = 0;
				for (int k = 0; k < numInstances; k++) {
					if ((int) dataset.instance(k).classValue() != i) {
						continue;
					} // Of if

					tempValuesArray[tempNumValues] = dataset.instance(k).value(j);
					tempSum += tempValuesArray[tempNumValues];
					tempNumValues++;
				} // Of for k

				// Obtain parameters.
				double tempMu = tempSum / tempNumValues;

				double tempSigma = 0;
				for (int k = 0; k < tempNumValues; k++) {
					tempSigma += (tempValuesArray[k] - tempMu) * (tempValuesArray[k] - tempMu);
				} // Of for k
				tempSigma /= tempNumValues;
				tempSigma = Math.sqrt(tempSigma);
				gaussianParameters[i][j] = new GaussianParameters(tempMu, tempSigma);
			} // Of for j
		} // Of for i
		System.out.println(Arrays.deepToString(gaussianParameters));
	}// Of calculateGaussianParameters
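One caveat: if some attribute is constant within a class, tempSigma comes out as 0, and classifyNumerical would then divide by zero and take the logarithm of 0. A common remedy, sketched here as a hypothetical change (it is not in the original code), is to smooth the variance with a small epsilon, replacing the two lines that normalize and take the square root:

	// Hypothetical safeguard: variance smoothing keeps sigma strictly positive
	// even when an attribute is constant within a class.
	tempSigma = Math.sqrt(tempSigma / tempNumValues + 1e-9);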

Classification:

	/**
	 ********************
	 * Classify an instance with numerical data.
	 *********************
	 */
	public int classifyNumerical(Instance paraInstance) {
		// Find the biggest one. Start from negative infinity, since the
		// log-density scores can be arbitrarily negative.
		double tempBiggest = Double.NEGATIVE_INFINITY;
		int resultBestIndex = 0;

		for (int i = 0; i < numClasses; i++) {
			double tempPseudoProbability = Math.log(classDistributionLaplacian[i]);
			for (int j = 0; j < numConditions; j++) {
				double tempAttributeValue = paraInstance.value(j);
				double tempSigma = gaussianParameters[i][j].sigma;
				double tempMu = gaussianParameters[i][j].mu;

				tempPseudoProbability += -Math.log(tempSigma)
						- (tempAttributeValue - tempMu) * (tempAttributeValue - tempMu) / (2 * tempSigma * tempSigma);
			} // Of for j

			if (tempBiggest < tempPseudoProbability) {
				tempBiggest = tempPseudoProbability;
				resultBestIndex = i;
			} // Of if
		} // Of for i
		return resultBestIndex;
	}// Of classifyNumerical

Testing on the whole data set (note: since every instance also participated in training, this measures resubstitution accuracy on the training set rather than true leave-one-out performance):

	/**
	 ********************
	 * Classify all instances; the results are stored in predicts[].
	 *********************
	 */
	public void classify() {
		predicts = new int[numInstances];
		for (int i = 0; i < numInstances; i++) {
			predicts[i] = classify(dataset.instance(i));
		} // Of for i
	}// Of classify

classify() dispatches to the two classification methods according to the data type:

	/**
	 ********************
	 * Classify an instance.
	 *********************
	 */
	public int classify(Instance paraInstance) {
		if (dataType == NOMINAL) {
			return classifyNominal(paraInstance);
		} else if (dataType == NUMERICAL) {
			return classifyNumerical(paraInstance);
		} // Of if
		return -1;
	}// Of classify

Testing nominal data:

	/**
	 ********************
	 * Test nominal data.
	 *********************
	 */
	public static void testNominal() {
		System.out.println("Hello, Naive Bayes. I only want to test the nominal data.");
		String tempFilename = "F:/sampledataMain/mushroom.arff";
		NaiveBayes tempLearner = new NaiveBayes(tempFilename);
		tempLearner.setDataType(NOMINAL);
		tempLearner.calculateClassDistribution();
		tempLearner.calculateConditionalProbabilities();
		tempLearner.classify();

		System.out.println("The accuracy is: " + tempLearner.computeAccuracy());
	}// Of testNominal

Testing numerical data:

	/**
	 ********************
	 * Test numerical data.
	 *********************
	 */
	public static void testNumerical() {
		System.out.println("Hello, Naive Bayes. I only want to test the numerical data with Gaussian assumption.");
		String tempFilename = "F:/sampledataMain/iris.arff";

		NaiveBayes tempLearner = new NaiveBayes(tempFilename);
		tempLearner.setDataType(NUMERICAL);
		tempLearner.calculateClassDistribution();
		tempLearner.calculateGaussianParameters();
		tempLearner.classify();

		System.out.println("The accuracy is: " + tempLearner.computeAccuracy());
	}// Of testNumerical

Computing the accuracy:

	/**
	 ********************
	 * Compute accuracy.
	 *********************
	 */
	public double computeAccuracy() {
		double tempCorrect = 0;
		for (int i = 0; i < numInstances; i++) {
			if (predicts[i] == (int) dataset.instance(i).classValue()) {
				tempCorrect++;
			} // Of if
		} // Of for i

		double resultAccuracy = tempCorrect / numInstances;
		return resultAccuracy;
	}// Of computeAccuracy

The main method:

	/**
	 ********************
	 * The entrance of the program.
	 * 
	 * @param args Not used now.
	 *********************
	 */
	public static void main(String[] args) {
		testNominal();
		testNumerical();
	}// Of main

Output:
(figure: screenshot of the run results)
The naive Bayes algorithm's feature-independence assumption makes estimating the conditional probabilities relatively simple, but it also sacrifices some accuracy. For continuous attributes we must first assume a distributional form based on how the values are spread before we can estimate any parameters, so the accuracy depends heavily on how well the assumed distribution matches the true one.
Like the kNN algorithm covered earlier, naive Bayes is a supervised learning method. However, kNN predicts the class directly from the feature vectors (a discriminative model), whereas naive Bayes models the probability distribution, estimates its parameters, and only then derives the class (a generative model). Compared with naive Bayes, kNN can also perform regression via its k nearest neighbors, while naive Bayes is limited to classification.
