Raki's Statistical Learning Methods Notes, Chapter 0x4: Naive Bayes

Naive Bayes (naïve Bayes) is a classification method based on Bayes' theorem and the assumption of conditional independence among features. For a given training data set, it first learns the joint probability distribution of inputs and outputs under the conditional independence assumption; then, based on this model, for a given input $x$ it uses Bayes' theorem to find the output $y$ with the largest posterior probability. Naive Bayes is simple to implement, and both learning and prediction are very efficient, which makes it a commonly used method.

Model

Naive Bayes learns the joint probability distribution $P(X,Y)$ from the training data set. Specifically, it learns the following prior probability distribution and conditional probability distribution.
Prior probability distribution:
$$P(Y=c_k), \quad k = 1,2,...,K$$

Conditional probability distribution:
$$P(X=x \mid Y=c_k) = P(X^{(1)}=x^{(1)}, X^{(2)}=x^{(2)}, ..., X^{(n)}=x^{(n)} \mid Y=c_k), \quad k = 1,2,...,K$$
The joint probability distribution $P(X,Y)$ is thus learned.
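Writing out the factorization makes it clear why these two pieces are enough to recover the joint distribution:
$$P(X=x, Y=c_k) = P(Y=c_k)\,P(X=x \mid Y=c_k)$$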

The conditional probability distribution $P(X=x \mid Y=c_k)$ has an exponentially large number of parameters, so estimating it directly is infeasible in practice. Indeed, suppose $x^{(j)}$ can take $S_j$ distinct values, $j=1,2,...,n$, and $Y$ can take $K$ values; then the number of parameters is $K\prod_{j=1}^n S_j$.
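As a quick illustration (the numbers here are assumed purely for the example): with $K=2$ classes and $n=30$ binary features ($S_j=2$), the full conditional distribution already has
$$K\prod_{j=1}^{n}S_j = 2\cdot 2^{30} \approx 2\times 10^{9}$$
parameters, whereas under the conditional independence assumption introduced below only $K\sum_{j=1}^{n}S_j = 120$ conditional probabilities have to be estimated.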

Naive Bayes imposes a conditional independence assumption on the conditional probability distribution. Since this is a fairly strong assumption, it is where the "naive" in naive Bayes comes from. Specifically, the conditional independence assumption is:
$$\begin{aligned} P(X=x \mid Y=c_k) &= P(X^{(1)}=x^{(1)}, X^{(2)}=x^{(2)}, ..., X^{(n)}=x^{(n)} \mid Y=c_k) \\ &= \prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k) \end{aligned}$$

Naive Bayes actually learns the mechanism by which the data are generated, so it is a generative model. The conditional independence assumption says that, once the class is fixed, the features used for classification are all conditionally independent. This assumption makes naive Bayes simple, but it can sacrifice some classification accuracy.

By Bayes' theorem, the posterior probability is:
$$P(Y=c_k \mid X=x) = \frac{P(X=x \mid Y=c_k)\,P(Y=c_k)}{\sum_k P(X=x \mid Y=c_k)\,P(Y=c_k)}$$

Substituting the conditional independence assumption into the posterior, the naive Bayes classifier can be written as:
$$y = \mathop{\arg\max}\limits_{c_k} \frac{P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k)}{\sum_k P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k)}, \quad k = 1,2,...,K$$

Because the denominator above is the same for every $c_k$, this simplifies to:
$$y = \mathop{\arg\max}\limits_{c_k} P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k)$$
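A minimal Python sketch of this decision rule, assuming the prior and conditional probabilities have already been estimated and stored in two dictionaries (the names `nb_predict`, `prior`, and `cond_prob` are invented for this sketch; a matching fitting routine is sketched in the next section):

```python
def nb_predict(prior, cond_prob, x):
    """Naive Bayes decision rule: y = argmax_{c_k} P(Y=c_k) * prod_j P(X^(j)=x^(j) | Y=c_k).

    prior     : dict mapping class label c_k -> P(Y=c_k)
    cond_prob : dict mapping (j, feature value, class label) -> P(X^(j)=x^(j) | Y=c_k)
    x         : tuple of feature values (x^(1), ..., x^(n))
    """
    scores = {}
    for c, p_c in prior.items():
        score = p_c
        for j, xj in enumerate(x):
            # A feature value never seen with class c gets probability 0 under plain MLE;
            # Laplace smoothing would avoid this, but it is outside the scope of this note.
            score *= cond_prob.get((j, xj, c), 0.0)
        scores[c] = score
    return max(scores, key=scores.get)  # class with the largest unnormalized posterior
```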

Learning Strategy

In naive Bayes, learning means estimating $P(Y=c_k)$ and $P(X^{(j)}=x^{(j)} \mid Y=c_k)$. Maximum likelihood estimation can be used to estimate these probabilities.
The maximum likelihood estimate of the prior probability $P(Y=c_k)$ is:
$$P(Y=c_k) = \frac{\sum_{i=1}^N I(y_i=c_k)}{N}, \quad k=1,2,...,K$$
Suppose the set of possible values of the $j$-th feature $x^{(j)}$ is $\{a_{j1}, a_{j2}, ..., a_{jS_j}\}$. The maximum likelihood estimate of the conditional probability $P(X^{(j)}=a_{jl} \mid Y=c_k)$ is:

$$\begin{aligned} & P(X^{(j)}=a_{jl} \mid Y=c_k) = \frac{\sum_{i=1}^N I(x_i^{(j)}=a_{jl},\, y_i=c_k)}{\sum_{i=1}^N I(y_i=c_k)} \\ & j=1,2,...,n; \quad l=1,2,...,S_j; \quad k=1,2,...,K \end{aligned}$$
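These estimates are just normalized counts, so the fitting step only needs a couple of counters. Here is a sketch under the same assumptions as before (the function name `nb_fit_mle` and the dictionary layout are invented, chosen to match the `nb_predict` sketch above):

```python
from collections import Counter

def nb_fit_mle(X, y):
    """Maximum likelihood estimates of the naive Bayes parameters.

    X : list of N samples, each a tuple (x^(1), ..., x^(n)) of discrete feature values
    y : list of N class labels
    Returns (prior, cond_prob) in the layout expected by nb_predict above.
    """
    N = len(y)
    class_count = Counter(y)                                # sum_i I(y_i = c_k)
    prior = {c: cnt / N for c, cnt in class_count.items()}  # P(Y = c_k)

    joint_count = Counter()                                 # sum_i I(x_i^(j) = a_jl, y_i = c_k)
    for xi, yi in zip(X, y):
        for j, xij in enumerate(xi):
            joint_count[(j, xij, yi)] += 1

    cond_prob = {(j, a, c): cnt / class_count[c]            # P(X^(j) = a_jl | Y = c_k)
                 for (j, a, c), cnt in joint_count.items()}
    return prior, cond_prob
```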

Learning Algorithm

Input: training data $T = \{(x_1,y_1),(x_2,y_2),...,(x_N,y_N)\}$
Output: the class of instance $x$
(1) Compute the prior and conditional probabilities:
$$\begin{aligned} & P(Y=c_k) = \frac{\sum_{i=1}^N I(y_i=c_k)}{N}, \quad k=1,2,...,K \\ & P(X^{(j)}=a_{jl} \mid Y=c_k) = \frac{\sum_{i=1}^N I(x_i^{(j)}=a_{jl},\, y_i=c_k)}{\sum_{i=1}^N I(y_i=c_k)} \\ & j=1,2,...,n; \quad l=1,2,...,S_j; \quad k=1,2,...,K \end{aligned}$$
(2) For a given instance $x = (x^{(1)}, x^{(2)}, ..., x^{(n)})^T$, compute:
$$P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k), \quad k=1,2,...,K$$
(3) Determine the class of instance $x$:
$$y = \mathop{\arg\max}\limits_{c_k} P(Y=c_k)\prod_{j=1}^n P(X^{(j)}=x^{(j)} \mid Y=c_k)$$
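Putting the two sketches above together, here is a toy end-to-end run of steps (1)-(3) (the data set and the expected prediction are made up purely for illustration):

```python
# Toy training set: two discrete features, labels in {1, -1} (values invented for illustration).
X_train = [(1, 'S'), (1, 'M'), (1, 'M'), (1, 'S'), (1, 'S'),
           (2, 'S'), (2, 'M'), (2, 'M'), (2, 'L'), (2, 'L'),
           (3, 'L'), (3, 'M'), (3, 'M'), (3, 'L'), (3, 'L')]
y_train = [-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1]

prior, cond_prob = nb_fit_mle(X_train, y_train)   # step (1): estimate the probabilities
print(nb_predict(prior, cond_prob, (2, 'S')))     # steps (2)-(3): prints -1 for this data
```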

Typing all of this LaTeX out was quite a workout.
