Logistic Regression: Notes on Model Construction, Parameter Estimation, and Interpreting Results

Please do not repost. Thank you.

I. Understanding Logistic Regression from the Generalized Linear Model Perspective

1. The log-odds model (logit model)

  • Odds and log-odds
      The odds are not a probability; they are the ratio of the probability that an event occurs to the probability that it does not. If an event occurs with probability p, it fails to occur with probability 1-p, and its odds are:
    $$odd(p)=\frac{p}{1-p}$$
    Taking the natural logarithm of the odds gives the log-odds (logit) of the event:
    $$logit(p) = \ln\frac{p}{1-p}$$
  • The log-odds model
      If we treat the log-odds as a function and use it as the link function, i.e. $g(y)=\ln\frac{y}{1-y}$, the corresponding generalized linear model is:
    $$g(y)=\ln\frac{y}{1-y}=\hat w^T \cdot \hat x$$

This model is called log-odds regression, better known as logistic regression.

2. Logistic regression and the Sigmoid function

  • The log-odds function and the Sigmoid function

    "Inverting" the log-odds model, we rewrite it in the form $y=f(x)$:

    Exponentiating both sides (base e):
    $$\frac{y}{1-y}=e^{\hat w^T \cdot \hat x}$$
    Adding 1 to both sides:
    $$\frac{y+(1-y)}{1-y}=\frac{1}{1-y}=e^{\hat w^T \cdot \hat x}+1$$
    Taking the reciprocal of both sides:
    $$1-y=\frac{1}{e^{\hat w^T \cdot \hat x}+1}$$
    Subtracting both sides from 1:
    $$\begin{aligned} y &= 1-\frac{1}{e^{\hat w^T \cdot \hat x}+1}\\ &=\frac{e^{\hat w^T \cdot \hat x}}{e^{\hat w^T \cdot \hat x}+1} \\ &=\frac{1}{1+e^{-(\hat w^T \cdot \hat x)}} = g^{-1}(\hat w^T \cdot \hat x) \end{aligned}$$

    Therefore, the basic model equation of logistic regression is
    $$y = \frac{1}{1+e^{-(\hat w^T \cdot \hat x)}}$$
    We can also see that the inverse of the log-odds function, the Sigmoid function, is:
    $$f(x) = \frac{1}{1+e^{-x}}$$

3. Properties of the Sigmoid function

The Sigmoid function has the following properties:

| Property | Description |
| --- | --- |
| Monotonicity | Monotonically increasing |
| Rate of change | Largest at 0; decreases the farther we move from 0 |
| Range | (0, 1) |
| Convexity | 0 is an inflection point; the function is convex for x < 0 and concave for x > 0 |

[Figure: graph of the Sigmoid function]

Let:
$$Sigmoid(x) = \frac{1}{1+e^{-x}}$$
Differentiating:
$$\begin{aligned} Sigmoid'(x) &= \left(\frac{1}{1+e^{-x}}\right)' \\ &=\left((1+e^{-x})^{-1}\right)' \\ &=(-1)(1+e^{-x})^{-2} \cdot (e^{-x})' \\ &=(1+e^{-x})^{-2}\,e^{-x} \\ &=\frac{e^{-x}}{(1+e^{-x})^{2}} \\ &=\frac{e^{-x}+1-1}{(1+e^{-x})^{2}} \\ &=\frac{1}{1+e^{-x}} - \frac{1}{(1+e^{-x})^2} \\ &=\frac{1}{1+e^{-x}}\left(1-\frac{1}{1+e^{-x}}\right) \\ &=Sigmoid(x)(1-Sigmoid(x)) \end{aligned}$$
[Figure: graph of the Sigmoid derivative]
  The Sigmoid derivative is positive over the whole real line; it first increases and then decreases, attaining its maximum at 0.

  A quick look at the second derivative of the Sigmoid function shows that it is positive for x < 0 (the first derivative is increasing) and negative for x > 0 (the first derivative is decreasing). Hence 0 is also an inflection point of the Sigmoid function.
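The following is a minimal sketch (in Python with NumPy, not part of the original notes) that defines the Sigmoid function and numerically checks the identity $Sigmoid'(x) = Sigmoid(x)(1-Sigmoid(x))$ against a central-difference estimate.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    """Analytic derivative: Sigmoid(x) * (1 - Sigmoid(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.linspace(-6, 6, 13)
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # central difference

print(np.allclose(sigmoid_grad(x), numeric, atol=1e-8))  # True
print(sigmoid_grad(0.0))  # 0.25: the derivative's maximum, attained at 0
```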

II. Parameter Estimation for Logistic Regression: Maximum Likelihood, Relative Entropy, and the Cross-Entropy Loss

Recall the basic logistic regression equation:
$$y = \frac{1}{1+e^{-(\hat w^T \cdot \hat x)}}$$

1. The basic idea behind parameter estimation

Constructing the loss function

  Suppose we have the following simple dataset:

| x | y |
| --- | --- |
| 1 | 0 |
| 3 | 1 |

Since there is only one feature, the logistic regression model can be written as:
$$y=sigmoid(wx+b)=\frac{1}{1+e^{-(wx+b)}}$$

Treating the model output as a probability and plugging in the two data points gives:
$$p(y=1|x=1)=\frac{1}{1+e^{-(w+b)}}$$
$$p(y=1|x=3)=\frac{1}{1+e^{-(3w+b)}}$$

| x | y | 1-predict | 0-predict |
| --- | --- | --- | --- |
| 1 | 0 | $\frac{1}{1+e^{-(w+b)}}$ | $\frac{e^{-(w+b)}}{1+e^{-(w+b)}}$ |
| 3 | 1 | $\frac{1}{1+e^{-(3w+b)}}$ | $\frac{e^{-(3w+b)}}{1+e^{-(3w+b)}}$ |

Wanting the model's predictions to be as accurate as possible is equivalent to wanting the two probabilities $p(y=0|x=1)$ and $p(y=1|x=3)$ to be as large as possible. Since loss functions are conventionally minimized, we turn the maximization into minimizing the corresponding negative quantity, and convert the product of probabilities into a sum of logarithms:

$$\begin{aligned} LogitLoss(w, b)&=-\ln(p(y=1|x=3))-\ln(p(y=0|x=1)) \\ &=-\ln\left(\frac{1}{1+e^{-(3w+b)}}\right)- \ln\left(\frac{e^{-(w+b)}}{1+e^{-(w+b)}}\right) \\ &=\ln(1+e^{-(3w+b)})+\ln(1+e^{(w+b)}) \\ &=\ln(1+e^{-(3w+b)}+e^{(w+b)}+e^{-2w}) \end{aligned}$$
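As a quick illustration (added here, not part of the original notes), the sketch below evaluates this two-sample loss in Python and minimizes it numerically with `scipy.optimize.minimize`; the variable names `w` and `b` mirror the formulas above, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize

def logit_loss(params):
    """Negative log-likelihood for the toy dataset (x=1, y=0) and (x=3, y=1)."""
    w, b = params
    p_x1 = 1.0 / (1.0 + np.exp(-(w * 1 + b)))   # p(y=1 | x=1)
    p_x3 = 1.0 / (1.0 + np.exp(-(w * 3 + b)))   # p(y=1 | x=3)
    return -np.log(p_x3) - np.log(1.0 - p_x1)

res = minimize(logit_loss, x0=np.zeros(2))
# The two points are separable, so the optimizer pushes w up with b roughly -2w,
# driving the loss toward 0 until it hits its convergence tolerance.
print(res.x, res.fun)
```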

  • Why not build the loss with an SSE-style criterion, i.e. compute $||y-\hat y||_2^2=\left\|y-\frac{1}{1+e^{-(\hat w^T \cdot \hat x)}}\right\|_2^2$?

  This approach is generally not used, for a fundamental reason: it can be shown mathematically that when y is a 0-1 class label, the squared loss $||y-\hat y||_2^2$ for logistic regression is not a convex function of the parameters, and a non-convex loss makes the subsequent search for the optimal parameters considerably harder. By contrast, the loss built from the product of probabilities is convex, so its global minimum can be found efficiently.

  Second, in constructing the loss we convert the product of probabilities into a sum of logarithms. An important reason is numerical: when the loss is built over a large dataset, every sample contributes one factor to the product, and each factor lies in (0, 1), so the product can easily become an extremely small number. General-purpose computing frameworks have limited floating-point precision, so much of that precision can be lost during the multiplication; switching to a sum of logarithms avoids the problem, as the sketch after this paragraph shows.
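A small demonstration (added here for illustration, assuming NumPy's default float64): multiplying many probabilities underflows to 0, while summing their logarithms stays well-behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.99, size=10_000)   # 10,000 per-sample probabilities in (0, 1)

prod = np.prod(p)            # underflows to 0.0 in float64
log_sum = np.sum(np.log(p))  # a finite negative number (on the order of -10^4)

print(prod)      # 0.0
print(log_sum)   # finite, and perfectly usable as a loss
```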

2. Parameter estimation via maximum likelihood

The logistic regression model: $y = \frac{1}{1+e^{-(\hat w^T \cdot \hat x)}}$
where $\hat w = [w_1,w_2,...,w_d, b]^T$ and $\hat x = [x_1,x_2,...,x_d, 1]^T$.

The derivation proceeds in four steps:

  • Step 1. Determine the likelihood terms

    $$p_1(\hat x;\hat w)=p(y=1|\hat x;\hat w)$$
    $$p_0(\hat x;\hat w)=1-p(y=1|\hat x;\hat w)$$
    The likelihood term for the $i$-th sample can therefore be written as:
    $$p_1(\hat x;\hat w)^{y_i} \cdot p_0(\hat x;\hat w)^{(1-y_i)}$$

  • Step 2. Build the likelihood function by multiplying the likelihood terms:
    $$\prod^N_{i=1}\left[p_1(\hat x;\hat w)^{y_i} \cdot p_0(\hat x;\hat w)^{(1-y_i)}\right]$$

  • Step 3. Take logarithms and negate, so that the subsequent optimization becomes a minimization:
    $$\begin{aligned} L(\hat w) &= -\ln\left(\prod^N_{i=1}\left[p_1(\hat x;\hat w)^{y_i} \cdot p_0(\hat x;\hat w)^{(1-y_i)}\right]\right) \\ &= \sum^N_{i=1}\left[-y_i \cdot \ln(p_1(\hat x;\hat w))-(1-y_i) \cdot \ln(p_0(\hat x;\hat w))\right] \\ &= \sum^N_{i=1}\left[-y_i \cdot \ln(p_1(\hat x;\hat w))-(1-y_i) \cdot \ln(1-p_1(\hat x;\hat w))\right] \end{aligned}$$

  • Step 4. Minimize the negative log-likelihood
      It can be shown that the loss function obtained from maximum likelihood estimation is convex, so in principle we could set the partial derivatives to zero and solve the resulting system of equations, which is the standard way maximum likelihood estimates are derived. However, that route involves a large amount of differentiation and equation solving and does not scale to large or very large numerical problems. In machine learning we therefore usually minimize the logistic regression loss with more general-purpose optimization methods, typically Newton's method or gradient descent; a small code sketch of this loss follows.
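A minimal sketch (added for illustration) of the negative log-likelihood $L(\hat w)$ as a function of predicted probabilities and labels; the array `p1` is assumed to come from the Sigmoid model above.

```python
import numpy as np

def neg_log_likelihood(y, p1, eps=1e-12):
    """L(w) = sum_i [ -y_i*ln(p1_i) - (1-y_i)*ln(1-p1_i) ].

    y  : array of 0/1 labels
    p1 : array of predicted probabilities p(y=1 | x_i; w)
    eps: small constant to avoid log(0)
    """
    p1 = np.clip(p1, eps, 1.0 - eps)
    return float(np.sum(-y * np.log(p1) - (1.0 - y) * np.log(1.0 - p1)))

y = np.array([0, 1, 1, 0])
p1 = np.array([0.2, 0.8, 0.7, 0.4])
print(neg_log_likelihood(y, p1))  # ≈ 1.31
```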

3. Entropy, relative entropy, and cross entropy

Another way to construct the logistic regression loss is via relative entropy (also known as KL divergence).

3.1 Entropy: basic concept and formula

  Entropy is used to quantify the uncertainty of a random variable, i.e. how disordered a system or its information is. It is computed as:
$$H(X) = -\sum^n_{i=1}p(x_i)\log(p(x_i))$$

Why does information have a unit, and why is entropy computed with a logarithm?

Here $p(x_i)$ is the probability of the $i$-th class in a multi-class problem and $n$ is the number of classes. Information entropy is conventionally computed with base-2 logarithms, with the convention $0\log 0=0$. As an example of the computation, suppose binary-classification dataset 1 has the following labels:

| index | labels |
| --- | --- |
| 1 | 0 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |

In the entropy calculation $n=2$. Let $p(x_1)$ be the probability of class 0 and $p(x_2)$ the probability of class 1 (or vice versa); then $p(x_1)=\frac{1}{4}$ and $p(x_2)=\frac{3}{4}$, and the dataset's entropy is:
$$\begin{aligned} H(X) &= -\left(p(x_1)\log(p(x_1))+p(x_2)\log(p(x_2))\right) \\ &=-\frac{1}{4}\log\left(\frac{1}{4}\right)-\frac{3}{4}\log\left(\frac{3}{4}\right)\\ &\approx 0.81 \end{aligned}$$

3.2 Basic properties of entropy

  It can be shown that for binary labels (with base-2 logarithms) the entropy lies in [0, 1], and that a larger entropy means a more disordered system with more mixed information. For example, consider the following two datasets. In the first, there are 4 samples and classes 0 and 1 each account for 50%.

| index | labels |
| --- | --- |
| 1 | 0 |
| 2 | 1 |
| 3 | 0 |
| 4 | 1 |

For this dataset the entropy is
$$H(X) = -\frac{1}{2}\log\left(\frac{1}{2}\right)-\frac{1}{2}\log\left(\frac{1}{2}\right)=1$$
which is the maximum possible value.

As another example:

| index | labels |
| --- | --- |
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |

its entropy is
$$H(X) = -\frac{4}{4}\log\left(\frac{4}{4}\right)-\frac{0}{4}\log\left(\frac{0}{4}\right)=0$$
the minimum possible value, which means the labels are completely certain and the information is perfectly ordered.

  Taken together, the three examples show that entropy is high when the label values are evenly mixed and low when the labels are highly pure. The sketch below reproduces the three calculations.
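A small sketch (added for illustration) that computes the base-2 entropy of a 0/1 label array and reproduces the three values above (≈0.81, 1, 0):

```python
import numpy as np

def entropy(labels):
    """Base-2 entropy of a discrete label array, with the convention 0*log(0)=0."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))  # classes with zero count never appear here

print(entropy([0, 1, 1, 1]))  # ≈ 0.811
print(entropy([0, 1, 0, 1]))  # 1.0
print(entropy([1, 1, 1, 1]))  # 0.0 (the absent class is dropped, matching 0*log0 = 0)
```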

3.3 Relative entropy and cross entropy

  Relative entropy is also called Kullback-Leibler divergence (KL divergence) or information divergence. It is used to measure how different two probability distributions are. Suppose that for the same random variable X we have two separate distributions P(x) and Q(x); when X is discrete, their difference can be measured with the relative entropy:
$$D_{KL}(P||Q)=\sum ^n_{i=1}P(x_i)\log\left(\frac{P(x_i)}{Q(x_i)}\right)$$
As with entropy, a smaller relative entropy means Q(x) is closer to P(x).
  The formula for relative entropy also shows that it is an asymmetric measure, i.e. $D_{KL}(P||Q) \neq D_{KL}(Q||P)$. In essence, relative entropy quantifies how hard it is to describe the distribution P using the distribution Q. In machine learning we generally let Q be the model output and P the true distribution of the labels, and use it to judge whether the model output is close enough to reality.

With Q as the fitted distribution and P as the true distribution, this is also known as the forward KL divergence.

The relative entropy above can be rewritten as:

$$\begin{aligned} D_{KL}(P||Q)&=\sum ^n_{i=1}P(x_i)\log\left(\frac{P(x_i)}{Q(x_i)}\right) \\ &=\sum ^n_{i=1}P(x_i)\log(P(x_i))-\sum ^n_{i=1}P(x_i)\log(Q(x_i)) \\ &=-H(P(x))+\left[-\sum ^n_{i=1}P(x_i)\log(Q(x_i))\right] \end{aligned}$$
  For a given dataset the entropy $H(P(X))$ is fixed, so the relative entropy is entirely determined by $-\sum ^n_{i=1}P(x_i)\log(Q(x_i))$. This quantity is called the cross entropy:
$$cross\_entropy(P,Q) = -\sum ^n_{i=1}P(x_i)\log(Q(x_i))$$
  Therefore, if we want the distributions P and Q to be as close as possible, we need to make the relative entropy as small as possible. Since relative entropy = cross entropy − entropy, and the entropy term is fixed, this amounts to minimizing the cross entropy. For the same reason, cross entropy serves as an important measure of how close the model's output distribution is to the true distribution.

By Gibbs' inequality, the relative entropy is always greater than or equal to zero; it equals 0 exactly when the predicted distribution matches the true distribution, in which case the cross entropy equals the data's entropy. Whenever the two distributions differ, the cross entropy is strictly larger than the entropy.
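A brief sketch (added here; the distributions `P` and `Q` are made-up examples) verifying numerically that $D_{KL}(P||Q)$ equals cross entropy minus entropy and is non-negative:

```python
import numpy as np

def cross_entropy(P, Q):
    return float(-np.sum(P * np.log2(Q)))

def entropy(P):
    return float(-np.sum(P * np.log2(P)))

def kl_divergence(P, Q):
    return float(np.sum(P * np.log2(P / Q)))

P = np.array([0.25, 0.75])   # "true" distribution
Q = np.array([0.40, 0.60])   # "fitted" distribution

print(kl_divergence(P, Q))               # >= 0 (Gibbs' inequality)
print(cross_entropy(P, Q) - entropy(P))  # same value: KL = CE - H
print(np.isclose(kl_divergence(P, Q), cross_entropy(P, Q) - entropy(P)))  # True
```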

The cross-entropy loss function
  • Single-sample cross entropy. Suppose the dataset is as follows:

| x | labels | predicts |
| --- | --- | --- |
| 1 | 1 | 0.8 |
| 2 | 0 | 0.3 |
| 3 | 0 | 0.4 |
| 4 | 1 | 0.7 |

We can rewrite it in the following form:

| x | class A | class B |
| --- | --- | --- |
| 1 | 0 | 1 |
| predicts | 0.2 | 0.8 |
| 2 | 1 | 0 |
| predicts | 0.7 | 0.3 |
| 3 | 1 | 0 |
| predicts | 0.6 | 0.4 |
| 4 | 0 | 1 |
| predicts | 0.3 | 0.7 |

Here A and B are the classes each sample may belong to. For this dataset, the cross entropy of the first sample is:
$$cross\_entropy = -0 \cdot \log(0.2)-1 \cdot \log(0.8) \approx 0.32$$

  • Multi-sample cross entropy. The overall cross entropy is simply the mean of the per-sample cross entropies:
    $$\frac{-1 \cdot \log(0.8) -1 \cdot \log(0.7) -1 \cdot \log(0.6) -1 \cdot \log(0.7)}{4} \approx 0.52$$

  Accordingly, the multi-sample cross entropy formula is:
$$cross\_entropy = -\frac{1}{m}\sum ^m_{j=1} \sum^n_{i=1}p_{ij}\log(q_{ij})$$

where m is the number of samples and n the number of classes. A short sketch of these calculations follows.
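A minimal sketch (added for illustration) that reproduces the two numbers above with base-2 logarithms; `labels_onehot` and `predicts` follow the rewritten table:

```python
import numpy as np

labels_onehot = np.array([[0, 1],   # sample 1: class B
                          [1, 0],   # sample 2: class A
                          [1, 0],   # sample 3: class A
                          [0, 1]])  # sample 4: class B
predicts = np.array([[0.2, 0.8],
                     [0.7, 0.3],
                     [0.6, 0.4],
                     [0.3, 0.7]])

per_sample = -np.sum(labels_onehot * np.log2(predicts), axis=1)
print(per_sample[0])      # ≈ 0.32, cross entropy of the first sample
print(per_sample.mean())  # ≈ 0.52, multi-sample cross entropy
```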

  • Comparison with the maximum likelihood loss
    $$L(\hat w)= \sum^N_{i=1}\left[-y_i \cdot \ln(p_1(\hat x;\hat w))-(1-y_i) \cdot \ln(1-p_1(\hat x;\hat w))\right]$$

Plugging in the same data gives:
$$-\ln(0.8)-\ln(0.7)-\ln(0.6)-\ln(0.7) \approx 1.45$$

Although the specific numbers differ (the likelihood version uses natural logarithms and a sum, while the cross entropy above uses base-2 logarithms and a mean), the procedure is the same: take the logarithm of the probability the model assigns to the true class, sum over samples, and negate. In practice, building the loss from maximum likelihood and building it from cross entropy are therefore equivalent; both losses describe how far the model's predictions deviate from the true labels. In machine learning, the cross-entropy formulation is the more common one.

  • Binary cross-entropy loss
      Combining the maximum likelihood formula with the cross-entropy procedure, we obtain the binary cross-entropy (BCE) loss (a vectorized sketch follows below):
    $$binaryCE(\hat w)= -\frac{1}{m}\sum^m_{i=1}\left[y_i \cdot \log(p_1(\hat x;\hat w))+(1-y_i) \cdot \log(1-p_1(\hat x;\hat w))\right]$$
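Below is a minimal vectorized sketch of this BCE loss (added here; it assumes the feature matrix `X_hat` already carries a trailing column of ones, matching the $\hat x$ convention used in these notes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(w_hat, X_hat, y, eps=1e-12):
    """BCE(w) = -1/m * sum_i [ y_i*log(p1_i) + (1-y_i)*log(1-p1_i) ].

    X_hat : (m, n+1) feature matrix whose last column is all ones
    w_hat : (n+1,) parameter vector [w_1, ..., w_n, b]
    y     : (m,) array of 0/1 labels
    """
    p1 = np.clip(sigmoid(X_hat @ w_hat), eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p1) + (1.0 - y) * np.log(1.0 - p1)))
```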
Gradient of the logistic regression loss function

For the BCE loss above, the gradient is:
$$\nabla _{\hat w} BCE(\hat w) = \frac{\partial BCE(\hat w)}{\partial \hat w}$$
where $\hat w=[w_1, w_2, w_3, ..., w_n, b]$, so the gradient expands to:
$$\nabla _{\hat w} BCE(\hat w) = \frac{\partial BCE(\hat w)}{\partial \hat w}= \left[\frac{\partial BCE(\hat w)}{\partial w_1}, \frac{\partial BCE(\hat w)}{\partial w_2}, ..., \frac{\partial BCE(\hat w)}{\partial w_n}, \frac{\partial BCE(\hat w)}{\partial b}\right]^T$$

Since $\log(p_1(\hat x^{(i)}; \hat w)) = \log(Sigmoid(\hat x^{(i)} \cdot \hat w))$, differentiating gives:
$$\log(p_1(\hat x^{(i)}; \hat w))' =\log(Sigmoid(\hat x^{(i)} \cdot \hat w))'=\frac{1}{Sigmoid(\hat x^{(i)} \cdot \hat w)} \cdot Sigmoid'(\hat x^{(i)} \cdot \hat w)$$
Because the Sigmoid function satisfies $Sigmoid'(x) = Sigmoid(x)(1-Sigmoid(x))$, this becomes:
$$\log(p_1(\hat x^{(i)}; \hat w))' = (1-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot (\hat x^{(i)} \cdot \hat w)'$$
Therefore, differentiating $\log(p_1(\hat x^{(i)}; \hat w))$ with respect to a weight $w_j$ gives:
$$\frac{\partial}{\partial w_j}\log(p_1(\hat x^{(i)}; \hat w)) = (1-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot x_j^{(i)}$$
while differentiating with respect to $b$ gives:
$$\frac{\partial}{\partial b}\log(p_1(\hat x^{(i)}; \hat w)) = (1-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot 1$$

( 1 − y ( i ) ) ⋅ l o g ( 1 − p 1 ( x ^ ( i ) ; w ^ ) ) (1-y^{(i)}) \cdot log(1-p_1(\hat x^{(i)};\hat w)) (1y(i))log(1p1(x^(i);w^))中对 w i w_i wi求导可得:
( 1 − y ( i ) ) ⋅ l o g ′ ( 1 − p 1 ( x ^ ( i ) ; w ^ ) ) = ( 1 − y ( i ) ) ⋅ l o g ′ ( 1 − S i g m o i d ( x ^ ( i ) ⋅ w ^ ) ) = ( 1 − y ( i ) ) ⋅ 1 ( 1 − S i g m o i d ( x ^ ( i ) ⋅ w ^ ) ) ⋅ ( − S i g m o i d ′ ( x ^ ( i ) ⋅ w ^ ) ) = ( 1 − y ( i ) ) ⋅ ( − S i g m o i d ( x ^ ( i ) ⋅ w ^ ) ) ⋅ ( x ^ ( i ) ⋅ w ^ ) ′ \begin{aligned} (1-y^{(i)}) \cdot log'(1-p_1(\hat x^{(i)};\hat w)) & = (1-y^{(i)}) \cdot log'(1-Sigmoid(\hat x^{(i)} \cdot \hat w)) \\ &=(1-y^{(i)}) \cdot \frac{1}{(1-Sigmoid(\hat x^{(i)} \cdot \hat w))} \cdot (-Sigmoid'(\hat x^{(i)} \cdot \hat w))\\ & = (1-y^{(i)}) \cdot (-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot (\hat x^{(i)} \cdot \hat w)' \end{aligned} (1y(i))log(1p1(x^(i);w^))=(1y(i))log(1Sigmoid(x^(i)w^))=(1y(i))(1Sigmoid(x^(i)w^))1(Sigmoid(x^(i)w^))=(1y(i))(Sigmoid(x^(i)w^))(x^(i)w^)
当对截距项w求导时:
( 1 − y ( i ) ) ⋅ l o g ′ ( 1 − p 1 ( x ^ ( i ) ; w ^ ) ) = ( 1 − y ( i ) ) ⋅ ( − S i g m o i d ( x ^ ( i ) ⋅ w ^ ) ) ⋅ x i ( i ) (1-y^{(i)}) \cdot log'(1-p_1(\hat x^{(i)};\hat w))=(1-y^{(i)}) \cdot (-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot x_i^{(i)} (1y(i))log(1p1(x^(i);w^))=(1y(i))(Sigmoid(x^(i)w^))xi(i)
当对截距项b求导时:
( 1 − y ( i ) ) ⋅ l o g ′ ( 1 − p 1 ( x ^ ( i ) ; w ^ ) ) = ( 1 − y ( i ) ) ⋅ ( − S i g m o i d ( x ^ ( i ) ⋅ w ^ ) ) ⋅ 1 (1-y^{(i)}) \cdot log'(1-p_1(\hat x^{(i)};\hat w))= (1-y^{(i)}) \cdot (-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot 1 (1y(i))log(1p1(x^(i);w^))=(1y(i))(Sigmoid(x^(i)w^))1

Therefore:
$$\begin{aligned} \frac{\partial BCE(\hat w)}{\partial w_j} &= \left(-\frac{1}{m}\sum^m_{i=1}\left[y^{(i)} \cdot \log(p_1(\hat x^{(i)};\hat w))+(1-y^{(i)}) \cdot \log(1-p_1(\hat x^{(i)};\hat w))\right]\right)' \\ &=-\frac{1}{m}\sum^m_{i=1}\left[y^{(i)} \cdot \log'(p_1(\hat x^{(i)};\hat w))+(1-y^{(i)}) \cdot \log'(1-p_1(\hat x^{(i)};\hat w))\right] \\ &=-\frac{1}{m}\sum^m_{i=1}\left[y^{(i)} \cdot (1-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot x_j^{(i)} + (1-y^{(i)}) \cdot (-Sigmoid(\hat x^{(i)} \cdot \hat w)) \cdot x_j^{(i)}\right] \\ &=-\frac{1}{m}\sum^m_{i=1}\left[ x_j^{(i)}y^{(i)}-x_j^{(i)}y^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w) - x_j^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w)+x_j^{(i)}y^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w) \right] \\ &=\frac{1}{m}\sum^m_{i=1}x_j^{(i)}\left(Sigmoid(\hat x^{(i)} \cdot \hat w)-y^{(i)}\right) \\ &=\frac{1}{m}\sum^m_{i=1}x_j^{(i)}\left(\hat y^{(i)}-y^{(i)}\right) \end{aligned}$$

Differentiating with respect to the intercept b:
$$\frac{\partial BCE(\hat w)}{\partial b} = \frac{1}{m}\sum^m_{i=1}1 \cdot (\hat y^{(i)}-y^{(i)})$$

By definition $\hat x^{(i)}=[x_1^{(i)}, x_2^{(i)}, ..., x_n^{(i)}, 1]$ with $x^{(i)}_{n+1}=1$, so the 1 is simply the $(n+1)$-th component of $\hat x^{(i)}$. Collecting the partial derivatives:
$$\begin{aligned} \nabla _{\hat w} BCE(\hat w) &= \frac{\partial BCE(\hat w)}{\partial \hat w} =\left[\begin{array}{c} \frac{\partial BCE(\hat w)}{\partial w_1} \\ \frac{\partial BCE(\hat w)}{\partial w_2} \\ \vdots \\ \frac{\partial BCE(\hat w)}{\partial w_n} \\ \frac{\partial BCE(\hat w)}{\partial b} \end{array}\right] =\left[\begin{array}{c} \frac{1}{m}\sum^m_{i=1}x_1^{(i)}(Sigmoid(\hat x^{(i)} \cdot \hat w)-y^{(i)}) \\ \frac{1}{m}\sum^m_{i=1}x_2^{(i)}(Sigmoid(\hat x^{(i)} \cdot \hat w)-y^{(i)}) \\ \vdots \\ \frac{1}{m}\sum^m_{i=1}x_n^{(i)}(Sigmoid(\hat x^{(i)} \cdot \hat w)-y^{(i)}) \\ \frac{1}{m}\sum^m_{i=1}1 \cdot (Sigmoid(\hat x^{(i)} \cdot \hat w)-y^{(i)}) \end{array}\right] \\ &= \frac{1}{m} \left[\begin{array}{c} \sum^m_{i=1}x_1^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w)-\sum^m_{i=1}x_1^{(i)}y^{(i)} \\ \sum^m_{i=1}x_2^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w)-\sum^m_{i=1}x_2^{(i)}y^{(i)} \\ \vdots \\ \sum^m_{i=1}x_n^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w)-\sum^m_{i=1}x_n^{(i)}y^{(i)} \\ \sum^m_{i=1}Sigmoid(\hat x^{(i)} \cdot \hat w)-\sum^m_{i=1}y^{(i)} \end{array}\right] \end{aligned}$$

X X X为添加最后一列全是1的特征矩阵, y y y是标签数组:
X ^ = [ x ( 1 ) , x ( 2 ) , . . . , x ( m ) ] T = [ x 1 ( 1 ) , x 2 ( 1 ) , . . . , x n ( 1 ) , 1 x 1 ( 2 ) , x 2 ( 2 ) , . . . , x n ( 2 ) , 1 . . . x 1 ( m ) , x 2 ( m ) , . . . , x n ( m ) , 1 ] m ∗ ( n + 1 ) \begin{aligned} \hat X &= [x^{(1)}, x^{(2)}, ..., x^{(m)}]^T\\ & =\left [\begin{array}{cccc} x^{(1)}_1, x^{(1)}_2, ... ,x^{(1)}_n, 1 \\ x^{(2)}_1, x^{(2)}_2, ... ,x^{(2)}_n, 1 \\ . \\ . \\ . \\ x^{(m)}_1, x^{(m)}_2, ... ,x^{(m)}_n, 1 \\ \end{array}\right]_{m*(n+1)} \\ \end{aligned} X^=[x(1),x(2),...,x(m)]T=x1(1),x2(1),...,xn(1),1x1(2),x2(2),...,xn(2),1...x1(m),x2(m),...,xn(m),1m(n+1)
y = [ y ( 1 ) , y ( 2 ) , . . . , y ( m ) ] 1 ∗ m T y = [y^{(1)}, y^{(2)}, ..., y^{(m)}]^T_{1*m} y=[y(1),y(2),...,y(m)]1mT

The first part of the gradient expression then simplifies to:
$$\begin{aligned} \left[\begin{array}{c} \sum^m_{i=1}x_1^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w) \\ \sum^m_{i=1}x_2^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w) \\ \vdots \\ \sum^m_{i=1}x_n^{(i)}Sigmoid(\hat x^{(i)} \cdot \hat w) \\ \sum^m_{i=1}Sigmoid(\hat x^{(i)} \cdot \hat w) \end{array}\right] &=\left[\begin{array}{cccc} x^{(1)}_1 & x^{(2)}_1 & \cdots & x^{(m)}_1 \\ x^{(1)}_2 & x^{(2)}_2 & \cdots & x^{(m)}_2 \\ \vdots & & & \vdots \\ x^{(1)}_n & x^{(2)}_n & \cdots & x^{(m)}_n \\ 1 & 1 & \cdots & 1 \end{array}\right] \cdot \left[\begin{array}{c} Sigmoid(\hat x^{(1)} \cdot \hat w) \\ Sigmoid(\hat x^{(2)} \cdot \hat w) \\ \vdots \\ Sigmoid(\hat x^{(m)} \cdot \hat w) \end{array}\right] \\ &= \hat X^T Sigmoid(\hat X \cdot \hat w) \end{aligned}$$

Similarly, the second part simplifies to:
$$\left[\begin{array}{c} \sum^m_{i=1}x_1^{(i)}y^{(i)} \\ \sum^m_{i=1}x_2^{(i)}y^{(i)} \\ \vdots \\ \sum^m_{i=1}x_n^{(i)}y^{(i)} \\ \sum^m_{i=1}y^{(i)} \end{array}\right] = \hat X^T \cdot \left[\begin{array}{c} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{array}\right] =\hat X^T \cdot y$$

Therefore, the gradient of the logistic regression loss is:

$$\nabla _{\hat w} BCE(\hat w) =\frac{1}{m}\hat X^T\left(Sigmoid(\hat X \cdot \hat w) - y\right)$$
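To make the formula concrete, here is a minimal sketch (added for illustration, not part of the original notes) of the vectorized gradient and a plain gradient-descent loop; `X_hat`, `w_hat`, and the learning rate `lr` are illustrative names, and the toy data are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad(w_hat, X_hat, y):
    """Gradient of BCE: (1/m) * X_hat^T (Sigmoid(X_hat @ w_hat) - y)."""
    m = X_hat.shape[0]
    return X_hat.T @ (sigmoid(X_hat @ w_hat) - y) / m

# Toy data: one feature plus a trailing column of ones for the intercept.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
X_hat = np.hstack([X, np.ones((X.shape[0], 1))])

w_hat = np.zeros(X_hat.shape[1])
lr = 0.1
for _ in range(5000):               # plain full-batch gradient descent
    w_hat -= lr * bce_grad(w_hat, X_hat, y)

print(w_hat)                        # learned [w, b]
print(sigmoid(X_hat @ w_hat))       # predicted p(y=1 | x), approaching [0, 0, 1, 1]
```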

III. Logistic Regression Outputs and Model Interpretability

  Overall, after the Sigmoid transformation, logistic regression compresses the output of the linear equation into the interval (0, 1). Using that output for regression-style prediction of continuous values is clearly not appropriate; in practice, logistic regression is mainly used for binary classification.

  For the continuous output in (0, 1), we only need to choose a threshold to turn it into a binary class decision. The threshold is usually 0.5: outputs at or above 0.5 are predicted as class 1, and outputs below 0.5 as class 0.
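For example (an added sketch; `p1` stands for the model's predicted probabilities):

```python
import numpy as np

p1 = np.array([0.12, 0.48, 0.50, 0.93])  # predicted p(y=1 | x) for four samples
y_pred = (p1 >= 0.5).astype(int)         # apply the 0.5 threshold
print(y_pred)                            # [0 0 1 1]
```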

For example, suppose the fitted logistic regression equation is:
$$\ln\frac{y}{1-y} = x_1+2x_2-1$$
This can be read as: the coefficient of $x_2$ is twice that of $x_1$, so a one-unit increase in $x_2$ raises the log-odds of the sample belonging to class 1 by 2, twice the effect of a one-unit increase in $x_1$; equivalently, it multiplies the odds by $e^2$, as the worked line below shows.
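As a short check (added here), increasing $x_2$ by one unit while holding $x_1$ fixed changes the odds by a constant factor:
$$\frac{odd(x_1, x_2+1)}{odd(x_1, x_2)} = \frac{e^{x_1+2(x_2+1)-1}}{e^{x_1+2x_2-1}} = e^{2} \approx 7.39$$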

  • Is the logistic regression output (y) a probability?
      Whether y is a genuine probability depends not on the model itself but on the modeling workflow.
      The logistic model has a corresponding probability distribution, so the input variables can indeed be treated as random variables, provided they satisfy certain distributional requirements. If the logistic regression is built following the standard statistical modeling workflow, i.e. the distribution of the independent variables (or of a suitable transformation of them) satisfies the requirements and passes the relevant tests, then the model output is a probability in the strict sense. If instead the model is built following a machine learning workflow, without hypothesis tests on the independent variables, then since their distributions may not meet the conditions, the output is not necessarily a probability in the strict sense.