Logistic regression and the maximum entropy model are essentially the same thing: maximum entropy applied to a binary classification problem is exactly logistic regression, and applied to a multi-class problem it is multinomial logistic regression.
Logistic Regression
The essence of logistic regression: assume the data follow this distribution, then estimate the parameters by maximum likelihood estimation.
The Sigmoid Function
The usual steps for a regression problem are:
- find the hypothesis function h;
- construct the cost function J;
- minimize J and solve for the regression parameters (θ).
A Derivation That Is Clear at a Glance
Compared with many other write-ups, this one is easy to follow: every step of the derivation is spelled out.
- The linear regression model is:
$$
y=\theta_0+\theta_1x_1+\theta_2x_2+\cdots+\theta_nx_n \\
h_\theta(x)=\theta^Tx
$$
- The Sigmoid function:
$$
g(z)=\frac{1}{1+e^{-z}}
$$
- Substituting the linear regression model into $g(z)$ gives the final logistic regression model:
$$
h_\theta(x)=g(\theta^Tx)=\frac{1}{1+e^{-\theta^Tx}}
$$
- Take this expression to be the probability of class 1; the probability of class 0 is then one minus the probability of class 1:
$$
\left\{\begin{array}{l}
P(c=1 \mid x ; \theta)=h_{\theta}(x) \\
P(c=0 \mid x ; \theta)=1-h_{\theta}(x)
\end{array}\right.
$$
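Numerically, these two probabilities are just the sigmoid output and its complement. A minimal sketch (assuming NumPy; the parameter and feature values below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([0.5, -1.0, 2.0])  # illustrative parameters
x = np.array([1.0, 0.3, 0.7])       # x_0 = 1 absorbs the intercept
p1 = sigmoid(theta @ x)             # P(c=1 | x; theta)
p0 = 1.0 - p1                       # P(c=0 | x; theta)
```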
Combining the two cases into a single formula:
$$
P(c=y \mid x;\theta)=P(y \mid x;\theta)=\left(h_\theta(x)\right)^y\left(1-h_\theta(x)\right)^{1-y}
$$
(Setting $y=1$ recovers $h_\theta(x)$, and $y=0$ recovers $1-h_\theta(x)$.)
- The likelihood function is then:
$$
L(\theta)=\prod_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)\right)^{y^{(i)}}\left(1-h_{\theta}\left(x^{(i)}\right)\right)^{1-y^{(i)}}
$$
- Taking the logarithm:
$$
\log L(\theta)=\sum_{i=1}^{m}\left[y^{(i)} \log h_{\theta}\left(x^{(i)}\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right)\right]
$$
- To maximize this expression, introduce a factor of $-\frac{1}{m}$ and convert the problem into minimizing the expression below; this is the log loss function of logistic regression:
$$
J(\theta)=-\frac{1}{m} \sum_{i=1}^{m}\left[y^{(i)} \log h_{\theta}\left(x^{(i)}\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right)\right]
$$
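As a quick sketch, this loss is a one-liner in NumPy (toy data, assumed for illustration; the first column of `X` is the constant term):

```python
import numpy as np

def log_loss(theta, X, y):
    # J(theta) = -(1/m) * sum[ y*log(h) + (1-y)*log(1-h) ]
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

X = np.array([[1.0, 0.2], [1.0, -1.5], [1.0, 0.9]])
y = np.array([1.0, 0.0, 1.0])
print(log_loss(np.zeros(2), X, y))  # log(2) ≈ 0.6931 at theta = 0
```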
- Take the partial derivative of the loss function:
$$
\begin{aligned}
\frac{\partial J(\theta)}{\partial \theta_{j}}&=-\frac{1}{m} \sum_{i=1}^{m}\left(y^{(i)} \frac{1}{h_{\theta}\left(x^{(i)}\right)} \frac{\partial h_{\theta}\left(x^{(i)}\right)}{\partial \theta_{j}}-\left(1-y^{(i)}\right) \frac{1}{1-h_{\theta}\left(x^{(i)}\right)} \frac{\partial h_{\theta}\left(x^{(i)}\right)}{\partial \theta_{j}}\right)\\
&=-\frac{1}{m} \sum_{i=1}^{m}\left(y^{(i)} \frac{1}{g\left(\theta^{T} x^{(i)}\right)}-\left(1-y^{(i)}\right) \frac{1}{1-g\left(\theta^{T} x^{(i)}\right)}\right) \cdot \frac{\partial g\left(\theta^{T} x^{(i)}\right)}{\partial \theta_{j}}\\
&=-\frac{1}{m} \sum_{i=1}^{m}\left(y^{(i)} \frac{1}{g\left(\theta^{T} x^{(i)}\right)}-\left(1-y^{(i)}\right) \frac{1}{1-g\left(\theta^{T} x^{(i)}\right)}\right) \cdot g\left(\theta^{T} x^{(i)}\right)\left(1-g\left(\theta^{T} x^{(i)}\right)\right) x_{j}^{(i)}\\
&=-\frac{1}{m} \sum_{i=1}^{m}\left(y^{(i)}\left(1-g\left(\theta^{T} x^{(i)}\right)\right)-\left(1-y^{(i)}\right) g\left(\theta^{T} x^{(i)}\right)\right) \cdot x_{j}^{(i)}\\
&=-\frac{1}{m} \sum_{i=1}^{m}\left(y^{(i)}-g\left(\theta^{T} x^{(i)}\right)\right) \cdot x_{j}^{(i)}\\
&=\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right)\cdot x_{j}^{(i)}
\end{aligned}
$$
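One way to sanity-check the final line is to compare it against a finite-difference approximation of $J(\theta)$ (a quick sketch with made-up random data; `grad` implements the closed form just derived):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(theta, X, y):
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def grad(theta, X, y):
    # closed form: (1/m) * X^T (h - y)
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
y = (rng.random(20) < 0.5).astype(float)
theta = rng.normal(size=3)

eps = 1e-6
numeric = np.array([(loss(theta + eps * e, X, y) -
                     loss(theta - eps * e, X, y)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(numeric, grad(theta, X, y)))  # True
```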
The symbolic steps above substitute in the model definition:
$$
h_\theta(x)=g(\theta^Tx)=\frac{1}{1+e^{-\theta^Tx}}
$$
together with the derivative of the sigmoid function:
$$
g(x)=\frac{1}{1+e^{-x}} \\
g^{\prime}(x)=g(x)(1-g(x))
$$
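The derivative identity itself is a direct application of the chain rule:

$$
g^{\prime}(x)=\frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}
=\frac{1}{1+e^{-x}}\cdot\frac{e^{-x}}{1+e^{-x}}
=g(x)\left(1-g(x)\right)
$$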
- Finally, update θ according to the gradient of the loss function:
$$
\theta_{j}=\theta_{j}-\alpha \cdot \frac{\partial J(\theta)}{\partial \theta_{j}},\quad (j=0,1,\cdots,n)
$$
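In vectorized form, the whole update is a single line. A minimal NumPy sketch (my notation, not from the original: `X` is the m×(n+1) design matrix with a leading bias column, `y` the 0/1 label vector):

```python
import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    # theta_j <- theta_j - alpha * dJ/dtheta_j, for every j at once:
    # grad J = (1/m) * X^T (h - y), with h = sigmoid(X theta)
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    return theta - alpha * X.T @ (h - y) / len(y)
```

The gradient-ascent listing in the next section implements the same update with the sign flipped onto $(y - h)$.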
Code
```python
from numpy import *

filename = '...\\testSet.txt'  # path to the data file

def loadDataSet():
    # Read the data (only two features here).
    dataMat = []
    labelMat = []
    fr = open(filename)
    for line in fr.readlines():
        lineArr = line.strip().split()
        # The leading 1.0 is the constant term: with two features X1, X2
        # we fit three parameters, W0 + W1*X1 + W2*X2.
        dataMat.append([1.0, float(lineArr[0]), float(lineArr[1])])
        labelMat.append(int(lineArr[2]))
    return dataMat, labelMat

def sigmoid(inX):
    # The sigmoid function g(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1 + exp(-inX))

def gradAscent(dataMat, labelMat):
    # Batch gradient ascent for the optimal parameters.
    dataMatrix = mat(dataMat)                # convert the data to a matrix
    classLabels = mat(labelMat).transpose()  # labels as a column vector
    m, n = shape(dataMatrix)
    alpha = 0.001    # learning rate; larger values take bigger gradient steps
    maxCycles = 500  # number of iterations; tune to the data, 200 may suffice
    weights = ones((n, 1))  # initial parameters: an n x 1 matrix of ones
    for k in range(maxCycles):
        h = sigmoid(dataMatrix * weights)
        error = classLabels - h  # (y - h) from the gradient derivation
        weights = weights + alpha * dataMatrix.transpose() * error  # update
    return weights

# Stochastic gradient ascent: when the data set is large, using all rows in
# every iteration is expensive, so update the weights one row at a time.
def stocGradAscent0(dataMat, labelMat):
    dataMatrix = mat(dataMat)
    classLabels = labelMat
    m, n = shape(dataMatrix)
    alpha = 0.01
    maxCycles = 500
    weights = ones((n, 1))
    for k in range(maxCycles):
        for i in range(m):  # sweep over every row
            h = sigmoid(sum(dataMatrix[i] * weights))
            error = classLabels[i] - h
            weights = weights + alpha * error * dataMatrix[i].transpose()
    return weights

# Improved stochastic gradient ascent: in each pass, sample rows at random
# without replacement, and shrink the step size as the iterations progress.
def stocGradAscent1(dataMat, labelMat):
    dataMatrix = mat(dataMat)
    classLabels = labelMat
    m, n = shape(dataMatrix)
    weights = ones((n, 1))
    maxCycles = 500
    for j in range(maxCycles):
        dataIndex = [i for i in range(m)]
        for i in range(m):
            alpha = 4 / (1 + j + i) + 0.0001  # step size shrinks over time
            randIndex = int(random.uniform(0, len(dataIndex)))  # random pick
            idx = dataIndex[randIndex]  # map to a not-yet-used row
            h = sigmoid(sum(dataMatrix[idx] * weights))
            error = classLabels[idx] - h
            weights = weights + alpha * error * dataMatrix[idx].transpose()
            del(dataIndex[randIndex])  # drop the sampled index
    return weights

def plotBestFit(weights):
    # Plot the data and the fitted decision boundary.
    import matplotlib.pyplot as plt
    dataMat, labelMat = loadDataSet()
    dataArr = array(dataMat)
    n = shape(dataArr)[0]
    xcord1 = []; ycord1 = []
    xcord2 = []; ycord2 = []
    for i in range(n):
        if int(labelMat[i]) == 1:
            xcord1.append(dataArr[i, 1]); ycord1.append(dataArr[i, 2])
        else:
            xcord2.append(dataArr[i, 1]); ycord2.append(dataArr[i, 2])
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.scatter(xcord1, ycord1, s=30, c='red', marker='s')
    ax.scatter(xcord2, ycord2, s=30, c='green')
    x = arange(-3.0, 3.0, 0.1)
    y = (-weights[0] - weights[1] * x) / weights[2]  # w0 + w1*x + w2*y = 0
    ax.plot(x, y)
    plt.xlabel('X1')
    plt.ylabel('X2')
    plt.show()

def main():
    dataMat, labelMat = loadDataSet()
    weights = gradAscent(dataMat, labelMat).getA()  # matrix -> ndarray
    plotBestFit(weights)

if __name__ == '__main__':
    main()
```
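For completeness, a small prediction helper, not part of the original listing and added here as a hypothetical extension, that turns the learned weights into 0/1 class labels (both arguments as flat 1-D arrays, e.g. `classifyVector(array([1.0, x1, x2]), weights.ravel())`):

```python
def classifyVector(inX, weights):
    # Hypothetical helper: predict class 1 when sigmoid(w^T x) >= 0.5,
    # which is equivalent to w^T x >= 0.
    prob = sigmoid(sum(inX * weights))
    return 1 if prob >= 0.5 else 0
```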
Gradient Ascent
Q: Why do some books describe this as gradient descent (Gradient Descent) instead?
A: In this setting the two methods are essentially the same; the key is the cost function (also called the objective function). If the objective is a loss function, we minimize it, so we use gradient descent. If the objective is the likelihood function, we maximize it, so we use gradient ascent. In logistic regression, the loss function and the log-likelihood simply differ by a sign.
The only change needed is to turn the addition in the update rule into a subtraction, as shown below.
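Up to the constant $\frac{1}{m}$, which can be absorbed into the learning rate $\alpha$, the two updates take the same step:

$$
\theta_j \leftarrow \theta_j + \alpha \,\frac{\partial \log L(\theta)}{\partial \theta_j}
\quad\Longleftrightarrow\quad
\theta_j \leftarrow \theta_j - \alpha \,\frac{\partial J(\theta)}{\partial \theta_j},
\qquad J(\theta)=-\frac{1}{m}\log L(\theta)
$$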
Dataset: testSet.txt
-0.017612 14.053064 0
-1.395634 4.662541 1
-0.752157 6.538620 0
-1.322371 7.152853 0
0.423363 11.054677 0
0.406704 7.067335 1
0.667394 12.741452 0
-2.460150 6.866805 1
0.569411 9.548755 0
-0.026632 10.427743 0
0.850433 6.920334 1
1.347183 13.175500 0
1.176813 3.167020 1
-1.781871 9.097953 0
-0.566606 5.749003 1
0.931635 1.589505 1
-0.024205 6.151823 1
-0.036453 2.690988 1
-0.196949 0.444165 1
1.014459 5.754399 1
1.985298 3.230619 1
-1.693453 -0.557540 1
-0.576525 11.778922 0
-0.346811 -1.678730 1
-2.124484 2.672471 1
1.217916 9.597015 0
-0.733928 9.098687 0
-3.642001 -1.618087 1
0.315985 3.523953 1
1.416614 9.619232 0
-0.386323 3.989286 1
0.556921 8.294984 1
1.224863 11.587360 0
-1.347803 -2.406051 1
1.196604 4.951851 1
0.275221 9.543647 0
0.470575 9.332488 0
-1.889567 9.542662 0
-1.527893 12.150579 0
-1.185247 11.309318 0
-0.445678 3.297303 1
1.042222 6.105155 1
-0.618787 10.320986 0
1.152083 0.548467 1
0.828534 2.676045 1
-1.237728 10.549033 0
-0.683565 -2.166125 1
0.229456 5.921938 1
-0.959885 11.555336 0
0.492911 10.993324 0
0.184992 8.721488 0
-0.355715 10.325976 0
-0.397822 8.058397 0
0.824839 13.730343 0
1.507278 5.027866 1
0.099671 6.835839 1
-0.344008 10.717485 0
1.785928 7.718645 1
-0.918801 11.560217 0
-0.364009 4.747300 1
-0.841722 4.119083 1
0.490426 1.960539 1
-0.007194 9.075792 0
0.356107 12.447863 0
0.342578 12.281162 0
-0.810823 -1.466018 1
2.530777 6.476801 1
1.296683 11.607559 0
0.475487 12.040035 0
-0.783277 11.009725 0
0.074798 11.023650 0
-1.337472 0.468339 1
-0.102781 13.763651 0
-0.147324 2.874846 1
0.518389 9.887035 0
1.015399 7.571882 0
-1.658086 -0.027255 1
1.319944 2.171228 1
2.056216 5.019981 1
-0.851633 4.375691 1
-1.510047 6.061992 0
-1.076637 -3.181888 1
1.821096 10.283990 0
3.010150 8.401766 1
-1.099458 1.688274 1
-0.834872 -1.733869 1
-0.846637 3.849075 1
1.400102 12.628781 0
1.752842 5.468166 1
0.078557 0.059736 1
0.089392 -0.715300 1
1.825662 12.693808 0
0.197445 9.744638 0
0.126117 0.922311 1
-0.679797 1.220530 1
0.677983 2.556666 1
0.761349 10.693862 0
-2.168791 0.143632 1
1.388610 9.341997 0
0.317029 14.739025 0