1. Jensen's Inequality
Definition (Jensen's inequality): if $f(x)$ is a convex function, then $E[f(x)] \geq f(E[x])$.
Equivalent condition for convexity: for all $a, b$ in the domain of $f(x)$ and all $\lambda \in [0, 1]$,
$$\lambda f(a) + (1 - \lambda) f(b) \geq f(\lambda a + (1 - \lambda) b).$$
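As a quick numerical sanity check of this two-point condition, here is a minimal sketch (the choice $f(x) = x^2$ and the random sampling are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2  # a convex function (illustrative choice)

for _ in range(1000):
    a, b = rng.normal(size=2)              # two points in the domain
    lam = rng.uniform()                    # lambda in [0, 1]
    chord = lam * f(a) + (1 - lam) * f(b)  # left-hand side
    graph = f(lam * a + (1 - lam) * b)     # right-hand side
    assert chord >= graph - 1e-12          # the chord lies above the graph
print("two-point convexity condition held in all trials")
```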
Corollary: if $f(x)$ is convex, then $-f(x)$ is concave. Hence $f(x)$ is concave if and only if, for all $a, b$ in its domain and all $\lambda \in [0, 1]$,
$$\lambda f(a) + (1 - \lambda) f(b) \leq f(\lambda a + (1 - \lambda) b).$$
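The expectation form $E[f(x)] \geq f(E[x])$ can be checked the same way on a discrete distribution; a sketch with an arbitrary convex $f$ and made-up probabilities:

```python
import numpy as np

f = lambda x: x ** 2                    # convex (illustrative choice)
x = np.array([-2.0, 0.5, 1.0, 3.0])    # support points
p = np.array([0.1, 0.4, 0.3, 0.2])     # probabilities summing to 1

E_fx = np.sum(p * f(x))                 # E[f(x)]
f_Ex = f(np.sum(p * x))                 # f(E[x])
assert E_fx >= f_Ex
print(E_fx, f_Ex)
```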
2. Proof of Jensen's Inequality
To prove $E[f(x)] \geq f(E[x])$ for a discrete random variable taking value $x_i$ with probability $p_i$, it suffices to show
$$\sum_{i=1}^{k} p_i f(x_i) \geq f\left(\sum_{i=1}^{k} p_i x_i\right).$$
We argue by induction. For $k = 1, 2$ the inequality holds (the $k = 2$ case is exactly the two-point convexity condition). Assume it holds for $k$ terms; then for $k + 1$ terms:
$$\sum_{i=1}^{k+1} p_i f(x_i) = p_{k+1} f(x_{k+1}) + \sum_{i=1}^{k} p_i f(x_i)$$
$$= p_{k+1} f(x_{k+1}) + z_k \sum_{i=1}^{k} \frac{p_i}{z_k} f(x_i), \qquad z_k = \sum_{i=1}^{k} p_i$$
$$\geq p_{k+1} f(x_{k+1}) + z_k f\left(\sum_{i=1}^{k} \frac{p_i}{z_k} x_i\right), \qquad \sum_{i=1}^{k} \frac{p_i}{z_k} = 1$$
(by the induction hypothesis, since the weights $p_i / z_k$ sum to 1)
$$\geq f\left(p_{k+1} x_{k+1} + z_k \sum_{i=1}^{k} \frac{p_i}{z_k} x_i\right), \qquad z_k + p_{k+1} = 1$$
(by the two-point convexity condition, since $z_k + p_{k+1} = 1$)
$$= f\left(\sum_{i=1}^{k+1} p_i x_i\right),$$
where the last step is an identity: $p_{k+1} x_{k+1} + z_k \sum_{i=1}^{k} \frac{p_i}{z_k} x_i = \sum_{i=1}^{k+1} p_i x_i$. This completes the induction.
Let $D$ be the domain of $f(x)$. The same argument shows that for any function $g$ with $g(x) \in D$,
$$\sum_{i=1}^{k} p_i f(g(x_i)) \geq f\left(\sum_{i=1}^{k} p_i g(x_i)\right).$$
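This composed form can also be spot-checked numerically; a sketch using the convex $f(x) = -\log(x)$ and arbitrary positive values for $g(x_i)$ (both illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: -np.log(x)               # convex on (0, inf)

p = rng.dirichlet(np.ones(5))          # random probability vector
g_x = rng.uniform(0.1, 5.0, size=5)    # g(x_i): arbitrary positive values

lhs = np.sum(p * f(g_x))               # sum_i p_i f(g(x_i))
rhs = f(np.sum(p * g_x))               # f(sum_i p_i g(x_i))
assert lhs >= rhs - 1e-12
```

This is exactly the form used below to prove that the KL divergence is non-negative.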
3. EM Estimation and the Role of Jensen's Inequality in It
4. KL Divergence and the Role of Jensen's Inequality in It
KL divergence measures how different two distributions $p(x)$ and $q(x)$ are:
$$KL(p, q) = E_{p(x)}\left[\log\left(\frac{p(x)}{q(x)}\right)\right] = \sum_{i=1}^{n} p(x_i) \log\left(\frac{p(x_i)}{q(x_i)}\right).$$
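For concreteness, a minimal sketch that evaluates this sum for two hand-picked discrete distributions (natural log, matching the formula above):

```python
import numpy as np

p = np.array([0.4, 0.4, 0.2])
q = np.array([0.3, 0.3, 0.4])

kl_pq = np.sum(p * np.log(p / q))   # KL(p, q) = sum_i p(x_i) log(p(x_i)/q(x_i))
kl_qp = np.sum(q * np.log(q / p))   # KL is not symmetric: KL(p, q) != KL(q, p)
print(kl_pq, kl_qp)
```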
Proof, via Jensen's inequality, that the KL divergence is always non-negative:
$$KL(p, q) = \sum_{i=1}^{n} p(x_i) \log\left(\frac{p(x_i)}{q(x_i)}\right) = -\sum_{i=1}^{n} p(x_i) \log\left(\frac{q(x_i)}{p(x_i)}\right),$$
Take $f(x) = -\log(x)$, which is convex, and $g(x) = \frac{q(x)}{p(x)}$. By the composed form of Jensen's inequality from Section 2,
$$KL(p, q) = \sum_{i=1}^{n} p(x_i) f(g(x_i)) \geq f\left(\sum_{i=1}^{n} p(x_i) \frac{q(x_i)}{p(x_i)}\right) = -\log\left(\sum_{i=1}^{n} q(x_i)\right) = -\log 1 = 0.$$
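An empirical check of this result over random distribution pairs (a sketch; Dirichlet sampling is just one convenient way to draw probability vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    kl = np.sum(p * np.log(p / q))
    assert kl >= -1e-12             # KL(p, q) >= 0, equality iff p == q
print("KL(p, q) was non-negative in all trials")
```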
5. KL Divergence and the Cross-Entropy Loss Function
5.1 Information Entropy
Shannon information: if a signal $x$ occurs with probability $p(x)$, then $x$ carries $\log\left(\frac{1}{p(x)}\right)$ units of information, also interpreted as the code length needed to encode $x$.
The expectation of this information, called the information entropy or average code length, is:
$$E\left[\log\left(\frac{1}{p(x)}\right)\right] = \sum_{i=1}^{n} p(x_i) \log\left(\frac{1}{p(x_i)}\right).$$
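For example, with base-2 logs the entropy reads in bits; a sketch for a hypothetical 4-symbol source:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # symbol probabilities
H = np.sum(p * np.log2(1 / p))            # entropy = average code length
print(H)  # 1.75 bits: an optimal prefix code uses lengths 1, 2, 3, 3
```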
5.2 Relative Entropy (KL Divergence)
Relative entropy, also called KL divergence, is, in information-theoretic terms, the extra average code length incurred by encoding information $x$ whose true distribution is $p(x)$ with a code built for the distribution $q(x)$, rather than with a code built for $p(x)$ itself. In statistical terms it measures how much two distributions differ. It is written as:
$$E_{p(x)}\left[\log\left(\frac{p(x)}{q(x)}\right)\right] = \sum_{i=1}^{n} p(x_i) \log\left(\frac{p(x_i)}{q(x_i)}\right) = \sum_{i=1}^{n} p(x_i) \log\left(\frac{1}{q(x_i)}\right) - \sum_{i=1}^{n} p(x_i) \log\left(\frac{1}{p(x_i)}\right)$$
5.3 Cross Entropy
Cross entropy is the information entropy / average code length obtained when a code built for the distribution $q(x)$ is used to encode information whose true distribution is $p(x)$:
$$E_{p(x)}\left[\log\left(\frac{1}{q(x)}\right)\right] = \sum_{i=1}^{n} p(x_i) \log\left(\frac{1}{q(x_i)}\right).$$
From Section 5.2, cross entropy = information entropy + relative entropy. When the distribution of the information $x$ is fixed, the information entropy $E_{p(x)}\left[\log\left(\frac{1}{p(x)}\right)\right]$ is a constant, so minimizing the cross entropy is equivalent to minimizing the relative entropy.
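The decomposition is easy to verify numerically; a sketch with hand-picked $p$ and $q$:

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])                   # true distribution
q = np.array([0.5, 0.3, 0.2])                   # coding distribution

entropy = np.sum(p * np.log(1 / p))             # H(p)
cross_entropy = np.sum(p * np.log(1 / q))       # H(p, q)
kl = np.sum(p * np.log(p / q))                  # KL(p, q)
assert np.isclose(cross_entropy, entropy + kl)  # H(p, q) = H(p) + KL(p, q)
```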
5.4 Image Classification and the Cross-Entropy Loss Function
In an image classification task, let $x$ be a training image, $p(x)$ the true distribution over its classes, and $q(x)$ the model's predicted class distribution for input $x$. The learning objective is to make the two distributions as close as possible, i.e., to minimize the relative entropy. Since each training sample's class is fixed, the information entropy of the training samples is constant, so minimizing the relative entropy is the same as minimizing the cross entropy; hence the cross entropy is used as the loss function.
The cross-entropy loss function is written as:
$$loss = \sum_{k=1}^{N} \sum_{i=1}^{n} p(x_{ki}) \log\left(\frac{1}{q(x_{ki})}\right),$$
where $q(x_{ki})$ is the model's predicted probability that the $k$-th sample belongs to the $i$-th class.
Suppose the $k$-th training sample belongs to class $c$; then $p(x_{ki}) = 0$ for all $i \neq c$ and $p(x_{kc}) = 1$, so the loss function simplifies to:
$$loss = -\sum_{k=1}^{N} \log(q(x_{kc})).$$
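Putting it together, a sketch with made-up predictions for $N = 3$ samples over $n = 4$ classes shows the full double sum collapsing to the simplified form:

```python
import numpy as np

# model predictions q(x_ki): one row per sample, one column per class
q = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.7]])
labels = np.array([0, 1, 3])          # true class c of each sample
p = np.eye(4)[labels]                 # one-hot p(x_ki)

full = np.sum(p * np.log(1 / q))      # sum over k and i of p(x_ki) log(1/q(x_ki))
simplified = -np.sum(np.log(q[np.arange(3), labels]))
assert np.isclose(full, simplified)
print(full)
```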