OMLSA & IMCRA Study Notes
Preface: MMSE-LSA
Together, these two seem to be the culmination of the classic statistical speech-enhancement algorithms. First, recall the classic MMSE-LSA formula:
$$
\hat{A}_k=\frac{\xi_k}{1+\xi_k}\exp\Big\{\frac{1}{2}\int_{v_k}^{\infty}\frac{e^{-t}}{t}\,{\rm d}t\Big\}R_k
$$
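As a quick sanity check, the gain above can be evaluated numerically. This is a minimal Python sketch; the trapezoidal approximation of the exponential integral $E_1(v)=\int_v^\infty e^{-t}/t\,{\rm d}t$ and the truncation/step counts are my own choices, not from the paper:

```python
import math

def expint_e1(v, upper=40.0, steps=4000):
    # E1(v) = ∫_v^∞ e^{-t}/t dt; the integrand is negligible beyond v+40,
    # so integrate [v, v+upper] with the composite trapezoidal rule.
    a, b = v, v + upper
    h = (b - a) / steps
    total = 0.5 * (math.exp(-a) / a + math.exp(-b) / b)
    for i in range(1, steps):
        t = a + i * h
        total += math.exp(-t) / t
    return total * h

def lsa_gain(xi, gamma):
    # G = xi/(1+xi) * exp(0.5 * E1(v)), with v = gamma*xi/(1+xi);
    # multiplying by R_k (the noisy magnitude) gives the amplitude estimate.
    v = gamma * xi / (1.0 + xi)
    return xi / (1.0 + xi) * math.exp(0.5 * expint_e1(v))
```

For production code one would use `scipy.special.exp1` instead of the hand-rolled quadrature.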
OMLSA
The so-called Optimal Modified part adds speech presence uncertainty analysis. Stating the result before the derivation, the OMLSA formula is:
$$
G(k,l)=\{G_{H_1}(k,l)\}^{p(k,l)}\times G_{min}^{1-p(k,l)}
$$
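The geometric weighting is a one-liner; here is a sketch, where the default $-25\,\mathrm{dB}$ floor for $G_{min}$ is just a commonly used value, not something this note specifies:

```python
def omlsa_gain(g_h1, p, g_min=10 ** (-25 / 20)):
    # G(k,l) = G_H1^p * G_min^(1-p): a log-domain interpolation between
    # the conditional speech gain and the spectral floor, weighted by the
    # speech presence probability p(k,l).
    return (g_h1 ** p) * (g_min ** (1.0 - p))
```

At `p = 1` the gain reduces to `g_h1`, at `p = 0` to the floor `g_min`, and in between it moves monotonically between the two.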
OMLSA Derivation
In the formula above,
$$
G_{H_1}(k,l)=\frac{\xi(k,l)}{1+\xi(k,l)}\exp\Big\{\frac{1}{2}\int_{v(k,l)}^{\infty}\frac{e^{-t}}{t}\,{\rm d}t\Big\}
$$
Clearly this gain is considerably more involved than plain LSA, and the speech presence probability $p(k,l)$ now enters as an exponent. This result stems from Cohen's two hypotheses, $H_0(k,l)$ for speech absence and $H_1(k,l)$ for speech presence. Assuming further that each STFT bin is complex Gaussian distributed, the conditional probability densities of the observed signal are defined as:
$$
\begin{aligned}
p(Y(k,l)|H_0(k,l))&=\frac{1}{\pi\lambda_d(k,l)}\exp\Big\{-\frac{|Y(k,l)|^2}{\lambda_d(k,l)}\Big\}\\
p(Y(k,l)|H_1(k,l))&=\frac{1}{\pi(\lambda_x(k,l)+\lambda_d(k,l))}\exp\Big\{-\frac{|Y(k,l)|^2}{\lambda_x(k,l)+\lambda_d(k,l)}\Big\}
\end{aligned}
$$
Here $k$ indexes the frequency bin and $l$ the frame, with $\lambda_x(k,l)=\mathbf{E}[|X(k,l)|^2\,|\,H_1(k,l)]$ and $\lambda_d(k,l)=\mathbf{E}[|D(k,l)|^2]$. On this basis, using Bayes' rule, "the conditional speech presence probability serves as the speech presence probability of each frame" — an important statement: the per-frame speech presence probability is well-founded, not guessed. It is defined as:
$$
p(k,l)\triangleq P(H_1(k,l)|Y(k,l))
$$
Under the statistical-independence assumption, Cohen defines the LSA estimator in the form:
$$
\hat{A}(k,l)=\exp\big\{\mathbf{E}[\log A(k,l)|Y(k,l)]\big\}\triangleq G(k,l)|Y(k,l)|
$$
When speech is absent, the gain is clamped to a floor $G_{min}$, giving
$$
\exp\big\{\mathbf{E}[\log A(k,l)|Y(k,l),H_0(k,l)]\big\}=G_{min}|Y(k,l)|
$$
while when speech is present, the LSA gain formula gives
$$
\exp\big\{\mathbf{E}[\log A(k,l)|Y(k,l),H_1(k,l)]\big\}=G_{H_1}|Y(k,l)|
$$
Using the Bayes probability formula (total expectation over the two hypotheses):
$$
\begin{aligned}
\mathbf{E}[\log A(k,l)|Y(k,l)]&=\mathbf{E}[\log A(k,l)|Y(k,l),H_1(k,l)]p(k,l)\\
&\quad+\mathbf{E}[\log A(k,l)|Y(k,l),H_0(k,l)](1-p(k,l))\\
&=\log(G_{H_1}|Y(k,l)|)p(k,l)+\log(G_{min}|Y(k,l)|)(1-p(k,l))\\
&=\log\big((G_{H_1}|Y(k,l)|)^{p(k,l)}\big)+\log\big((G_{min}|Y(k,l)|)^{1-p(k,l)}\big)
\end{aligned}
$$
The final amplitude-spectrum estimate is then:
$$
\begin{aligned}
\hat{A}(k,l)&=\exp\big\{\mathbf{E}[\log A(k,l)|Y(k,l)]\big\}\\
&=\exp\big\{\log((G_{H_1}|Y(k,l)|)^{p(k,l)})+\log((G_{min}|Y(k,l)|)^{1-p(k,l)})\big\}\\
&\triangleq G(k,l)|Y(k,l)|
\end{aligned}
$$
The derivation above leans on the log/exp trick to arrive at the new formula. Compared with the old one, it carries an extra factor: the speech presence probability $p(k,l)$. So how is this probability obtained?
Speech Presence Probability
The speech presence probability comes from a likelihood-ratio statistical approach. Define the observed signal in the frequency domain under the two hypotheses:
$$
\begin{aligned}
&H_0(k,l):Y(k,l)=D(k,l)\\
&H_1(k,l):Y(k,l)=X(k,l)+D(k,l)
\end{aligned}
$$
Let $\sigma^2_d(k,l)=E[|D(k,l)|^2]$ denote the noise variance. The estimate of the noise power $\sigma^2_d(k,l)$ that is optimal in the minimum mean-square-error sense is then:
$$
\begin{aligned}
\hat\sigma^2_d(k,l)&=E\{\sigma^2_d(k,l)|Y(k,l)\}\\
&=E\{\sigma^2_d(k,l)|H_0(k,l)\}P(H_0(k,l)|Y(k,l))+E\{\sigma^2_d(k,l)|H_1(k,l)\}P(H_1(k,l)|Y(k,l))
\end{aligned}
$$
where $P(H_0(k,l)|Y(k,l))$ can be written via Bayes' rule for conditional probability as:
$$
\begin{aligned}
P(H_0(k,l)|Y(k,l))&=\frac{P(Y(k,l)|H_0(k,l))P(H_0(k,l))}{P(Y(k,l))}\\
&=\frac{P(Y(k,l)|H_0(k,l))P(H_0(k,l))}{P(Y(k,l)|H_0(k,l))P(H_0(k,l))+P(Y(k,l)|H_1(k,l))P(H_1(k,l))}\\
&=\frac{1}{1+r\Lambda(k,l)}
\end{aligned}
$$
Here
$$
r\triangleq\frac{P(H_1(k,l))}{P(H_0(k,l))},\quad\Lambda(k,l)\triangleq\frac{P(Y(k,l)|H_1(k,l))}{P(Y(k,l)|H_0(k,l))}
$$
Likewise:
$$
P(H_1(k,l)|Y(k,l))=\frac{r\Lambda(k,l)}{1+r\Lambda(k,l)}
$$
Substituting back into the original formula:
$$
\begin{aligned}
\hat\sigma^2_d(k,l)&=E\{\sigma^2_d(k,l)|Y(k,l)\}\\
&=\frac{1}{1+r\Lambda(k,l)}E\{\sigma^2_d(k,l)|H_0(k,l)\}+\frac{r\Lambda(k,l)}{1+r\Lambda(k,l)}E\{\sigma^2_d(k,l)|H_1(k,l)\}\\
&\approx\frac{1}{1+r\Lambda(k,l)}|Y(k,l)|^2+\frac{r\Lambda(k,l)}{1+r\Lambda(k,l)}\sigma^2_d(k,l-1)
\end{aligned}
$$
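The posterior-weighted noise update can be sketched directly from the last line above; function and argument names here are my own, for one time-frequency bin:

```python
def noise_psd_update(y_power, noise_prev, r, lam):
    # sigma_hat_d^2 = P(H0|Y)*|Y|^2 + P(H1|Y)*sigma_d^2(l-1), with
    # P(H0|Y) = 1/(1 + r*Lambda) and P(H1|Y) = r*Lambda/(1 + r*Lambda):
    # trust the raw periodogram when speech is unlikely, otherwise hold
    # the previous noise estimate.
    p_h0 = 1.0 / (1.0 + r * lam)
    return p_h0 * y_power + (1.0 - p_h0) * noise_prev
```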
The last term in the final line follows from observation, which yields a time-smoothed noise-estimation formula. The speech presence probability is
$$
p(k,l)=\frac{r\Lambda(k,l)}{1+r\Lambda(k,l)}
$$
Assuming the noise spectrum is zero-mean complex Gaussian with variance $\lambda_d(k)$, its probability density is
$$
p(Y(k,l)|H_0(k,l))=\frac{1}{\pi\lambda_d(k)}\exp\Big\{-\frac{|Y(k,l)|^2}{\lambda_d(k)}\Big\}
$$
Similarly, assuming speech and noise are uncorrelated, the noisy observation is zero-mean complex Gaussian with variance $\lambda_x(k)+\lambda_d(k)$, giving the density
$$
p(Y(k,l)|H_1(k,l))=\frac{1}{\pi[\lambda_x(k)+\lambda_d(k)]}\exp\Big\{-\frac{|Y(k,l)|^2}{\lambda_x(k)+\lambda_d(k)}\Big\}
$$
$$
\begin{aligned}
\Lambda(k,l)&\triangleq\frac{p(Y(k,l)|H_1(k,l))}{p(Y(k,l)|H_0(k,l))}\\
&=\frac{\frac{1}{\pi[\lambda_x(k)+\lambda_d(k)]}\exp\{-\frac{|Y(k,l)|^2}{\lambda_x(k)+\lambda_d(k)}\}}{\frac{1}{\pi\lambda_d(k)}\exp\{-\frac{|Y(k,l)|^2}{\lambda_d(k)}\}}\\
&=\frac{\lambda_d(k)}{\lambda_x(k)+\lambda_d(k)}\exp\Big\{\frac{|Y(k,l)|^2}{\lambda_d(k)}-\frac{|Y(k,l)|^2}{\lambda_x(k)+\lambda_d(k)}\Big\}\\
&=\frac{1}{1+\xi_k(l)}\exp\Big\{\frac{\lambda_x(k)|Y(k,l)|^2}{\lambda_d(k)(\lambda_x(k)+\lambda_d(k))}\Big\}\\
&=\frac{1}{1+\xi_k(l)}\exp\Big\{\frac{\xi_k(l)}{1+\xi_k(l)}\gamma_k(l)\Big\}
\end{aligned}
$$
This gives
$$
\begin{aligned}
p(k,l)&=\frac{r\frac{1}{1+\xi_k(l)}\exp\{\frac{\xi_k(l)}{1+\xi_k(l)}\gamma_k(l)\}}{1+r\frac{1}{1+\xi_k(l)}\exp\{\frac{\xi_k(l)}{1+\xi_k(l)}\gamma_k(l)\}}\\
&=\Big\{1+\frac{q(k,l)}{1-q(k,l)}(1+\xi(k,l))\exp(-v(k,l))\Big\}^{-1}
\end{aligned}
$$
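The closed form for $p(k,l)$ computes directly from the a-priori SNR $\xi$, the a-posteriori SNR $\gamma$, and the speech absence prior $q$; this sketch uses scalar inputs, and the default $q=0.5$ is purely illustrative:

```python
import math

def speech_presence_prob(xi, gamma, q=0.5):
    # p(k,l) = {1 + q/(1-q) * (1+xi) * exp(-v)}^{-1},
    # with v = gamma*xi/(1+xi).
    v = gamma * xi / (1.0 + xi)
    return 1.0 / (1.0 + q / (1.0 - q) * (1.0 + xi) * math.exp(-v))
```

As expected, the probability grows with both SNR measures and stays in $(0,1)$.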
where $q(k,l)\triangleq P(H_0(k,l))$ is defined as the a-priori probability of speech absence, $\xi(k,l)\triangleq\lambda_x(k,l)/\lambda_d(k,l)$ is the a-priori SNR, $\gamma(k,l)\triangleq|Y(k,l)|^2/\lambda_d(k,l)$ the a-posteriori SNR, and $v(k,l)\triangleq\gamma(k,l)\xi(k,l)/(1+\xi(k,l))$. As is well known, beyond an effective gain formula, the hardest part of any noise-suppression algorithm is estimating and tracking the noise. The formulas above constitute the so-called joint estimator of the a-priori SNR and the speech absence probability (SAP). The a-priori SNR estimator the author derives is:
$$
\hat{\xi}(k,l)=\alpha G_{H_1}^2(k,l-1)\gamma(k,l-1)+(1-\alpha)\max\{\gamma(k,l)-1,0\}
$$
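This decision-directed estimator translates almost verbatim into code; the default $\alpha=0.92$ is a value commonly quoted for this estimator, not taken from this note:

```python
def decision_directed_xi(g_h1_prev, gamma_prev, gamma_cur, alpha=0.92):
    # xi_hat(l) = alpha * G_H1^2(l-1) * gamma(l-1)
    #           + (1-alpha) * max(gamma(l) - 1, 0)
    # The first term re-uses the previous frame's gain (the "decision"),
    # the second is the ML estimate from the current a-posteriori SNR.
    return alpha * g_h1_prev ** 2 * gamma_prev \
        + (1.0 - alpha) * max(gamma_cur - 1.0, 0.0)
```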
The SAP estimator is also given in the paper; I'll be lazy and not retype it here.
What MCRA Is About
Recursive Averaging
Cohen later published two more papers, "Noise Estimation by Minima Controlled Recursive Averaging for Robust Speech Enhancement" and "Noise Spectrum Estimation in Adverse Environments: Improved Minima Controlled Recursive Averaging", proposing and then refining the MCRA algorithm, which reduces noise-estimation error at low SNR. To review the algorithm, first understand that it organically combines two noise-estimation ideas: minimum-statistics tracking and recursive averaging. Continuing with all the assumptions above, a new formula is introduced:
$$
Y(k,l)=\sum_{n=0}^{N-1}y(n+lM)h(n)e^{-j\frac{2\pi}{N}nk}
$$
The conditional probability densities of the observed signal under the hypotheses $H_0(k,l)$ and $H_1(k,l)$ were given earlier; with the additive-noise assumption, the two hypotheses for the observed frequency-domain signal are:
$$
\begin{aligned}
&H_0(k,l):Y(k,l)=D(k,l)\\
&H_1(k,l):Y(k,l)=X(k,l)+D(k,l)
\end{aligned}
$$
Let $\lambda_d(k,l)=E[|D(k,l)|^2]$ denote the noise variance. A very common time-recursive smoothing can then track the evolving noise, under a new pair of conditions:
$$
\begin{aligned}
&H^\prime_0(k,l):\hat\lambda_d(k,l+1)=\alpha_d\hat\lambda_d(k,l)+(1-\alpha_d)|Y(k,l)|^2\\
&H^\prime_1(k,l):\hat\lambda_d(k,l+1)=\hat\lambda_d(k,l)
\end{aligned}
$$
If $p^\prime(k,l)\triangleq P(H^\prime_1(k,l)|Y(k,l))$ is the conditional probability of speech presence given the current $Y(k,l)$, combining it with the hypotheses above yields a presence-probability-weighted smoothing formula:
$$
\begin{aligned}
\hat\lambda_d(k,l+1)&=\hat\lambda_d(k,l)p^\prime(k,l)+\big(\alpha_d\hat\lambda_d(k,l)+(1-\alpha_d)|Y(k,l)|^2\big)(1-p^\prime(k,l))\\
&=\hat\alpha_d\hat\lambda_d(k,l)+(1-\hat\alpha_d)|Y(k,l)|^2
\end{aligned}
$$
The new smoothing factor incorporates the speech presence probability and varies across both time and frequency:
$$
\hat\alpha_d(k,l)=\alpha_d+(1-\alpha_d)p^\prime(k,l)
$$
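The probability-driven recursive average for one bin can be sketched as follows; the names and the default $\alpha_d$ are my own choices:

```python
def mcra_noise_update(noise_prev, y_power, p_speech, alpha_d=0.95):
    # alpha_hat = alpha_d + (1-alpha_d)*p': hold the noise estimate when
    # speech is likely (p'→1), average in |Y|^2 when it is not (p'→0).
    alpha_hat = alpha_d + (1.0 - alpha_d) * p_speech
    return alpha_hat * noise_prev + (1.0 - alpha_hat) * y_power
```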
The expression above is a textbook first-order time recursion driven by the speech presence probability. That covers the Recursive Averaging half of MCRA — so where does "minima controlled" come in? The answer lies in the computation of the speech presence probability. OMLSA obtains it from a likelihood-ratio test; MCRA obtains it from the ratio of the noisy-speech power spectrum to its local minimum, tracked by a local-minimum statistic.
Minima-Controlled Speech Presence Probability Estimation
Within a subband of a given frame, the speech presence probability is decided by the ratio of the local noisy-speech energy to the minimum energy within a time window. This local energy is obtained by smoothing the squared STFT magnitude over a time-frequency window; concretely, first define a window function to gather energy around the current frequency:
$$
S_f(k,l)=\sum_{i=-w}^{w}b(i)|Y(k-i,l)|^2
$$
while smoothing along time uses a first-order recursion:
$$
S(k,l)=\alpha_sS(k,l-1)+(1-\alpha_s)S_f(k,l)
$$
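The two smoothing steps can be sketched for a single frame; the Hann-like window `b` is an illustrative choice — the papers only require a normalized window $b(i)$:

```python
def local_energy(y_power, b=(0.25, 0.5, 0.25)):
    # S_f(k,l) = sum_i b(i) |Y(k-i,l)|^2: frequency smoothing of the
    # squared magnitudes; out-of-range bins contribute nothing.
    w = len(b) // 2
    n = len(y_power)
    out = []
    for k in range(n):
        acc = 0.0
        for i in range(-w, w + 1):
            if 0 <= k - i < n:
                acc += b[i + w] * y_power[k - i]
        out.append(acc)
    return out

def smooth_in_time(s_prev, s_f, alpha_s=0.8):
    # S(k,l) = alpha_s*S(k,l-1) + (1-alpha_s)*S_f(k,l), bin by bin.
    return [alpha_s * p + (1.0 - alpha_s) * c for p, c in zip(s_prev, s_f)]
```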
The paper also invokes a Bayesian decision rule to prove this ratio is monotone, which seems superfluous; the basic estimation and tracking method is well summarized in [4]. First, the minimum-search pseudocode:

```
if mod(l, L) == 0
    S_min(k,l) = min(S_tmp(k,l-1), S(k,l))
    S_tmp(k,l) = S(k,l)
else
    S_min(k,l) = min(S_min(k,l-1), S(k,l))
    S_tmp(k,l) = min(S_tmp(k,l-1), S(k,l))
end
```
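A direct Python translation of this pseudocode for one frequency bin over a frame sequence; the window length `L` is a free parameter, commonly chosen so the window spans roughly 0.8–1.4 s of frames:

```python
def mcra_min_track(S, L=125):
    # Windowed minimum tracking: S_min follows the running minimum, and
    # every L frames it restarts from the secondary minimum S_tmp, so
    # stale minima eventually expire.
    s_min = s_tmp = float("inf")
    out = []
    for l, s in enumerate(S):
        if l % L == 0 and l > 0:
            s_min = min(s_tmp, s)
            s_tmp = s
        else:
            s_min = min(s_min, s)
            s_tmp = min(s_tmp, s)
        out.append(s_min)
    return out
```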
There are further optimizations of this algorithm; the window is typically around 0.8–1.4 s.
With the minimum-search algorithm in place, the overall MCRA flow is roughly:
- First compute $S_f(k,l)$ and the smoothed $S(k,l)$.
- Run the minimum-search algorithm to obtain $S_{min}(k,l)$.
- Compute $S_r(k,l)=\frac{S(k,l)}{S_{min}(k,l)}$ and compare it against a threshold $\delta$:
$$
\begin{aligned}
&if\ \ S_r(k,l)>\delta:\ \ p(k,l)=1\ \text{(speech present)}\\
&else:\ \ p(k,l)=0\ \text{(speech absent)}
\end{aligned}
$$
- Build another first-order recursion $\hat p(k,l)=\alpha_p\hat p(k,l-1)+(1-\alpha_p)p(k,l)$; use the resulting estimate as $p^\prime(k,l)$ in $\hat\alpha_d(k,l)=\alpha_d+(1-\alpha_d)p^\prime(k,l)$ to obtain $\hat\alpha_d(k,l)$.
- Use $\hat\alpha_d(k,l)$ to compute the new noise estimate $\hat\lambda_d(k,l+1)=\hat\alpha_d\hat\lambda_d(k,l)+(1-\hat\alpha_d)|Y(k,l)|^2$.
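The hard decision and the $\hat p$ recursion in the middle steps above can be sketched as (threshold and smoothing constant are illustrative defaults):

```python
def mcra_presence_prob(ratios, delta=5.0, alpha_p=0.2):
    # Hard indicator I(l) = [S/S_min > delta], then first-order smoothing
    # p_hat(l) = alpha_p*p_hat(l-1) + (1-alpha_p)*I(l), per bin.
    p_hat, out = 0.0, []
    for s_r in ratios:
        indicator = 1.0 if s_r > delta else 0.0
        p_hat = alpha_p * p_hat + (1.0 - alpha_p) * indicator
        out.append(p_hat)
    return out
```

Feeding these smoothed probabilities into `mcra_noise_update`-style recursion closes the loop of the flow above.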
From MCRA to IMCRA
MCRA-2
This is Loizou's improvement to MCRA: it mainly replaces the minimum-search algorithm, and additionally makes the threshold $\delta$ frequency-dependent. The minimum search uses continuous spectral-minimum tracking, with pseudocode:
$$
\begin{aligned}
&if\ S_{min}(k,l-1)<S(k,l)\\
&\ \ \ \ S_{min}(k,l)=\gamma S_{min}(k,l-1)+\frac{1-\gamma}{1-\beta}\big(S(k,l)-\beta S(k,l-1)\big)\\
&else\\
&\ \ \ \ S_{min}(k,l)=S(k,l)\\
&end
\end{aligned}
$$
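The continuous minimum tracker translates to Python as follows (scalar, one bin; $\beta$ and $\gamma$ default to the empirical values the text quotes):

```python
def continuous_min_track(S, beta=0.96, gamma=0.998):
    # Continuous spectral minimum tracking: when energy rises, let the
    # minimum drift upward slowly; when it falls, snap to the new value.
    s_min = S[0]
    s_prev = S[0]
    out = [s_min]
    for s in S[1:]:
        if s_min < s:
            s_min = gamma * s_min + (1.0 - gamma) / (1.0 - beta) * (s - beta * s_prev)
        else:
            s_min = s
        out.append(s_min)
        s_prev = s
    return out
```

Unlike the windowed search, this tracker needs no buffer of past frames, at the cost of a slower response to rising noise floors.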
Here a 0.7–0.9 range is suggested for $\alpha_d$, with $\beta=0.96,\ \gamma=0.998$ as empirical values. The frequency-dependent adjustment of $\delta$ is:
$$
\delta(k)=\begin{cases}2,&1\leq k\leq LF\\2,&LF\leq k\leq MF\\5,&MF\leq k\leq F_s/2\end{cases}
$$
where $LF=1\,\mathrm{kHz}$, $MF=3\,\mathrm{kHz}$, and $F_s$ is the Nyquist frequency; the exact values can also be tuned to the noise of the target application.
A noise-tracking plot over one frequency bin (figure omitted here) shows that in practice the two differ little — each has its strengths.
IMCRA
From here everything falls into place, so I'll be lazy and close with one comparison (figure omitted) of the three tracking methods combined with OMLSA. Judging by the speech presence probability, IMCRA is indeed excellent, but in the final result MCRA actually seems more stable.
References:
1. Optimal Speech Enhancement Under Signal Presence Uncertainty Using Log-Spectral Amplitude Estimator
2. Noise Estimation by Minima Controlled Recursive Averaging for Robust Speech Enhancement
3. A detailed explanation of the IMCRA+OMLSA speech noise-reduction algorithm
4. Speech Enhancement: Theory and Practice, P. Loizou (Chinese translation by Gao Yi et al.)