Post revision history

Date | Version | Changes |
---|---|---|
2022-04-12 15:20 | V1.0 | Created this post |
2022-04-20 13:20 | V1.1 | Completed and published the first draft, covering the background overview and the derivations for LDPC decoding theory |
2022-04-21 09:08 | V1.2 | Added material on initialization over the AWGN channel (Section 2.4.2) |
2022-05-01 14:25 | V1.3 | Added Sections 3.3 and 3.4 on the normalized min-sum and layered min-sum decoding algorithms |
2022-05-20 11:26 | V1.4 | Added Section 2.3.5 on the improved min-sum algorithms, and added Monte Carlo MATLAB code comparing the convergence speed of LBP and NMSA |
1. Background Overview
Low-density parity-check (LDPC) codes, with error-correcting performance approaching the Shannon limit, have attracted wide attention from channel-coding researchers and have become the channel coding scheme of choice in wireless communication standards such as DVB-S2, IEEE 802.16e, CCSDS and 5G. In 2001, S. Chung et al. showed that a rate-1/2 irregular LDPC code of block length $10^7$, decoded iteratively with the belief propagation algorithm over the AWGN channel, performs within 0.0045 dB of the Shannon limit at an error probability of $10^{-6}$.
The most fascinating part of LDPC codes is their decoding, and this post focuses on iterative decoding based on belief propagation (BP). When I first studied the topic, I found that the treatment of LDPC decoding theory in papers and books often leaves the derivations unclear, so a systematic understanding requires reading many books and papers; likewise, the available MATLAB implementations rarely carry detailed comments, which makes them hard to follow and raises the entry barrier. This post therefore presents the BP-based iterative decoding algorithm and its improved variants, derives the relevant formulas in detail together with my own interpretation, and provides MATLAB implementations. Given the length of the post, mistakes are hard to avoid; corrections from readers are very welcome, and I hope the post proves helpful and inspiring.
2. LDPC Decoding Theory
2.1 Representations of LDPC Codes
2.1.1 Matrix Representation
An LDPC code $v$ is an $(n,k)$ linear block code with code length $n$ and information length $k$. It is uniquely determined by its parity-check matrix $H$, in which the number of 1s is far smaller than the number of 0s, i.e. $H$ is sparse. The dimension of $H$ is $m\times n$: each row corresponds to one check equation (also called a check node), and each column corresponds to one code bit (also called a variable node). The number of nonzero entries in a row is called the row weight, and the number of nonzero entries in a column is called the column weight.
Below is a $5\times 10$ parity-check matrix and its corresponding check equations:
$$H=\left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ \end{matrix} \right]\to \left\{ \begin{matrix} v_0+v_5+v_6+v_9=0 \\ v_1+v_2+v_5+v_8=0 \\ v_0+v_4+v_8+v_9=0 \\ v_2+v_3+v_4+v_7=0 \\ v_1+v_3+v_6+v_7=0 \\ \end{matrix} \right.$$
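As a quick numerical companion to the matrix above, here is a small MATLAB sketch (using only the $H$ given in this section) that builds the matrix and checks whether a word satisfies all five check equations, i.e. whether its syndrome is the all-zero vector modulo 2:

```matlab
% 5 x 10 parity-check matrix from the example above
H = [1 0 0 0 0 1 1 0 0 1;
     0 1 1 0 0 1 0 0 1 0;
     1 0 0 0 1 0 0 0 1 1;
     0 0 1 1 1 0 0 1 0 0;
     0 1 0 1 0 0 1 1 0 0];

v_good = zeros(1,10);            % the all-zero word is always a codeword
v_bad  = [1 0 0 0 0 0 0 0 0 0];  % flipping one bit violates some check equations
mod(H*v_good', 2)                % all zeros: every check equation is satisfied
mod(H*v_bad', 2)                 % nonzero entries mark the violated check equations
```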
2.1.2 Tanner Graph Representation
Besides the parity-check matrix, an LDPC code can also be described by a bipartite graph model, of which the Tanner graph is the most convenient form: it visualizes the encoding and decoding behaviour of the code. Whenever an entry of $H$ is nonzero, the Tanner graph contains an edge joining the corresponding check node and variable node. The row and column weights of the parity-check matrix equal the degrees of the corresponding nodes, and the Tanner graph is in one-to-one correspondence with the parity-check matrix.
The figure below shows the Tanner graph corresponding to the $5\times 10$ parity-check matrix above:
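To complement the Tanner-graph picture, the sketch below derives the graph's edge structure directly from $H$ (continuing with the H defined in the previous snippet): for each check node it prints the indices of the variable nodes it is connected to, which is exactly the adjacency that the Tanner graph encodes.

```matlab
% Tanner graph adjacency derived from H: edge (c_j, v_i) exists wherever H(j,i) = 1
for j = 1:size(H,1)
    vars = find(H(j,:)) - 1;                                   % 0-based indices to match v_0..v_9
    fprintf('c_%d connects to variables %s\n', j-1, mat2str(vars));
end
```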
2.2 Notation

Symbol | Meaning |
---|---|
$v_i$ | the $i$-th variable node |
$c_j$ | the $j$-th check node |
$r_{ji}^{(l)}(b)$ | extrinsic message passed from check node $j$ to variable node $i$ in the $l$-th iteration, $b=0,1$ |
$q_{ij}^{(l)}(b)$ | extrinsic message passed from variable node $i$ to check node $j$ in the $l$-th iteration, $b=0,1$ |
$C(i)$ | set of all check nodes connected to variable node $i$: $C(i)=\{j:h_{ji}=1\}$ |
$V(j)$ | set of all variable nodes connected to check node $j$: $V(j)=\{i:h_{ji}=1\}$ |
$C(i)\backslash j$ | check nodes connected to variable node $i$ other than check node $j$: $C(i)\backslash j=\{j':h_{j'i}=1,\ j'\ne j\}$ |
$V(j)\backslash i$ | variable nodes connected to check node $j$ other than variable node $i$: $V(j)\backslash i=\{i':h_{ji'}=1,\ i'\ne i\}$ |
$P_i(b)$ | posterior probability that the transmitted code bit $c_i=b$ given the received value $y_i$, $b=0,1$ |
$q_i^{(l)}(b)$ | posterior probability information of the $i$-th variable node in the $l$-th iteration, $b=0,1$ |

Different references define intrinsic and extrinsic information differently: some call the information received from the channel the external messages and the information exchanged between nodes the internal messages, while others call the channel information the posterior probability and the node-to-node information the extrinsic information. This post follows the latter convention.
2.3 LDPC Decoding Algorithms
LDPC decoding methods fall into two broad classes: hard-decision decoding and soft-decision decoding. Hard-decision decoding has a small computational load and is quite practical, whereas soft-decision decoding exploits posterior probability information and iterative processing, which is what allows LDPC codes to approach the Shannon limit. This post focuses on soft-decision decoding and its improved variants.
2.3.1 Lemma 1 and Its Proof
Lemma 1 appears throughout the LDPC decoding literature and underlies the concrete decoding algorithms, yet few references actually prove it. The proof given here mainly follows the post by blogger stand_by_me123.1
Lemma 1: Let $a=[a_1,a_2,\ldots,a_m]$ be a sequence of $m$ mutually independent binary digits in which the $h$-th bit equals 1 with probability $P(a_h=1)=p_h$. Then the probability that $a$ contains an even number of 1s is
$$\frac{1}{2}+\frac{1}{2}\prod_{h=1}^{m}(1-2p_h),$$
and the probability that it contains an odd number of 1s is
$$\frac{1}{2}-\frac{1}{2}\prod_{h=1}^{m}(1-2p_h).$$
Proof of Lemma 1 (generating function plus induction):
First construct the degree-$m$ polynomial in $t$
$$f(t)=\prod_{h=1}^{m}(1-p_h+p_h t),$$
and show by induction that the coefficient of $t^i$ is the probability that the sequence contains exactly $i$ ones.
(1) For $m=1$, $f(t)=1-p_1+p_1 t$; clearly the coefficients of $t^1$ and $t^0$ are the probabilities of containing one 1 and zero 1s, respectively.
(2) Assume that for $m=k$ the coefficient $B_h$ of $t^h$ in the degree-$k$ polynomial
$$f(t)=\prod_{h=1}^{k}(1-p_h+p_h t)=\sum_{h=0}^{k}B_h t^h$$
is the probability that the sequence contains exactly $h$ ones.
(3) For $m=k+1$,
$$f(t)=\prod_{h=1}^{k+1}(1-p_h+p_h t)=(1-p_{k+1}+p_{k+1}t)\prod_{h=1}^{k}(1-p_h+p_h t)=(1-p_{k+1}+p_{k+1}t)\sum_{h=0}^{k}B_h t^h,$$
so the coefficient of $t^i$ is $(1-p_{k+1})B_i+p_{k+1}B_{i-1}$, which is exactly the probability that the length-$(k+1)$ sequence contains $i$ ones. This completes the induction.
Now consider the companion function
$$g(t)=\prod_{h=1}^{m}(1-p_h-p_h t),$$
which differs from $f(t)$ only in that the coefficients of the odd powers of $t$ change sign. Forming $f(t)+g(t)$ therefore cancels the odd-power terms, and evaluating at $t=1$ gives twice the sum of the even-power coefficients of $f(t)$, i.e. twice the probability of an even number of 1s. Since $f(1)=1$ and $g(1)=\prod_{h=1}^{m}(1-2p_h)$, the probability that the binary sequence $a$ contains an even number of 1s is
$$p_{even}=\frac{f(1)+g(1)}{2}=\frac{1+\prod\limits_{h=1}^{m}(1-2p_h)}{2},$$
and the probability of an odd number of 1s is
$$p_{odd}=1-p_{even}=\frac{1-\prod\limits_{h=1}^{m}(1-2p_h)}{2}.$$
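As a sanity check on Lemma 1, the sketch below (with a made-up $m$ and made-up probabilities $p_h$, chosen purely for illustration) enumerates all $2^m$ binary sequences, sums the probability of those containing an even number of 1s, and compares the result with the closed-form expression:

```matlab
% Brute-force verification of Lemma 1 on a small example
m = 4;
p = [0.1 0.3 0.55 0.8];                  % p(h) = Pr(a_h = 1), arbitrary test values

p_even_enum = 0;
for x = 0:2^m-1
    a  = bitget(x, 1:m);                 % one binary sequence a_1..a_m
    pr = prod(p.^a .* (1-p).^(1-a));     % its probability (bits are independent)
    if mod(sum(a), 2) == 0
        p_even_enum = p_even_enum + pr;
    end
end
p_even_formula = 1/2 + 1/2*prod(1 - 2*p);
[p_even_enum, p_even_formula]            % the two values agree
```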
2.3.2 Sum-Product Decoding (Probability-Domain BP)
(1) Compute the initial posterior probabilities that the transmitted bit $v_i$ equals 1 or 0, and set the initial variable-to-check messages to these probabilities:
$$\left\{ \begin{matrix} q_{ij}^{(0)}(1)={{P}_{i}}(1)=p({{v}_{i}}=1\mid {{y}_{i}}) \\ q_{ij}^{(0)}(0)={{P}_{i}}(0)=p({{v}_{i}}=0\mid {{y}_{i}}) \\ \end{matrix} \right.$$
(2) Compute the extrinsic messages passed from the check nodes to the variable nodes:
$$\left\{ \begin{matrix} r_{ji}^{(l)}(0)=\frac{1}{2}+\frac{1}{2}\prod\limits_{i'\in V(j)\backslash i}{\bigl(1-2q_{i'j}^{(l-1)}(1)\bigr)} \\ r_{ji}^{(l)}(1)=1-r_{ji}^{(l)}(0)=\frac{1}{2}-\frac{1}{2}\prod\limits_{i'\in V(j)\backslash i}{\bigl(1-2q_{i'j}^{(l-1)}(1)\bigr)} \\ \end{matrix} \right.$$
Interpretation: first consider what the check-to-variable message means. For check node $c_j$ (i.e. one check equation), it is the probability that the $j$-th check equation is satisfied, given the value of the connected variable node $v_i$ and the probabilities that the other connected variable nodes equal 0 or 1.
If $v_i=0$, the $j$-th check equation is satisfied when the other variable nodes contain an even number of 1s (check equations are evaluated modulo 2).
If $v_i=1$, the equation is satisfied when the other variable nodes contain an odd number of 1s.
The probability that a sequence contains an even (or odd) number of 1s is given by Lemma 1, which yields the update above; a small numerical sketch follows.
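A minimal sketch of this check-node update for a single edge, assuming q1 holds the incoming probabilities $q_{i'j}^{(l-1)}(1)$ from the other variable nodes in $V(j)\backslash i$ (the values are made up for illustration):

```matlab
% Check-to-variable message for one edge, computed via Lemma 1
q1 = [0.2 0.7 0.45];              % Pr(v_{i'} = 1) for the other variables on this check
r0 = 1/2 + 1/2*prod(1 - 2*q1);    % Pr(check satisfied | v_i = 0): even number of 1s among the others
r1 = 1 - r0;                      % Pr(check satisfied | v_i = 1): odd number of 1s
```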
(3) Compute the extrinsic messages passed from the variable nodes to the check nodes:
$$\left\{ \begin{matrix} q_{ij}^{(l)}(0)={{K}_{ij}}\,q_{ij}^{(0)}(0)\prod\limits_{j'\in C(i)\backslash j}{r_{j'i}^{(l)}}(0) \\ q_{ij}^{(l)}(1)={{K}_{ij}}\,q_{ij}^{(0)}(1)\prod\limits_{j'\in C(i)\backslash j}{r_{j'i}^{(l)}}(1) \\ \end{matrix} \right.$$
where the constant $K_{ij}$ normalizes the two values so that $q_{ij}^{(l)}(0)+q_{ij}^{(l)}(1)=1$.
Interpretation: the variable-to-check message is the probability that $v_i$ equals 1 (or 0), given the value seen by the connected check node $c_j$ (i.e. whether the $j$-th check equation evaluates to 0 or 1) and the probabilities that the check equations of the other check nodes connected to $v_i$ are satisfied.
The probability that $v_i=1$ combines the initial probability of being 1 with the probabilities that, with $v_i=1$, the other check equations are satisfied; multiplying these factors gives the message for $v_i=1$, and the case $v_i=0$ is analogous.
(4) Decoding decision: compute the posterior probabilities of the variable nodes after the iteration (the product now runs over all of $C(i)$):
$$\left\{ \begin{matrix} q_{i}^{(l)}(0)={{K}_{i}}\,{{P}_{i}}(0)\prod\limits_{j'\in C(i)}{r_{j'i}^{(l)}}(0) \\ q_{i}^{(l)}(1)={{K}_{i}}\,{{P}_{i}}(1)\prod\limits_{j'\in C(i)}{r_{j'i}^{(l)}}(1) \\ \end{matrix} \right.$$
If the probability of 0 is larger, decide $\hat v_i=0$; otherwise decide $\hat v_i=1$.
2.3.3 Log-Domain BP Decoding (LLR BP)
The probability-domain BP algorithm involves a large number of multiplications, which makes hardware implementation computationally expensive and resource hungry. Expressing the probabilities as log-likelihood ratios gives the log-domain BP algorithm, in which most of the multiplications become additions.
(1) Initial channel message:
$$L({{P}_{i}})=\ln \frac{{{P}_{i}}(0)}{{{P}_{i}}(1)}=\ln \frac{p({{v}_{i}}=0\mid {{y}_{i}})}{p({{v}_{i}}=1\mid {{y}_{i}})}$$
(2) Extrinsic message from check node to variable node (the iteration index $l$ is omitted from here on for readability):
$$r_{ji}=\ln \frac{r_{ji}(0)}{r_{ji}(1)}=\ln \frac{1+\prod\limits_{i'\in V(j)\backslash i}{\bigl(1-2q_{i'j}(1)\bigr)}}{1-\prod\limits_{i'\in V(j)\backslash i}{\bigl(1-2q_{i'j}(1)\bigr)}}$$
Note the identities
$$\tanh^{-1}(x)=\frac{1}{2}\ln \frac{1+x}{1-x},\qquad \tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}},$$
$$1-2q_{i'j}(1)=q_{i'j}(0)-q_{i'j}(1),\qquad q_{i'j}(0)+q_{i'j}(1)=1,\qquad q_{ij}=\ln \frac{q_{ij}(0)}{q_{ij}(1)},$$
which can be used to simplify $r_{ji}$. The derivation below is worth working through carefully; don't be put off by the formulas.
$$\begin{aligned}
r_{ji}&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\bigl(1-2q_{i'j}(1)\bigr)\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\bigl(q_{i'j}(0)-q_{i'j}(1)\bigr)\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\frac{q_{i'j}(0)-q_{i'j}(1)}{q_{i'j}(0)+q_{i'j}(1)}\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\frac{q_{i'j}(0)/q_{i'j}(1)-1}{q_{i'j}(0)/q_{i'j}(1)+1}\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\frac{e^{q_{i'j}}-1}{e^{q_{i'j}}+1}\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\frac{e^{q_{i'j}/2}\bigl(e^{q_{i'j}/2}-e^{-q_{i'j}/2}\bigr)}{e^{q_{i'j}/2}\bigl(e^{q_{i'j}/2}+e^{-q_{i'j}/2}\bigr)}\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\frac{e^{q_{i'j}/2}-e^{-q_{i'j}/2}}{e^{q_{i'j}/2}+e^{-q_{i'j}/2}}\Bigr)\\
&=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\frac{q_{i'j}}{2}\Bigr)
\end{aligned}$$
Hence the check-to-variable message simplifies to
$$r_{ji}=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\frac{q_{i'j}}{2}\Bigr)$$
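A quick numerical cross-check of this identity (the incoming probabilities below are made-up values): computing $r_{ji}$ for one edge from the probability-domain update and from the tanh form gives the same LLR.

```matlab
% Cross-check: probability-domain vs. LLR-domain check-node update for one edge
q1   = [0.2 0.7 0.45];                  % Pr(v_{i'} = 1) for i' in V(j)\i (illustrative)
q0   = 1 - q1;
qLLR = log(q0./q1);                     % the same messages in LLR form

r0     = 1/2 + 1/2*prod(1 - 2*q1);      % probability-domain update (Lemma 1)
r_prob = log(r0/(1 - r0))               % its LLR
r_llr  = 2*atanh(prod(tanh(qLLR/2)))    % tanh-rule update; matches r_prob up to rounding
```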
(3) Extrinsic message from variable node to check node:
$$q_{ij}=\ln \frac{{{P}_{i}}(0)\prod\limits_{j'\in C(i)\backslash j}{r_{j'i}}(0)}{{{P}_{i}}(1)\prod\limits_{j'\in C(i)\backslash j}{r_{j'i}}(1)}=L({{P}_{i}})+\sum\limits_{j'\in C(i)\backslash j}{r_{j'i}}$$
(4) Decoding decision: after the iteration, the posterior LLR of each variable node becomes (the sum now runs over all of $C(i)$)
$$q_{i}=\ln \frac{{{P}_{i}}(0)\prod\limits_{j'\in C(i)}{r_{j'i}}(0)}{{{P}_{i}}(1)\prod\limits_{j'\in C(i)}{r_{j'i}}(1)}=L({{P}_{i}})+\sum\limits_{j'\in C(i)}{r_{j'i}}$$
If this posterior LLR is greater than 0, decide $\hat v_i=0$; otherwise decide $\hat v_i=1$. (A positive logarithm means the ratio exceeds 1, i.e. the numerator probability of $v_i=0$ exceeds the denominator probability of $v_i=1$, hence the decision in favour of 0.)
2.3.4 Min-Sum Decoding (Min-Sum BP)
The log-domain BP algorithm greatly reduces the number of multiplications, but it introduces the $\tanh$ function. In hardware, this nonlinear operation consumes significant resources and is usually implemented with lookup tables; since LDPC codes with good performance tend to be long, the computation grows with the code length, and the lookup tables complicate the circuit and add decoding latency and power consumption.
To lower the computational complexity, the min-sum algorithm was proposed as a simplification of BP. It modifies only the check-to-variable update, replacing the costly $\tanh$ and $\tanh^{-1}$ functions with simple sign manipulations and magnitude comparisons, which reduces the complexity dramatically.
Note that $\tanh$ and $\tanh^{-1}$ are odd functions, so
$$\tanh(x)=\operatorname{sgn}(x)\cdot \tanh(|x|),\qquad \tanh^{-1}(x)=\operatorname{sgn}(x)\cdot \tanh^{-1}(|x|).$$
The graph of $\tanh(|x|)$ shows that it is an even function of $x$, takes values in $[0,1)$, and increases monotonically with $|x|$.
Hence the check-to-variable message can be rewritten as
$$r_{ji}=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\frac{q_{i'j}}{2}\Bigr)=2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\Bigl(\frac{q_{i'j}}{2}\Bigr)\cdot\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)=\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\Bigl(\frac{q_{i'j}}{2}\Bigr)\cdot 2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)$$
Moreover,
$$2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)\approx 2\tanh^{-1}\Bigl(\min_{i'\in V(j)\backslash i}\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)=2\tanh^{-1}\Bigl(\tanh\Bigl(\min_{i'\in V(j)\backslash i}\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)\Bigr)=\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|$$
Hence the check-to-variable message finally simplifies to
$$r_{ji}=\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\Bigl(\frac{q_{i'j}}{2}\Bigr)\cdot\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|=\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\bigl(q_{i'j}\bigr)\cdot\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|$$
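A small numeric comparison of the exact tanh rule and the min-sum rule for one outgoing message (the incoming LLRs are made-up values); it also illustrates the overestimation discussed below:

```matlab
% Exact LLR-BP rule vs. min-sum approximation for one check-to-variable message
q = [1.8 -0.6 2.4 0.9];                 % incoming LLRs q_{i'j}, i' in V(j)\i
r_exact  = 2*atanh(prod(tanh(q/2)))     % exact tanh rule
r_minsum = prod(sign(q)) * min(abs(q))  % min-sum: same sign but larger magnitude
```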
In log-domain BP, initializing the variable-node messages requires an estimate of the channel noise variance $\sigma^2$. In the min-sum algorithm, however, the update reduces to sign operations and comparisons of message magnitudes, and the channel parameter is only a common scale factor, so it can be dropped without affecting the decoding process. A practical implementation therefore needs no extra channel-estimation module, which further simplifies the circuit and improves decoding latency.
As the graph indicates, $\tanh(|x|)<1$, so a product of such factors keeps shrinking as more terms are included; the min-sum rule replaces the whole product by its single smallest factor, which is therefore always larger than the product. A correction factor is needed to recover the lost decoding performance, which leads to the normalized min-sum and offset min-sum algorithms: they compensate either by multiplying with a factor smaller than 1 or by subtracting a positive offset.
2.3.5 Improved Min-Sum Decoding (NMSA and OMSA)
The min-sum algorithm replaces the nonlinear operations of LLR BP with sign multiplications and a minimum operation, and the resulting loss of precision degrades decoding performance. Note that
$$2\tanh^{-1}\Bigl(\prod_{i'\in V(j)\backslash i}\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)<2\tanh^{-1}\Bigl(\min_{i'\in V(j)\backslash i}\tanh\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)=2\tanh^{-1}\Bigl(\tanh\Bigl(\min_{i'\in V(j)\backslash i}\Bigl|\frac{q_{i'j}}{2}\Bigr|\Bigr)\Bigr)=\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|,$$
that is, the min-sum check-node update is larger in magnitude than the LLR BP one. To balance this difference between the two algorithms, two improved min-sum variants were proposed: the Normalized Min-Sum Algorithm (NMSA) and the Offset Min-Sum Algorithm (OMSA), which respectively multiply the update by a constant $\alpha$ smaller than 1 or subtract a positive value $\beta$.
NMSA:
$$r_{ji}=\alpha\cdot\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\bigl(q_{i'j}\bigr)\cdot\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|$$
Here $\alpha$ is a multiplicative scaling factor, also called the normalization factor, with $0\le\alpha\le 1$. In practice $\alpha$ is usually fixed to a value between 0.6 and 0.9. NMSA adds only one extra multiplication, so its impact on complexity is small, yet it improves the decoding performance noticeably.
OMSA:
$$r_{ji}=\max\Bigl(\min_{i'\in V(j)\backslash i}\bigl|q_{i'j}\bigr|-\beta,\;0\Bigr)\cdot\prod_{i'\in V(j)\backslash i}\operatorname{sgn}\bigl(q_{i'j}\bigr)$$
Here $\beta$ is the offset factor, with $\beta>0$. As with the normalization factor $\alpha$, the best value of $\beta$ depends on the code length, code rate, SNR and number of iterations, but in practice $\beta$ is usually kept constant.
Although NMSA and OMSA process the messages differently, their core idea is the same: compensating for the overestimated check-node messages. In practice, determining the best correction factors $\alpha$ and $\beta$ by simulation is essential for improving system performance.
3. MATLAB Implementation of LDPC Decoding Algorithms
3.1 Relationship between SNR, $E_b/N_0$ and the Noise Standard Deviation $\sigma$ under BPSK Modulation
(1) SNR in continuous-time and discrete-time systems
Consider a continuous-time Gaussian channel with signal power $P$, noise power spectral density $N_0/2$ (single-sided PSD $N_0$, double-sided PSD $N_0/2$) and bandwidth $2W$. Then
$$\text{Noise Power}=\text{Noise PSD}\times \text{Bandwidth}=N_0W,\qquad \mathrm{SNR}=\frac{P}{N_0W}.$$
For the equivalent discrete-time channel, the energy per symbol is $E_s=PT=P/2W$ (signal power times symbol duration) and the noise variance is $\sigma^2$, so
$$\mathrm{SNR}=\frac{E_s}{\sigma^2}=\frac{P}{2W\sigma^2}.$$
(2) SNR for BPSK
For BPSK, $E_s=\frac{(-1)^2+(+1)^2}{2}=1$, and the Gaussian noise power is $\sigma^2$, so $\mathrm{SNR}=1/\sigma^2$.
After encoding, the energy per information bit is $E_b=E_s/R=nE_s/k$, where $R=k/n$ is the code rate (information bits divided by coded bits). The energy per information bit has to be larger because coding adds redundancy: several coded bits together carry one bit of information.
Hence $E_s=R\cdot E_b$. Since $N_0/2=\sigma^2$,
$$\mathrm{SNR}=\frac{E_s}{\sigma^2}=\frac{RE_b}{N_0/2}=2R\,\frac{E_b}{N_0},$$
so $E_b/N_0=\mathrm{SNR}/(2R)$, and for BPSK modulation $E_b/N_0=1/(2R\sigma^2)$.
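These relations are what the simulation scripts later in this post use to set the noise level; a minimal sketch of the conversion (the values of EbN0_dB and R are examples only):

```matlab
% From Eb/N0 in dB to the AWGN noise standard deviation for rate-R coded BPSK (Es = 1)
EbN0_dB = 2.0;                 % example operating point
R       = 4/5;                 % code rate k/n
EbN0    = 10^(EbN0_dB/10);
sigma2  = 1/(2*R*EbN0);        % sigma^2 = 1 / (2R * Eb/N0)
sigma   = sqrt(sigma2);        % used later as: y = x + sigma*randn(size(x))
```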
3.2 Initialization over the AWGN Channel
This section describes the usual message initialization for decoding BPSK-modulated LDPC codes over the AWGN channel.2
With binary modulation, the code bit $c_i$ is mapped to the symbol $x_i=(-1)^{c_i}$, $i=1,2,\ldots,n$, and the received sample is $y_i=x_i+n_i$, where the $n_i$ are statistically independent, identically distributed Gaussian random variables with double-sided power spectral density $N_0/2$. Conditioned on the transmitted symbol, $y_i$ is therefore Gaussian with mean $x_i$ (i.e. $\pm 1$) and variance $\sigma^2$. When the source bits are equiprobable, $p(x_i=+1)=p(x_i=-1)=\frac{1}{2}$, and the initial messages of probability-domain BP decoding are
$$q_{ij}^{(0)}(1)=p(c_i=1\mid y_i)=p(x_i=-1\mid y_i)=\frac{1}{1+e^{2y_i/\sigma^2}}$$
$$q_{ij}^{(0)}(0)=p(c_i=0\mid y_i)=p(x_i=+1\mid y_i)=\frac{1}{1+e^{-2y_i/\sigma^2}}$$
Proof: For BPSK over the AWGN channel with noise power spectral density $N_0/2=\sigma^2$, the conditional probability density functions are
$$p(y_i\mid x_i=+1)=\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(y_i-1)^2}{2\sigma^2}},\qquad p(y_i\mid x_i=-1)=\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(y_i+1)^2}{2\sigma^2}}.$$
By Bayes' rule,
$$p(x_i=+1\mid y_i)=\frac{p(x_i=+1,y_i)}{p(y_i)}=\frac{p(y_i\mid x_i=+1)p(x_i=+1)}{p(y_i)}=\frac{p(y_i\mid x_i=+1)p(x_i=+1)}{p(y_i\mid x_i=+1)p(x_i=+1)+p(y_i\mid x_i=-1)p(x_i=-1)}.$$
Since $p(x_i=+1)=p(x_i=-1)=\frac{1}{2}$, this gives
$$p(x_i=+1\mid y_i)=\frac{\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y_i-1)^2}{2\sigma^2}}}{\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y_i-1)^2}{2\sigma^2}}+\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y_i+1)^2}{2\sigma^2}}}=\frac{1}{1+e^{-2y_i/\sigma^2}},$$
and $p(x_i=-1\mid y_i)$ follows in the same way.
Accordingly, the initial message of LLR BP decoding is
$$L({{P}_{i}})=\ln \frac{{{P}_{i}}(0)}{{{P}_{i}}(1)}=\ln \frac{p({{v}_{i}}=0\mid {{y}_{i}})}{p({{v}_{i}}=1\mid {{y}_{i}})}=\ln \frac{1+e^{2y_i/\sigma^2}}{1+e^{-2y_i/\sigma^2}}=\ln \frac{e^{y_i/\sigma^2}\bigl(e^{y_i/\sigma^2}+e^{-y_i/\sigma^2}\bigr)}{e^{-y_i/\sigma^2}\bigl(e^{y_i/\sigma^2}+e^{-y_i/\sigma^2}\bigr)}=\ln e^{2y_i/\sigma^2}=\frac{2y_i}{\sigma^2}.$$
For Monte Carlo simulation with unit-energy BPSK symbols (as in Section 3.1), code rate $r$ and matched-filter (optimal) reception, the demodulator-input $E_b/N_0$ and the noise variance are related by
$$\sigma^2=\frac{1}{2r\,(E_b/N_0)}.$$
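A minimal sketch of this initialization as it would be used before calling an LLR-based decoder (y is a made-up received vector; sigma comes from the relation above):

```matlab
% Channel LLR initialization for BPSK over AWGN: L(P_i) = 2*y_i / sigma^2
r     = 4/5;                              % code rate (example)
EbN0  = 10^(2.0/10);                      % Eb/N0 = 2 dB (example)
sigma = sqrt(1/(2*r*EbN0));
y     = [0.9 -1.2 0.3 -0.1];              % received samples (illustrative)
Lch   = 2*y/sigma^2;                      % positive LLR favours bit 0, negative favours bit 1
```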
3.3 MATLAB Implementation of the Normalized Min-Sum Algorithm (NMSA)
The MATLAB code below is based largely on the blog of cea_wind.3
function [ iter,decoderData ] = ldpcdecoderminsum(H,HRowNum,HColNum,receiveSignal,MAX_ITER_NUM,NORM_FACTOR)
%LDPC decoding based on the normalized min-sum algorithm
% H: parity check matrix
% HRowNum,HColNum: index tables generated from the check matrix
% receiveSignal: received soft information from the channel
% MAX_ITER_NUM: maximum number of iterations
% NORM_FACTOR: normalization factor alpha of the NMSA check-node update
% HRowNum,HColNum are generated by the following code:
% [r_mark,c_mark] = find(H~=0);
% HColNum = sum(H);
% HRowNum = cell(1,size(H,1));
% for rowH = 1:size(H,1)
% HRowNum{rowH} = find(r_mark==rowH);
% end
[~,N] = size(H);
vl = receiveSignal;
decoderData = zeros(1,N);
uml = zeros(1,sum(HColNum));
vml = uml;
ColStart = 1;
for L=1:length(HColNum)
vml(ColStart:ColStart+HColNum(L)-1) = vl(L);
ColStart = ColStart+HColNum(L);
end
for iter=1:MAX_ITER_NUM
%check node processing: each outgoing edge gets the sign product of the other
%edges times the smallest magnitude among the other edges, scaled by NORM_FACTOR
for L_r=1:length(HRowNum)
L_col = HRowNum{L_r};
vmltemp = vml(L_col);
vmlMark = ones(size(vmltemp));
vmlMark(vmltemp<0) = -1;
vmlMark = prod(vmlMark);
minvml = sort(abs(vmltemp));
for L_col_i = 1:length(L_col)
if minvml(1)==abs(vmltemp(L_col_i))
if vmltemp(L_col_i)<0
vmltemp(L_col_i) = -vmlMark*minvml(2);
else
vmltemp(L_col_i) = vmlMark*minvml(2);
end
else
if vmltemp(L_col_i)<0
vmltemp(L_col_i) = -vmlMark*minvml(1);
else
vmltemp(L_col_i) = vmlMark*minvml(1);
end
end
end
uml(L_col) = NORM_FACTOR*vmltemp;
end
%variable node processing: extrinsic sum of incoming check messages plus the channel value
ColStart = 1;
qn0_1 = ones(1,N);
for L=1:length(HColNum)
umltemp = uml(ColStart:ColStart+HColNum(L)-1);
temp = sum(umltemp);
qn0_1(L) = temp + vl(L);
umltemp = temp - umltemp;
vml(ColStart:ColStart+HColNum(L)-1) = umltemp + vl(L);
ColStart = ColStart+HColNum(L);
end
% hard decision: nonnegative posterior soft value -> bit 0, negative -> bit 1
decoderData(qn0_1>=0) = 0;
decoderData(qn0_1<0) = 1;
if all(mod(H*decoderData',2)==0)   % stop early once every check equation is satisfied
break;
end
end
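A minimal usage sketch with the $5\times 10$ parity-check matrix from Section 2.1.1 (the noise level and NORM_FACTOR are arbitrary test values; for realistic runs use the CCSDS matrices as in the script that follows):

```matlab
% Tiny smoke test of ldpcdecoderminsum with the 5x10 H from Section 2.1.1
H = [1 0 0 0 0 1 1 0 0 1; 0 1 1 0 0 1 0 0 1 0; 1 0 0 0 1 0 0 0 1 1;
     0 0 1 1 1 0 0 1 0 0; 0 1 0 1 0 0 1 1 0 0];
[r_mark,~] = find(H~=0);                 % index tables expected by the decoder
HColNum = sum(H);
HRowNum = cell(1,size(H,1));
for rowH = 1:size(H,1), HRowNum{rowH} = find(r_mark==rowH); end

tx = ones(1,10);                         % BPSK mapping of the all-zero codeword
rx = tx + 0.5*randn(1,10);               % noisy observation (arbitrary noise level)
[iter,cHat] = ldpcdecoderminsum(H,HRowNum,HColNum,rx,10,0.75)
```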
Monte Carlo simulation MATLAB code (the MATLAB code in this post has been modified from cea_wind's version; cea_wind has uploaded his code to GitHub, and readers can download it for study via the hyperlink at the end of the post4):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%.M file for the simulation of BER performance (NMSA decoder)
%Create file date: 2022-3-11 10:00
%H: parity check matrix with M rows and N columns
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%clear everything;
clear all;
close all;
clc;
tic;
%set the simulation parameter
EbN0_dB = 1:0.2:4;
Frames_Num =2000;
MAX_Iter_Num =5;
%% NMSA:'r-'
bitError_NMSA = zeros(1,length(EbN0_dB));
BER_NMSA = zeros(1,length(EbN0_dB));
iterNumTotal_NMSA = zeros(1,length(EbN0_dB));
INFO_LENGTH = 4096;
RATE = 4/5;
SIZE_M = 512;
NORM_FACTOR = 0.75;
H = ccsdscheckmatrix(SIZE_M,RATE);
G = ccsdsgeneratematrix(H,SIZE_M,RATE);
% %[H]=genH(128,256);
[M,N] = size(H);
[r_mark,c_mark] = find(H~=0);
HColNum = sum(H);
HRowNum = cell(1,size(H,1));
for rowH = 1:size(H,1)
HRowNum{rowH} = find(r_mark==rowH);
end
for nEbN0 = 1:length(EbN0_dB)
fprintf('%2.2f',EbN0_dB(nEbN0));
fprintf('dB,');
fprintf('m = %d,codenum = ',MAX_Iter_Num);
SNR_per_bit = 10^(EbN0_dB(nEbN0)/10);
N0 = 1/(SNR_per_bit*RATE);
sigma = sqrt(N0/2);
for nF = 1:Frames_Num
fprintf('%d,',nF);
%encode
message = randi([0 1],1,INFO_LENGTH);
encodeData = mod(message*G,2);
transmitSignal = 1 - 2 * encodeData;
%AWGN Channel
receiveSignal = transmitSignal + sigma*randn(1,length(transmitSignal));
%%decode
[iter_Num_NMSA,recoverData_NMSA] =ldpcdecoderminsum(H,HRowNum,HColNum,receiveSignal,MAX_Iter_Num,NORM_FACTOR);
%bit error, BER and total iteration count accumulation
bitError_NMSA(nEbN0) = bitError_NMSA(nEbN0) + sum(abs(message - recoverData_NMSA(1:length(message))));
iterNumTotal_NMSA(nEbN0) = iterNumTotal_NMSA(nEbN0) + iter_Num_NMSA;
if (nF == Frames_Num)
BER_NMSA(nEbN0)= bitError_NMSA(nEbN0)/(nF*length(message));
break;
end
end %end for 'nF'
fprintf('\n');
end % end for 'nEbN0'
semilogy(EbN0_dB,BER_NMSA,'*r-');
xlabel('Eb/N0(dB)'); ylabel('BER');
grid on;
toc;
The figure below shows the MATLAB simulation results (2000 frames in total; for a more accurate BER curve, readers can increase the number of simulated frames and may also try non-uniformly spaced $E_b/N_0$ points).
BER curve:
3.4 Layered Normalized Min-Sum LDPC Decoding
In the variable-node update of BP and NMSA above, all check nodes must be updated before all variable nodes are updated. This update strategy is called the flooding schedule, and it means that new information produced in the current iteration can only be used in the next iteration.5
The convergence speed of an LDPC decoding algorithm also affects its complexity: the faster the convergence, the fewer iterations are needed, and fewer iterations reduce both the decoding latency and the total amount of computation.6
E. Sharon, J. Zhang and others proposed the LBP7 and SBP8 serial decoding algorithms with static schedules. Unlike the flooding schedule of conventional BP, they update the node messages in a fixed order, so within each iteration a check-node update can already use information updated earlier in the same iteration, which speeds up convergence.
3.4.1 LBP Decoding Algorithm
The LBP algorithm works check node by check node, updating the nodes in the order of the rows of the parity-check matrix. Its flow is as follows (the figures in this subsection are taken from reference 6); reading it together with the MATLAB code in Section 3.4.2 will deepen the understanding of the flooding schedule versus the layered serial schedule.
The message-passing process of LBP decoding is shown in the figure below:
3.4.2 MATLAB Implementation of the Layered Min-Sum Decoding Algorithm
function [n,recoverData] = LBP(receiveSignal, H, MAX_Iter_Num, NORM_FACTOR)
%Layered (row-by-row) normalized min-sum decoder
% receiveSignal: soft channel values; H: parity check matrix
% MAX_Iter_Num: maximum number of iterations; NORM_FACTOR: normalization factor
[M,N]=size(H);
Qv = receiveSignal;          % posterior soft values, updated in place layer by layer
Rcv = zeros(M,N);            % check-to-variable messages, one per edge of H
Qvc = zeros(1,N);            % variable-to-check messages for the current layer
recoverData = zeros(1,N);
for n = 1:MAX_Iter_Num
for i = 1:M
col = find(H(i,:));
for k = 1:length(col)
Qvc(1,col(k)) = Qv(col(k)) - Rcv(i,col(k));
end
alpha = sign(Qvc);
beta = abs(Qvc);
signS = 1;
min1 = Inf;       % smallest |Qvc| on this check row
min2 = Inf;       % second-smallest |Qvc| on this check row
index = 0;        % column index attaining min1
for k = 1:length(col)
signS = alpha(col(k))*signS;
if beta(col(k)) < min1
min2 = min1;
min1 = beta(col(k));
index = col(k);
elseif beta(col(k)) < min2   % also covers ties with min1
min2 = beta(col(k));
end
end
for k = 1:length(col)   % (fixed 03-15: was erroneously "1:length(col(k))")
if col(k) == index
Rcv(i,col(k)) = NORM_FACTOR*signS*alpha(col(k))*min2;
else
Rcv(i,col(k)) = NORM_FACTOR*signS*alpha(col(k))*min1;
end
Qv(col(k)) = Qvc(1,col(k)) + Rcv(i,col(k));
end
end %end for i
for k = 1:N
if Qv(k) < 0
recoverData(k) = 1;
else
recoverData(k) = 0;
end
end
if all(rem(H*recoverData',2) == 0)   % stop early once every parity check is satisfied
break;
end
end %end for n
Monte Carlo simulation MATLAB code (the functions ccsdscheckmatrix and ccsdsgeneratematrix in the code are from cea_wind's upload on GitHub; readers can download them via the hyperlink at the end of the post):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%.M file for the simulation of BER performance (NMSA vs. LBP)
%Create file date: 2022-3-11 10:00
%H: parity check matrix with M rows and N columns
%Qvc: variable-to-check messages (inside LBP)
%Rcv: check-to-variable messages (inside LBP)
%Qv: posterior soft values (inside LBP)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%clear everything;
clear all;
close all;
clc;
tic;
%set the simulation parameter
EbN0_dB = 1:0.2:4;
Frames_Num =2000;
MAX_Iter_Num =5;
%% NMSA:'*r-'
bitError_NMSA = zeros(1,length(EbN0_dB));
BER_NMSA = zeros(1,length(EbN0_dB));
iterNumTotal_NMSA = zeros(1,length(EbN0_dB));
%%LBP:'+b-'
bitError_LBP = zeros(1,length(EbN0_dB));
BER_LBP = zeros(1,length(EbN0_dB));
iterNumTotal_LBP = zeros(1,length(EbN0_dB));
INFO_LENGTH = 4096;
RATE = 4/5;
SIZE_M = 512;
NORM_FACTOR = 0.75;
H = ccsdscheckmatrix(SIZE_M,RATE);
G = ccsdsgeneratematrix(H,SIZE_M,RATE);
% %[H]=genH(128,256);
[M,N] = size(H);
[r_mark,c_mark] = find(H~=0);
HColNum = sum(H);
HRowNum = cell(1,size(H,1));
for rowH = 1:size(H,1)
HRowNum{rowH} = find(r_mark==rowH);
end
for nEbN0 = 1:length(EbN0_dB)
fprintf('%2.2f',EbN0_dB(nEbN0));
fprintf('dB,');
fprintf('m = %d,codenum = ',MAX_Iter_Num);
SNR_per_bit = 10^(EbN0_dB(nEbN0)/10);
N0 = 1/(SNR_per_bit*RATE);
sigma = sqrt(N0/2);
for nF = 1:Frames_Num
fprintf('%d,',nF);
%encode
message = randi([0 1],1,INFO_LENGTH);
encodeData = mod(message*G,2);
transmitSignal = 1 - 2 * encodeData;
%AWGN Channel
receiveSignal = transmitSignal + sigma*randn(1,length(transmitSignal));
%%decode
[iter_Num_NMSA,recoverData_NMSA] =ldpcdecoderminsum(H,HRowNum,HColNum,receiveSignal,MAX_Iter_Num,NORM_FACTOR);
[iter_Num_LBP,recoverData_LBP] =LBP(receiveSignal,H,MAX_Iter_Num,NORM_FACTOR);
%bit error, BER and total iteration count accumulation
bitError_NMSA(nEbN0) = bitError_NMSA(nEbN0) + sum(abs(message - recoverData_NMSA(1:length(message))));
iterNumTotal_NMSA(nEbN0) = iterNumTotal_NMSA(nEbN0) + iter_Num_NMSA;
bitError_LBP(nEbN0) = bitError_LBP(nEbN0) + sum(abs(message - recoverData_LBP(1:length(message))));
iterNumTotal_LBP(nEbN0) = iterNumTotal_LBP(nEbN0) + iter_Num_LBP;
if (nF == Frames_Num)
BER_NMSA(nEbN0)= bitError_NMSA(nEbN0)/(nF*length(message));
BER_LBP(nEbN0)= bitError_LBP(nEbN0)/(nF*length(message));
break;
end
end %end for 'nF'
fprintf('\n');
end % end for 'nEbN0'
semilogy(EbN0_dB,BER_NMSA,'*r-',EbN0_dB,BER_LBP,'+b-');
xlabel('Eb/N0(dB)'); ylabel('BER');
grid on;
toc;
3.4.3 Convergence Speed Comparison of the LBP and NMSA Decoding Algorithms
Simulation parameters:
%set the simulation parameter
EbN0_dB = 1:0.2:4;
Frames_Num =2000;
MAX_Iter_Num =5;
Simulation results:
It can be seen that LBP converges faster than NMSA.
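The simulation script above already accumulates iterNumTotal_NMSA and iterNumTotal_LBP but does not display them; a sketch of how the convergence comparison can be plotted from those counters after the script finishes:

```matlab
% Average number of decoding iterations per frame vs. Eb/N0 (lower = faster convergence)
figure;
plot(EbN0_dB, iterNumTotal_NMSA/Frames_Num, '*r-', ...
     EbN0_dB, iterNumTotal_LBP/Frames_Num,  '+b-');
xlabel('Eb/N0(dB)'); ylabel('average iterations per frame');
legend('NMSA (flooding)','LBP (layered)'); grid on;
```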
4. Scheduling Algorithms for LDPC Decoding
王冰冰. 空间通信中LDPC译码算法研究与译码器设计[D]. 中国科学院大学(中国科学院国家空间科学中心), 2021. DOI: 10.27562/d.cnki.gkyyz.2021.000036.
康婧. 星地高速数传LDPC码编译码算法及高效实现技术研究[D]. 中国科学院大学(中国科学院国家空间科学中心), 2021. DOI: 10.27562/d.cnki.gkyyz.2021.000010.
E. Sharon, S. Litsyn, J. Goldberger. An Efficient Message-Passing Schedule for LDPC Decoding[C]// 23rd IEEE Convention of Electrical and Electronics Engineers in Israel, Tel-Aviv, Israel, 2004: 223-226.
J. Zhang, M. Fossorier. Shuffled Iterative Decoding[J]. IEEE Transactions on Communications, 2005, 53(2): 209-213.