Distributed Source Coding

References:

  • T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed., Wiley, 2006.
  • D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471–480, 1973.
  • Lecture slides of EE4560, Delft University of Technology (TUD).

Introduction

  • We know how to encode a source $X$: a rate $R \ge H(X)$ is sufficient.

  • If two sources $(X, Y)$ are encoded jointly, a rate $R \ge H(X, Y)$ is sufficient.


  • But what if the $X$ and $Y$ sources must be described separately for some user who wishes to reconstruct both $X$ and $Y$?

  • Clearly, by encoding $X$ and $Y$ separately, a rate $R = R_x + R_y \ge H(X) + H(Y)$ is sufficient.

  • However, in a surprising and fundamental paper, Slepian and Wolf showed that a total rate $R = H(X, Y)$ is sufficient even with separate encoding of correlated sources.

  • Intuitively, since $H(X, Y) = H(X) + H(Y|X)$, we can first encode source $X$ at a rate $R_1 \ge H(X)$, and then encode source $Y$, given $X$, at a rate $R_2 \ge H(Y|X)$.

    More specifically (a numerical sketch follows this list):

    • Using $nH(X)$ bits, we can encode $X^n$ efficiently, so that the decoder can reconstruct $X^n$ with arbitrarily low probability of error.
    • Associated with every $x^n$ is a typical "fan" of $y^n$ sequences that are jointly typical with the given $x^n$, about $2^{nH(Y|X)}$ in total.
    • The encoder can send the index of $y^n$ within this typical fan, which requires $nH(Y|X)$ bits.
    • The decoder, also knowing $x^n$, can then construct the typical fan and hence reconstruct $y^n$.
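To make this rate accounting concrete, here is a minimal numerical sketch in Python. It assumes a doubly symmetric binary source, $X \sim \text{Bernoulli}(1/2)$ and $Y = X \oplus Z$ with $Z \sim \text{Bernoulli}(p)$ independent of $X$; this model and the value $p = 0.11$ are illustrative assumptions, not taken from the text above.

```python
import math

def h_b(p: float) -> float:
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed correlation model: X ~ Bernoulli(1/2), Y = X XOR Z, Z ~ Bernoulli(p)
p = 0.11
H_X = 1.0                  # X is uniform
H_Y = 1.0                  # Y = X XOR Z is also uniform
H_Y_given_X = h_b(p)       # H(Y|X) = H(Z), since Z is independent of X
H_XY = H_X + H_Y_given_X   # chain rule: H(X,Y) = H(X) + H(Y|X)

print(f"separate encoding:   R >= H(X) + H(Y) = {H_X + H_Y:.3f} bits per pair")
print(f"Slepian-Wolf total:  R >= H(X,Y)      = {H_XY:.3f} bits per pair")
print(f"rate for Y given X:  R2 >= H(Y|X)     = {H_Y_given_X:.3f} bits per symbol")
```

For $p = 0.11$, $H(Y|X) \approx 0.5$ bits, so the pair can be described jointly with about $1.5$ bits per symbol pair instead of the $2$ bits needed by naive separate encoding.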


The whole process can be represented by the following block diagram:

[Figure: block diagram of the two-stage encoding and decoding process]

  • But what if the $Y$ encoder does not know which sequence $x^n$ is encoded?


Slepian-Wolf Coding

Let $(X_1, Y_1), (X_2, Y_2), \ldots$ be a sequence of jointly distributed random variables, drawn i.i.d. $\sim p(x, y)$.

Definition 1 (Distributed source code):

A $\left(\left(2^{nR_1}, 2^{nR_2}\right), n\right)$ distributed source code for the joint source $(X, Y)$ consists of two encoder maps

$$f_1: \mathcal{X}^n \rightarrow \left\{1, 2, \ldots, 2^{nR_1}\right\}$$

$$f_2: \mathcal{Y}^n \rightarrow \left\{1, 2, \ldots, 2^{nR_2}\right\}$$

and a decoder map

$$g: \left\{1, 2, \ldots, 2^{nR_1}\right\} \times \left\{1, 2, \ldots, 2^{nR_2}\right\} \rightarrow \mathcal{X}^n \times \mathcal{Y}^n$$
Definition 2 (Probability of error):

The probability of error for a distributed source code is defined as
$$P_{\epsilon}^{(n)} = \Pr\left( g\left( f_1(X^n), f_2(Y^n) \right) \neq \left( X^n, Y^n \right) \right)$$
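Definition 1 does not say how $f_2$ can usefully compress $Y^n$ without seeing $X^n$. A classic constructive instance is syndrome (binning) coding, due to Wyner. The sketch below uses the (7,4) Hamming code and assumes a toy correlation model in which $y^7$ differs from $x^7$ in at most one position, chosen uniformly (8 equally likely error patterns, so $H(Y|X) = 3/7$ bits per symbol). Here $f_1$ is the identity map at rate $R_1 = 1 = H(X)$ bit per symbol, and $f_2$ sends only a 3-bit syndrome, meeting $R_2 = 3/7 = H(Y|X)$ exactly. The code and names are illustrative, not from the references.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j (1-indexed)
# is the 3-bit binary representation of j, so a single flip in
# position j produces the syndrome "binary j".
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def f2(y):
    """Y-encoder: transmit only the 3-bit syndrome of the length-7 block."""
    return H @ y % 2

def g(x, s_y):
    """Decoder: recover y from side information x and the syndrome of y."""
    s_e = (H @ x + s_y) % 2               # syndrome of e = x XOR y, by linearity
    e = np.zeros(7, dtype=int)
    j = 4 * s_e[0] + 2 * s_e[1] + s_e[2]  # flipped position (0 means no flip)
    if j > 0:
        e[j - 1] = 1
    return x ^ e

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=7)         # X^7: seven uniform bits
e = np.zeros(7, dtype=int)
j = rng.integers(0, 8)                 # error pattern: uniform over "no flip"
if j > 0:                              # and the seven single flips
    e[j - 1] = 1
y = x ^ e                              # correlated sequence Y^7
assert np.array_equal(g(x, f2(y)), y)  # lossless reconstruction
print("recovered y^7 from x^7 plus only 3 bits about y^7")
```

The decoder exploits linearity: $H(x \oplus y) = Hx \oplus Hy$, so the syndrome of the unknown error pattern is computable from $x^7$ and $f_2(y^7)$ alone, and every pattern of weight at most one has a distinct syndrome. The syndrome partitions $\{0,1\}^7$ into $2^3$ bins (cosets), which mirrors the random-binning idea used in the Slepian-Wolf achievability proof.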
