Channel Capacity 2: Channel Coding Theorem

This post introduces the basic concepts of the channel coding theorem, covering preliminaries, jointly typical sequences, the channel coding theorem, channels with feedback, and source-channel separation. It discusses how decoding can be performed via jointly typical sequences, and shows that the information channel capacity equals the operational capacity, so that an arbitrarily small probability of error can be achieved. The capacity of channels with feedback and the source-channel separation theorem are also discussed.

Reference:

Elements of Information Theory, 2nd Edition

Slides of EE4560, TUD

Preliminaries

We analyze a communication system as shown in Figure 7.8.

[Figure 7.8: A communication system]

  • $W$: message drawn from the index set $\{1,2,\cdots,M\}$
  • $X^n=f(W)$: transmitted sequence, consisting of $n$ symbols from the channel input alphabet $\mathcal X$
  • $Y^n\sim p(y^n|x^n)$: received sequence, consisting of $n$ symbols from the channel output alphabet $\mathcal Y$ (a concrete channel example follows after this list)
  • $\hat W=g(Y^n)$: estimated message from the index set $\{1,2,\cdots,M\}$; a decoding error occurs in case $\hat W \neq W$
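As a concrete instance of this channel model (an illustrative example added here for concreteness, not part of the referenced notes), consider the binary symmetric channel (BSC) with crossover probability $p$: each transmitted symbol is flipped independently with probability $p$, so
$$\mathcal X=\mathcal Y=\{0,1\},\qquad p(y|x)=\begin{cases}1-p, & y=x\\ p, & y\ne x.\end{cases}$$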

Definition 1 (Code):

An $(M,n)$ code for the channel $(\mathcal X,p(y|x),\mathcal Y)$ consists of

  • An index set $\{1,2,\cdots,M\}$
  • An encoding function $f:\{1,2,\cdots,M\}\to \mathcal X^n$, yielding codewords $x^n(1),\cdots,x^n(M)$. The set of codewords is called the codebook
  • A decoding function $g:\mathcal Y^n\to \{1,2,\cdots,M\}$

Example [slides 5-7]
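Since the slide example is not reproduced here, a minimal illustration (added for concreteness, not taken from the slides, and assuming the BSC above): the $(2,3)$ binary repetition code with index set $\{1,2\}$, encoder
$$f(1)=x^3(1)=000,\qquad f(2)=x^3(2)=111,$$
and majority decoder
$$g(y^3)=\begin{cases}1, & \text{if } y^3 \text{ contains more 0s than 1s}\\ 2, & \text{otherwise.}\end{cases}$$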

Definition 2 (Conditional Probability of Error):

Conditional probability of error given that index $i$ was sent:
$$\lambda_i=\Pr\big(g(Y^n)\ne i \mid X^n=x^n(i)\big)=\sum_{y^n}p(y^n|x^n(i))\,I(g(y^n)\ne i)\tag{1}$$
where $I(\cdot)$ is the indicator function ($x^n$ and $y^n$ denote the possible realizations of $X^n$ and $Y^n$).
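To make (1) concrete, a worked example (assuming the $(2,3)$ repetition code over the BSC introduced above): the majority decoder fails on $x^3(1)=000$ exactly when at least two of the three symbols are flipped, so
$$\lambda_1=\sum_{y^3}p(y^3|000)\,I(g(y^3)\ne 1)=3p^2(1-p)+p^3,$$
and by symmetry $\lambda_2=\lambda_1$.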

Maximal probability of error:
$$\lambda^{(n)}=\max_{i\in \{1,2,\cdots,M\}} \lambda_i\tag{2}$$
Average probability of error:
$$P_e^{(n)}=\frac{1}{M}\sum_{i=1}^{M}\lambda_i\tag{3}$$
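As a sanity check on these definitions, the following Python sketch (added for illustration; it assumes the $(2,3)$ repetition code over a BSC with crossover probability $p=0.1$, none of which is fixed by the notes) estimates $\lambda_i$, $\lambda^{(n)}$, and $P_e^{(n)}$ by Monte Carlo simulation.

```python
import random

def simulate_repetition_code(p=0.1, trials=100_000):
    """Monte Carlo estimate of the error probabilities of a (2,3) repetition
    code over a BSC with crossover probability p (illustrative sketch)."""
    codebook = {1: [0, 0, 0], 2: [1, 1, 1]}      # encoding function f

    def decode(y):                               # majority decoder g
        return 1 if sum(y) < 2 else 2

    lam = {}                                     # conditional error probabilities lambda_i
    for i, xn in codebook.items():
        errors = 0
        for _ in range(trials):
            # BSC: flip each transmitted symbol independently with probability p
            yn = [x ^ (random.random() < p) for x in xn]
            if decode(yn) != i:
                errors += 1
        lam[i] = errors / trials

    max_err = max(lam.values())                  # lambda^(n), Eq. (2)
    avg_err = sum(lam.values()) / len(lam)       # P_e^(n), Eq. (3)
    return lam, max_err, avg_err

if __name__ == "__main__":
    print(simulate_repetition_code())
    # Both estimates should be close to 3*p^2*(1-p) + p^3 = 0.028 for p = 0.1.
```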
