Statistical Signal Processing (UESTC)

Teacher's email: whxiong@uestc.edu.cn

Instructor: Wenhui Xiong (熊文汇)

Textbook:

Since the course is taught in English, the terminology and related descriptions below are also given in English.

In this course, we discuss two problems: detection and estimation.

Detection: we have several hypotheses and need to decide which one is true. Sometimes this reduces to a yes/no question: is a signal present or not?

Estimation: we ask what the value of a parameter is. The signal to be estimated may be deterministic or random.

Vectorization

Sometimes we find that a signal can be decomposed as

x(t)=\sum_{i} x_i \phi_i(t)

where x_i is a weight coefficient and \phi_i(t) is a basis function.
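
A minimal numerical sketch of such a decomposition (the sinusoidal basis and the weights here are hypothetical choices for illustration):

```python
import numpy as np

# Decompose x over an orthonormal basis: x = sum_i x_i * phi_i.
N = 256
t = np.arange(N)

# Hypothetical orthonormal basis functions phi_i[n] (normalized sinusoids)
basis = np.stack([np.sqrt(2.0 / N) * np.sin(2 * np.pi * (i + 1) * t / N)
                  for i in range(4)])

# A signal built from the basis, with weights 3.0 and -1.5
x_true = 3.0 * basis[0] - 1.5 * basis[2]

# The weights x_i are inner products of the signal with the basis functions
weights = basis @ x_true
print(weights)                      # ~ [3.0, 0.0, -1.5, 0.0]

x_rec = weights @ basis             # reconstruction from the weights
print(np.allclose(x_rec, x_true))   # True
```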

Random signal

For any given time we can calculate its mean (expectation).

And we can compute its auto-correlation function (ACF).

Power spectral density (PSD)

For additive white Gaussian noise (AWGN), the PSD is a constant at every frequency.

We also know that the inverse Fourier transform of the PSD is the ACF (the Wiener–Khinchin theorem). So for AWGN the ACF is an impulse function: r_{xx}[k] = 0 for k \neq 0.
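
As a compact summary (standard definitions for a discrete-time wide-sense stationary process; the notation below is mine, not from the lecture):

\mu_x = E\{x[n]\}

r_{xx}[k] = E\{x[n]\,x[n+k]\}

P_{xx}(f) = \sum_{k=-\infty}^{\infty} r_{xx}[k]\, e^{-j 2\pi f k}

For AWGN with variance \sigma^2: P_{xx}(f) = \sigma^2 and r_{xx}[k] = \sigma^2 \delta[k].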

Detection

We introduce some notation here.

p(x) is a probability density function.

Prior knowledge:

H_0: nothing but noise

H_1: signal plus noise

P(H_i; H_j) denotes the probability of deciding H_i when H_j is true.

P(H_1; H_0) = P_{FA} (false-alarm probability)

P(H_0; H_1) = 1 - P_D, where P_D is the detection probability.

 

We assume that exactly one of H_0 and H_1 is true. When H_0 or H_1 simply happens for sure (no prior probabilities are assigned), we can use the NP (Neyman–Pearson) theorem.

Decide H_1 if L(x) = p(x; H_1) / p(x; H_0) > \gamma. The likelihoods p(x; H_1) and p(x; H_0) are known from our observation model; for independent samples, each is a product of per-sample probability density functions (PDFs).

The problem is how to choose \gamma.

P_{FA} = \int_{x:L(x)>\gamma} p(x; H_0)\,dx

We fix P_{FA} at a required level and solve for \gamma; the detector's mission is to draw the decision boundary determined by \gamma.

And we also have 

P_{D} = \int_{x:L(x)>\gamma} p(x; H_1)\,dx

Note that we cannot minimize P_{FA} and maximize P_{D} simultaneously; there is a trade-off between them.

For example, under H_1 the observation is a DC level in white noise, while under H_0 it is white noise only.
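
A minimal numerical sketch of this DC-in-WGN example for a single sample (the values of A, \sigma, and P_{FA} below are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# NP detector for H_1: x[0] = A + w[0] vs H_0: x[0] = w[0], w[0] ~ N(0, sigma^2).
A, sigma, Pfa = 1.0, 1.0, 0.1        # hypothetical values for illustration

# Q(x) = norm.sf(x); Q^{-1}(p) = norm.isf(p)
gamma = sigma * norm.isf(Pfa)        # threshold: Pr{x[0] > gamma; H_0} = Pfa
Pd = norm.sf((gamma - A) / sigma)    # Pr{x[0] > gamma; H_1}
print(f"gamma = {gamma:.3f}, P_D = {Pd:.3f}")
```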

 

The NP theorem above treats H_0 or H_1 as simply being true. Now we consider the case where H_0 and H_1 occur with known prior probabilities.

Bayesian approach

P_e = P(H_0)\int_{x\in R_1} p(x|H_0)\,dx + P(H_1)\int_{x \in R_0} p(x|H_1)\,dx

where R_i is the region in which we decide H_i.

P(H_0) and P(H_1) are fixed; they are the prior probabilities.

p(x|H_0) and p(x|H_1) are fixed; they come from the observation model.

We need to choose the decision regions to minimize P_e.

The method above minimizes P_e (the probability of error). Alternatively, we can minimize the Bayes risk.

After the calculus, we decide H_1 if the likelihood ratio exceeds a cost-weighted threshold, as sketched below.
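
A sketch of the standard result, where C_{ij} denotes the cost of deciding H_i when H_j is true (this cost notation is assumed here):

R = \sum_{i=0}^{1}\sum_{j=0}^{1} C_{ij}\, P(H_i|H_j)\, P(H_j)

Decide H_1 if \frac{p(x|H_1)}{p(x|H_0)} > \frac{(C_{10}-C_{00})\,P(H_0)}{(C_{01}-C_{11})\,P(H_1)}

With C_{00} = C_{11} = 0 and C_{01} = C_{10} = 1, the risk R reduces to P_e and the threshold reduces to P(H_0)/P(H_1).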

Now we assume that we have multiple hypotheses H_1, H_2, H_3, \dots, H_n. The MAP rule decides the H_k that maximizes p(x|H_k)P(H_k).

Or, according to maximum likelihood (ML), we decide the H_k that maximizes p(x|H_k), which is the MAP rule with equal priors.
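
A minimal sketch of the ML rule for Gaussian data (the candidate means, \sigma, and the data below are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# ML decision among hypotheses H_k: x[n] = mu_k + w[n], w[n] ~ N(0, sigma^2).
mus = np.array([0.0, 1.0, 2.0])      # hypothetical candidate means
sigma = 1.0
rng = np.random.default_rng(0)
x = mus[1] + sigma * rng.standard_normal(50)   # data generated under H_1

# Log-likelihood of each hypothesis; ML picks the argmax
loglik = [norm.logpdf(x, loc=m, scale=sigma).sum() for m in mus]
print("decide H_%d" % int(np.argmax(loglik)))  # typically H_1
```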

Matched filter

We also introduce some notation here.

H_0: x[n] = w[n]

H_1: x[n] = s[n] + w[n]

s[n] is known.

w[n] is AWGN (additive white Gaussian noise).

Based on the NP theorem mentioned above, we obtain the matched-filter detector.

(The derivation is to be added next week.)

And we use P_{FA} and the SNR to measure its performance.
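
A minimal sketch of the correlator form of the matched filter (the signal s[n], \sigma, and the threshold choice \gamma = 1 below are hypothetical):

```python
import numpy as np

# Matched filter (replica-correlator): T(x) = sum_n x[n] s[n].
rng = np.random.default_rng(1)
N, sigma = 64, 1.0
s = np.cos(2 * np.pi * 0.1 * np.arange(N))    # hypothetical known s[n]

x = s + sigma * rng.standard_normal(N)        # an observation under H_1
T = np.dot(x, s)                              # correlate data with the replica

# With the hypothetical choice gamma = 1 (ln gamma = 0), the NP threshold
# gamma' = sigma^2 ln(gamma) + (1/2) sum s^2[n] reduces to half the energy.
gamma_p = 0.5 * np.dot(s, s)
print("decide H_1" if T > gamma_p else "decide H_0")
```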

Homework

Before we design a detector, it is important to consider one question.

Is s[n] deterministic or random? Deterministic means that, given n, s[n] is a fixed value such as 1, 22, or 30; random means that, given n, s[n] is a random variable.

These two situations give different likelihood functions. If s[n] is deterministic, it only shifts the mean of w[n]. If s[n] is random, it has a covariance matrix that also affects the covariance of the observation: assuming w[n] \sim N(\mu_0, \sigma_0^2 I) and s[n] has mean \mu_1 and covariance matrix C_s, then w[n] + s[n] \sim N(\mu_0 + \mu_1,\ \sigma_0^2 I + C_s).

As mentioned above, when the signal is deterministic we can build a matched filter,

while when s[n] is a stochastic process we use the estimator-correlator sketched below.
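
For the random-signal case, a sketch of the standard result (the estimator-correlator, assuming s is zero-mean Gaussian with covariance C_s in WGN of variance \sigma^2):

Decide H_1 if T(x) = x^T \hat{s} > \gamma', where \hat{s} = C_s (C_s + \sigma^2 I)^{-1} x

So T(x) is a quadratic form in x, unlike the linear correlator of the deterministic case.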

 

Likelihood ratio test (designing an NP detector)

There are two different situations: a one-point observation or multi-point observations. If the question does not specify, we may solve it under either observation model.

One-point observation:

The likelihood becomes a single PDF, and often we have P_{FA} = \Pr\{x[0] > \gamma; H_0\} and P_D = \Pr\{x[0] > \gamma; H_1\}.

Multi-point observations:

The likelihood becomes the product of the per-sample PDFs, and often we have P_{FA} = \Pr\{T(x) > \gamma; H_0\} and P_D = \Pr\{T(x) > \gamma; H_1\}.

REMEMBER: when integrating to compute P_{FA} or P_D, the samples x_i are treated as random variables distributed under H_0 (when computing P_{FA}) or under H_1 (when computing P_D).
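
As a worked instance (detecting a hypothetical DC level A > 0 in WGN, using the sample mean as T(x)):

T(x) = \frac{1}{N}\sum_{n=0}^{N-1} x[n] \sim N(0, \sigma^2/N) \text{ under } H_0, \qquad T(x) \sim N(A, \sigma^2/N) \text{ under } H_1

P_{FA} = Q\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right), \qquad P_D = Q\left(\frac{\gamma' - A}{\sqrt{\sigma^2/N}}\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}\right)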

While solving the problem of detecting a known signal (for example, a DC level) in AWGN, we have an important set of equations.

Assume that we have two hypotheses,

H_0: x[n] = w[n]

H_1: x[n] = s[n] + w[n]

where w[n] is WGN with variance \sigma^2 and s[n] is a known deterministic signal.

We decide H_1 if T(x) > \gamma',

where {\color{Red} T(x) = \sum_{n=0}^{N-1} x[n]s[n]} (T(x) is important) and \gamma' = \sigma^2 \ln\gamma + \frac{1}{2}\sum_{n=0}^{N-1} s^2[n]. Pay attention to T(x).

P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{\varepsilon}{\sigma^2}}\right), where \varepsilon = \sum_{n=0}^{N-1} s^2[n] is the signal energy. Pay attention to P_D.
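
A Monte Carlo sketch checking this P_D formula (all parameter values below are hypothetical):

```python
import numpy as np
from scipy.stats import norm

# Check P_D = Q(Q^{-1}(P_FA) - sqrt(eps/sigma^2)) for T(x) = sum x[n]s[n].
rng = np.random.default_rng(2)
N, sigma, Pfa, trials = 32, 1.0, 0.01, 100_000
s = 0.5 * np.ones(N)                        # hypothetical known signal (DC)
eps = np.dot(s, s)                          # signal energy

# Under H_0, T ~ N(0, sigma^2 * eps), so gamma' comes from the Q function.
gamma_p = np.sqrt(sigma**2 * eps) * norm.isf(Pfa)

W = sigma * rng.standard_normal((trials, N))
print("empirical P_FA :", np.mean(W @ s > gamma_p))
print("empirical P_D  :", np.mean((W + s) @ s > gamma_p))
print("theoretical P_D:", norm.sf(norm.isf(Pfa) - np.sqrt(eps) / sigma))
```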

 

 

When the observation is Gaussian distributed, x \sim N(\mu, C), the likelihood becomes

p(x) = \frac{1}{(2\pi)^{N/2}\det(C)^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T C^{-1} (x-\mu)\right)

where N is the number of observations, which is also the dimension of \mu; under H_1 the mean \mu is the signal s here.

C is a symmetric positive-definite matrix, and since a scalar equals its own transpose, x^T C^{-1} s = (x^T C^{-1} s)^T = s^T C^{-1} x.
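
For completeness, a sketch of the resulting detector for a known s in Gaussian noise with covariance C (the generalized matched filter; standard form):

Decide H_1 if T(x) = x^T C^{-1} s > \gamma'

P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{s^T C^{-1} s}\right)

For C = \sigma^2 I this reduces to the replica-correlator above.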

Design a MAP detector

Whenever a problem asks to minimize the probability of error, or asks for the optimal detector in that sense, we should design a MAP detector: the MAP detector's goal is to minimize the probability of error. An NP detector instead maximizes the probability of detection for a fixed P_{FA}.

An example with equal prior probabilities showing how to compute P_e is worked below.

It is almost the same as an NP detector, except that we decide H_1 if the likelihood ratio L(x) > P(H_0)/P(H_1) = \gamma.

P(H_0) and P(H_1) are prior probabilities, which are known.
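
A worked sketch of such an example (a hypothetical DC level A in WGN, equal priors P(H_0) = P(H_1) = 1/2, so \gamma = 1):

The test reduces to deciding H_1 if \bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n] > \frac{A}{2}

P_e = \frac{1}{2}\Pr\{\bar{x} > A/2 \mid H_0\} + \frac{1}{2}\Pr\{\bar{x} < A/2 \mid H_1\} = Q\left(\sqrt{\frac{NA^2}{4\sigma^2}}\right)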

If a Gaussian distribution's \sigma is large, the density curve becomes shorter and fatter.

Prewhitener

C^{-1} = D^TD

where C is a covariance matrix and D is the prewhitener. D is an upper-triangular matrix whose diagonal elements are all positive.
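
A minimal sketch of building D from a Cholesky factorization (the covariance matrix C below is hypothetical):

```python
import numpy as np

# Prewhitener: factor C^{-1} = D^T D so that D x has identity covariance.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])                    # hypothetical SPD covariance

G = np.linalg.cholesky(np.linalg.inv(C))      # C^{-1} = G G^T, G lower triangular
D = G.T                                       # D upper triangular, C^{-1} = D^T D

print(np.allclose(D.T @ D, np.linalg.inv(C)))  # True
print(np.allclose(D @ C @ D.T, np.eye(2)))     # whitened covariance = I
```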

Boundary

To draw a decision boundary, we design a detector, determine the region where we decide H_i, and then draw the boundary in a plot whose axes are the observations.

CRLB (Cramér–Rao lower bound)

If we want to use the CRLB, we should first compute E\left[\frac{\partial \ln p(x;\theta)}{\partial \theta}\right] and check whether it is zero (the regularity condition). If it is zero, we can use the CRLB.

The CRLB is a lower bound on the variance of any unbiased estimator:

\mathrm{var}(\hat{\theta}) \ge \frac{1}{I(\theta)}, \qquad I(\theta) = -E\left[\frac{\partial^2 \ln p(x;\theta)}{\partial \theta^2}\right]

If the score factors as \frac{\partial \ln p(x;\theta)}{\partial \theta} = I(\theta)\,(g(x) - \theta), then g(x) is the MVU estimator and attains the bound, with variance 1/I(\theta). In general, the variance of the MVU estimator can be higher than the CRLB.
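
A worked sketch for the standard DC-in-WGN example, x[n] = A + w[n] with w[n] \sim N(0, \sigma^2):

\frac{\partial \ln p(x;A)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n] - A) = \frac{N}{\sigma^2}(\bar{x} - A)

This matches the factorization I(\theta)(g(x) - \theta) with I(A) = N/\sigma^2 and g(x) = \bar{x}, so the sample mean is the MVU estimator and attains the CRLB: \mathrm{var}(\bar{x}) = \sigma^2/N.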

Linear model

For x = H\theta + w with w \sim N(0, \sigma^2 I), we can get the MVU estimator

\hat{\theta} = (H^T H)^{-1} H^T x, \qquad C_{\hat{\theta}} = \sigma^2 (H^T H)^{-1}
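
A minimal sketch of the linear-model estimator, fitting a hypothetical line x[n] = a + bn + w[n]:

```python
import numpy as np

# Linear model x = H theta + w; estimate theta_hat = (H^T H)^{-1} H^T x.
rng = np.random.default_rng(3)
N, a, b, sigma = 100, 2.0, 0.5, 1.0           # hypothetical true parameters

n = np.arange(N)
H = np.column_stack([np.ones(N), n])          # N x p observation matrix (p = 2)
x = H @ np.array([a, b]) + sigma * rng.standard_normal(N)

theta_hat = np.linalg.solve(H.T @ H, H.T @ x)  # (H^T H)^{-1} H^T x
print(theta_hat)                               # ~ [2.0, 0.5]
```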

Sufficient statistic

When we apply the RBLS (Rao–Blackwell–Lehmann–Scheffé) theorem, we want to find a sufficient statistic T(x). By the Neyman–Fisher factorization p(x;\theta) = g(T(x), \theta)\,h(x), a variable substitution can make it clear which term can serve as the sufficient statistic.

BLUE

BLUE means best linear unbiased estimator, where "best" means lowest variance among linear unbiased estimators.

First, we constrain the estimator to be linear in the data, \hat{\theta} = \sum_{n} a_n x[n], and require it to be unbiased;

then we choose the weights a_n to minimize the variance, as sketched below.
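
A sketch of the resulting scalar form (the Gauss–Markov result for x = \theta s + w with known s and noise covariance C; standard result, not derived in these notes):

\hat{\theta} = \frac{s^T C^{-1} x}{s^T C^{-1} s}, \qquad \mathrm{var}(\hat{\theta}) = \frac{1}{s^T C^{-1} s}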

 

MLE

Its aim is to maximize the likelihood: \hat{\theta} = \arg\max_{\theta} p(x;\theta).
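
A short worked instance (a hypothetical problem: estimating the variance of zero-mean WGN):

\ln p(x;\sigma^2) = -\frac{N}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1} x^2[n]

Setting \partial \ln p / \partial \sigma^2 = 0 gives \hat{\sigma}^2 = \frac{1}{N}\sum_{n=0}^{N-1} x^2[n].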

LSE

The signal is modeled as s = H\theta,

where H is an N \times p matrix: we have N samples and p unknowns. The LSE minimizes J(\theta) = (x - H\theta)^T (x - H\theta), which gives \hat{\theta} = (H^T H)^{-1} H^T x.

Integration knowledge

If X \sim N(\mu, \sigma^2) and a, b are real numbers, then

aX + b \sim N(a\mu + b, a^2\sigma^2)

To calculate \Pr\{X > \gamma\}, we standardize X to N(0,1): with Z = (X - \mu)/\sigma, we have \Pr\{X > \gamma\} = Q\left(\frac{\gamma - \mu}{\sigma}\right).

Sometimes we need to compute a chi-square tail probability (T(x) = \sum_{i=0}^{N-1} X_i^2 where X_i \sim N(0,1)), and for N = 1 we have a formula:

\Pr\{T(x) > \gamma'\} = 2Q(\sqrt{\gamma'}), where T(x) \sim \chi_1^2
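
A quick numerical check of this identity (the threshold value below is arbitrary):

```python
from math import sqrt
from scipy.stats import norm, chi2

# Check Pr{T > gamma'} = 2 Q(sqrt(gamma')) for T ~ chi^2 with 1 degree of freedom.
gamma_p = 2.5                      # hypothetical threshold
print(2 * norm.sf(sqrt(gamma_p)))  # 2 Q(sqrt(gamma'))
print(chi2.sf(gamma_p, df=1))      # chi-square tail probability, equal
```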

 

If Z = X + Y where X and Y are independent, the PDF of Z is the convolution

f_Z(z) = \int_{-\infty}^{+\infty} f_X(z - y)\, f_Y(y)\, dy

 
