Composite Hypothesis Testing


Motivation:

  • Neyman-Pearson detectors require perfect knowledge of the PDFs
  • What if this information is unknown?
  • Are there detectors for such scenarios? This situation is common in radar and sonar

Approach:

  • Design the NP detector, assuming the parameters are known
  • Manipulate the test so that it does not depend on the parameters

Example: DC Level in WGN with Unknown Amplitude (A>0)

Consider the DC level in WGN detection problem
$$\begin{array}{ll} \mathcal{H}_{0}: x[n]=w[n] & n=0,1,\ldots,N-1 \\ \mathcal{H}_{1}: x[n]=A+w[n] & n=0,1,\ldots,N-1 \end{array}$$
where the value of $A$ is unknown, although a priori we know that $A>0$, and $w[n]$ is WGN with variance $\sigma^{2}$. Then, the NP test is to decide $\mathcal{H}_{1}$ if
$$\frac{p\left(\mathbf{x};A,\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\mathcal{H}_{0}\right)}=\frac{\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}(x[n]-A)^{2}\right]}{\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}x^{2}[n]\right]}>\gamma$$
Taking the logarithm we have
$$A\sum_{n=0}^{N-1}x[n]>\sigma^{2}\ln\gamma+\frac{NA^{2}}{2}$$
Since it is known that $A>0$, we have
$$\sum_{n=0}^{N-1}x[n]>\frac{\sigma^{2}}{A}\ln\gamma+\frac{NA}{2}$$
Finally, scaling by $1/N$ produces the test
$$T(\mathbf{x})=\frac{1}{N}\sum_{n=0}^{N-1}x[n]>\frac{\sigma^{2}}{NA}\ln\gamma+\frac{A}{2}=\gamma^{\prime}$$
Clearly, the test statistic, which is the sample mean of the data, does not depend on $A$.

Recall from Chapter 3 that $T(\mathbf{x};\mathcal{H}_0)=\bar{x}\sim\mathcal{N}(0,\sigma^2/N)$ and $T(\mathbf{x};\mathcal{H}_1)=\bar{x}\sim\mathcal{N}(A,\sigma^2/N)$. Hence,
$$\begin{aligned} P_{FA}&=\Pr\{T(\mathbf{x})>\gamma^{\prime};\mathcal{H}_0\}=Q\left(\frac{\gamma^{\prime}}{\sqrt{\sigma^{2}/N}}\right)\\ P_{D}&=\Pr\{T(\mathbf{x})>\gamma^{\prime};\mathcal{H}_1\}=Q\left(\frac{\gamma^{\prime}-A}{\sqrt{\sigma^{2}/N}}\right)=Q\left(Q^{-1}(P_{FA})-\sqrt{\frac{NA^{2}}{\sigma^{2}}}\right) \end{aligned}$$
Therefore, $P_{FA}$ (and the threshold) does not depend on $A$, although $P_D$ depends on $A$.
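As a quick sanity check, the closed-form $P_D$ above can be compared against a Monte Carlo simulation of the sample-mean detector. This is only a sketch; the values of $N$, $A$, $\sigma^2$, and $P_{FA}$ are illustrative assumptions.

```python
# Numerical check of the P_FA / P_D expressions above (illustrative values).
import random
from statistics import NormalDist

std_normal = NormalDist()
Q = lambda x: 1.0 - std_normal.cdf(x)         # right-tail probability Q(x)
Qinv = lambda p: std_normal.inv_cdf(1.0 - p)  # its inverse Q^{-1}(p)

N, A, sigma2, P_FA = 20, 0.5, 1.0, 0.01       # assumed parameters
gamma_p = (sigma2 / N) ** 0.5 * Qinv(P_FA)    # threshold gamma'
P_D = Q(Qinv(P_FA) - (N * A**2 / sigma2) ** 0.5)  # closed-form P_D

# Monte Carlo confirmation with the sample-mean detector under H1
rng = random.Random(0)
trials = 50_000
hits = sum(
    sum(A + rng.gauss(0, sigma2**0.5) for _ in range(N)) / N > gamma_p
    for _ in range(trials)
)
print(P_D, hits / trials)  # the two numbers should agree to ~2 decimals
```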

The sample-mean test above leads to the highest $P_D$ (remember that the NP test maximizes $P_D$) for any value of $A$, as long as $A>0$. Such a test is called a Uniformly Most Powerful (UMP) test. Any other test will have poorer performance.


Unfortunately, UMP tests seldom exist.

Example: DC Level in WGN with Unknown Amplitude

Reconsider the example above with $-\infty<A<\infty$. If we assume perfect knowledge of $A$ to design an NP detector, the result is termed a clairvoyant detector.

When $A$ can take on positive and negative values, the clairvoyant detector decides $\mathcal{H}_{1}$ if
$$\begin{aligned} \frac{1}{N}\sum_{n=0}^{N-1}x[n]&=\bar{x}>\gamma_{+}^{\prime} &&\text{for } A>0\\ \frac{1}{N}\sum_{n=0}^{N-1}x[n]&=\bar{x}<\gamma_{-}^{\prime} &&\text{for } A<0 \end{aligned}$$
The detector is clearly unrealizable, since it is composed of two different NP tests and the choice between them depends on the unknown parameter $A$. It nevertheless provides an upper bound on performance, which can be found as follows.
$$\begin{aligned} &P_{FA}=\Pr\left\{\bar{x}>\gamma_{+}^{\prime};\mathcal{H}_{0}\right\}=Q\left(\frac{\gamma_{+}^{\prime}}{\sqrt{\sigma^{2}/N}}\right) &&\text{for } A>0\\ &P_{FA}=\Pr\left\{\bar{x}<\gamma_{-}^{\prime};\mathcal{H}_{0}\right\}=1-Q\left(\frac{\gamma_{-}^{\prime}}{\sqrt{\sigma^{2}/N}}\right)=Q\left(\frac{-\gamma_{-}^{\prime}}{\sqrt{\sigma^{2}/N}}\right) &&\text{for } A<0 \end{aligned}$$

$$\begin{aligned} &P_{D}=\Pr\left\{\bar{x}>\gamma_{+}^{\prime};\mathcal{H}_{1}\right\}=Q\left(\frac{\gamma_{+}^{\prime}-A}{\sqrt{\sigma^{2}/N}}\right)=Q\left(Q^{-1}\left(P_{FA}\right)-\sqrt{\frac{NA^{2}}{\sigma^{2}}}\right) &&\text{for } A>0\\ &P_{D}=1-Q\left(\frac{\gamma_{-}^{\prime}-A}{\sqrt{\sigma^{2}/N}}\right)=Q\left(\frac{-\gamma_{-}^{\prime}+A}{\sqrt{\sigma^{2}/N}}\right)=Q\left(Q^{-1}\left(P_{FA}\right)+\frac{A}{\sqrt{\sigma^{2}/N}}\right) &&\text{for } A<0 \end{aligned}$$


Instead of the clairvoyant detector, let’s look at the realizable detector:
$$T(\mathbf{x})=\left|\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right|>\gamma^{\prime\prime}$$
Then the detection performance is
$$\begin{aligned} P_{FA}&=\Pr\left\{|\bar{x}|>\gamma^{\prime\prime};\mathcal{H}_{0}\right\}=2\Pr\left\{\bar{x}>\gamma^{\prime\prime};\mathcal{H}_{0}\right\}=2Q\left(\frac{\gamma^{\prime\prime}}{\sqrt{\sigma^{2}/N}}\right)\\ \gamma^{\prime\prime}&=\sqrt{\sigma^{2}/N}\,Q^{-1}\left(P_{FA}/2\right)\\ P_{D}&=\Pr\left\{|\bar{x}|>\gamma^{\prime\prime};\mathcal{H}_{1}\right\}=Q\left(Q^{-1}\left(P_{FA}/2\right)-\frac{A}{\sqrt{\sigma^{2}/N}}\right)+Q\left(Q^{-1}\left(P_{FA}/2\right)+\frac{A}{\sqrt{\sigma^{2}/N}}\right) \end{aligned}$$


The performance of this realizable detector is thus not optimal, but it is close to that of the optimal clairvoyant detector.
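The gap can be quantified by evaluating the two $P_D$ expressions side by side. A minimal sketch, with illustrative (assumed) values of $N$, $\sigma^2$, $P_{FA}$, and the true amplitude:

```python
# Compare P_D of the clairvoyant one-sided detector with the realizable
# |sample mean| detector, using the formulas above (illustrative values).
from statistics import NormalDist

std_normal = NormalDist()
Q = lambda x: 1.0 - std_normal.cdf(x)
Qinv = lambda p: std_normal.inv_cdf(1.0 - p)

N, sigma2, P_FA = 20, 1.0, 0.01   # assumed parameters
A = 0.5                           # assumed true amplitude (A > 0)
d = A / (sigma2 / N) ** 0.5       # deflection A / sqrt(sigma^2/N)

P_D_clairvoyant = Q(Qinv(P_FA) - d)
P_D_two_sided = Q(Qinv(P_FA / 2) - d) + Q(Qinv(P_FA / 2) + d)
print(P_D_clairvoyant, P_D_two_sided)
```

For these values the two-sided test gives up only a modest amount of detection probability relative to the clairvoyant bound.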

In fact, the proposed detector is an example of a more general approach to composite hypothesis testing, the generalized likelihood ratio test, which is described in the next section.

Composite Hypothesis Testing Approaches

Bayesian Approach

The Bayesian approach assigns prior PDFs to $\boldsymbol{\theta}_{0}$ and $\boldsymbol{\theta}_{1}$. In doing so it models the unknown parameters as realizations of a vector random variable. If the prior PDFs are denoted by $p(\boldsymbol{\theta}_{0})$ and $p(\boldsymbol{\theta}_{1})$, respectively, the PDFs of the data are
$$\begin{aligned} p\left(\mathbf{x};\mathcal{H}_{0}\right)&=\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_{0};\mathcal{H}_{0}\right)p\left(\boldsymbol{\theta}_{0}\right)d\boldsymbol{\theta}_{0}\\ p\left(\mathbf{x};\mathcal{H}_{1}\right)&=\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_{1};\mathcal{H}_{1}\right)p\left(\boldsymbol{\theta}_{1}\right)d\boldsymbol{\theta}_{1} \end{aligned}$$
where $p\left(\mathbf{x}\mid\boldsymbol{\theta}_{i};\mathcal{H}_{i}\right)$ is the conditional PDF of $\mathbf{x}$ given $\boldsymbol{\theta}_{i}$, assuming $\mathcal{H}_{i}$ is true. The unconditional PDFs $p\left(\mathbf{x};\mathcal{H}_{0}\right)$ and $p\left(\mathbf{x};\mathcal{H}_{1}\right)$ are now completely specified and no longer depend on the unknown parameters. With the Bayesian approach the optimal NP detector decides $\mathcal{H}_{1}$ if
$$\frac{p\left(\mathbf{x};\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\mathcal{H}_{0}\right)}=\frac{\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_{1};\mathcal{H}_{1}\right)p\left(\boldsymbol{\theta}_{1}\right)d\boldsymbol{\theta}_{1}}{\int p\left(\mathbf{x}\mid\boldsymbol{\theta}_{0};\mathcal{H}_{0}\right)p\left(\boldsymbol{\theta}_{0}\right)d\boldsymbol{\theta}_{0}}>\gamma$$

  • A prior PDF must be chosen.
  • The integration can be difficult.
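To make these points concrete, here is a minimal numerical sketch of the marginalization for the DC-level example. The $\mathcal{N}(0,\sigma_A^2)$ prior on $A$ and all parameter values are our own illustrative assumptions, not from the text:

```python
# Numerical sketch of p(x; H1) = ∫ p(x | A) p(A) dA for the DC-level example,
# with an assumed N(0, sigma_A^2) prior on A, evaluated on a grid.
import math, random

N, sigma2, sigma_A2 = 10, 1.0, 4.0        # assumed parameters
rng = random.Random(1)
x = [0.8 + rng.gauss(0, sigma2**0.5) for _ in range(N)]  # H1 data, A = 0.8

def log_lik(A):
    # log p(x | A): white Gaussian log-likelihood for a given amplitude A
    return sum(-0.5 * math.log(2 * math.pi * sigma2)
               - (xn - A) ** 2 / (2 * sigma2) for xn in x)

# p(x; H1): Riemann sum over the prior; p(x; H0) = p(x | A = 0),
# since H0 has no unknown parameters in this example.
dA = 0.01
grid = [-10 + k * dA for k in range(2001)]
p_x_H1 = sum(math.exp(log_lik(A))
             * math.exp(-A**2 / (2 * sigma_A2)) / math.sqrt(2 * math.pi * sigma_A2)
             * dA for A in grid)
p_x_H0 = math.exp(log_lik(0.0))
print(p_x_H1 / p_x_H0)  # Bayesian likelihood ratio for this realization
```

For this Gaussian prior the integral happens to have a closed form, but in general the integration must be done numerically, which is exactly the difficulty noted above.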

Generalized Likelihood Ratio Test (GLRT)

The GLRT replaces the unknown parameters by their maximum likelihood estimates (MLEs). In general, the GLRT decides $\mathcal{H}_{1}$ if
$$L_{G}(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{\boldsymbol{\theta}}_{1},\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\hat{\boldsymbol{\theta}}_{0},\mathcal{H}_{0}\right)}>\gamma$$
where $\hat{\boldsymbol{\theta}}_{1}$ is the MLE of $\boldsymbol{\theta}_{1}$ assuming $\mathcal{H}_{1}$ is true (it maximizes $p\left(\mathbf{x};\boldsymbol{\theta}_{1},\mathcal{H}_{1}\right)$), and $\hat{\boldsymbol{\theta}}_{0}$ is the MLE of $\boldsymbol{\theta}_{0}$ assuming $\mathcal{H}_{0}$ is true (it maximizes $p\left(\mathbf{x};\boldsymbol{\theta}_{0},\mathcal{H}_{0}\right)$).

The GLRT can also be expressed in another form, which is sometimes more convenient. Since $\hat{\boldsymbol{\theta}}_{i}$ is the MLE under $\mathcal{H}_{i}$, it maximizes $p\left(\mathbf{x};\boldsymbol{\theta}_{i},\mathcal{H}_{i}\right)$, or
$$p\left(\mathbf{x};\hat{\boldsymbol{\theta}}_{i},\mathcal{H}_{i}\right)=\max_{\boldsymbol{\theta}_{i}}p\left(\mathbf{x};\boldsymbol{\theta}_{i},\mathcal{H}_{i}\right)$$
Hence, $L_G(\mathbf{x})$ can be written as
$$L_G(\mathbf{x})=\frac{\max_{\boldsymbol{\theta}_{1}}p\left(\mathbf{x};\boldsymbol{\theta}_{1},\mathcal{H}_{1}\right)}{\max_{\boldsymbol{\theta}_{0}}p\left(\mathbf{x};\boldsymbol{\theta}_{0},\mathcal{H}_{0}\right)}$$
The approach also provides information about the unknown parameters, since the first step in determining $L_{G}(\mathbf{x})$ is to find the MLEs. We now continue the DC level in WGN example.

Example: DC Level in WGN with Unknown Amplitude - GLRT

In this case we have $\boldsymbol{\theta}_{1}=A$, and there are no unknown parameters under $\mathcal{H}_{0}$. The hypothesis test becomes
$$\begin{array}{l} \mathcal{H}_{0}: A=0 \\ \mathcal{H}_{1}: A\neq 0 \end{array}$$
Thus, the GLRT decides $\mathcal{H}_{1}$ if
$$L_{G}(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{A},\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\mathcal{H}_{0}\right)}>\gamma$$
The MLE of $A$ is found by maximizing
$$p\left(\mathbf{x};A,\mathcal{H}_{1}\right)=\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}(x[n]-A)^{2}\right]$$
By differentiating the likelihood (or log-likelihood) function and setting the derivative to zero, we obtain the MLE $\hat{A}=\bar{x}$. Thus,
$$L_{G}(\mathbf{x})=\frac{\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}(x[n]-\bar{x})^{2}\right]}{\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}x^{2}[n]\right]}$$
Taking logarithms we have
$$\begin{aligned} \ln L_{G}(\mathbf{x}) &=-\frac{1}{2\sigma^{2}}\left(\sum_{n=0}^{N-1}x^{2}[n]-2\bar{x}\sum_{n=0}^{N-1}x[n]+N\bar{x}^{2}-\sum_{n=0}^{N-1}x^{2}[n]\right)\\ &=-\frac{1}{2\sigma^{2}}\left(-2N\bar{x}^{2}+N\bar{x}^{2}\right)\\ &=\frac{N\bar{x}^{2}}{2\sigma^{2}} \end{aligned}$$
or we decide $\mathcal{H}_{1}$ if
$$|\bar{x}|>\gamma^{\prime}$$
This detector is identical to the realizable detector we examined before, and its performance has already been given.
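As a quick Monte Carlo sanity check (with assumed values), we can verify that the threshold $\gamma''=\sqrt{\sigma^2/N}\,Q^{-1}(P_{FA}/2)$ achieves the target false-alarm rate for this GLRT:

```python
# Monte Carlo check of the false-alarm rate of the |sample mean| GLRT.
import random
from statistics import NormalDist

Qinv = lambda p: NormalDist().inv_cdf(1.0 - p)  # Q^{-1}(p)

N, sigma2, P_FA_target = 20, 1.0, 0.1           # assumed parameters
gamma_pp = (sigma2 / N) ** 0.5 * Qinv(P_FA_target / 2)

rng = random.Random(2)
trials = 50_000
false_alarms = sum(
    abs(sum(rng.gauss(0, sigma2**0.5) for _ in range(N)) / N) > gamma_pp
    for _ in range(trials)   # H0 data only: WGN, no DC level present
)
print(false_alarms / trials)  # should be close to the target 0.1
```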

Example: DC Level in WGN with Unknown Amplitude and Variance - GLRT

Consider the detection problem
$$\begin{array}{ll} \mathcal{H}_{0}: x[n]=w[n] & n=0,1,\ldots,N-1 \\ \mathcal{H}_{1}: x[n]=A+w[n] & n=0,1,\ldots,N-1 \end{array}$$
where $A$ is unknown with $-\infty<A<\infty$ and $w[n]$ is WGN with unknown variance $\sigma^{2}$. A UMP test does not exist because the equivalent parameter test is
$$\begin{array}{l} \mathcal{H}_{0}: A=0,\ \sigma^{2}>0 \\ \mathcal{H}_{1}: A\neq 0,\ \sigma^{2}>0 \end{array}$$
which is two-sided. The GLRT decides $\mathcal{H}_{1}$ if
$$L_{G}(\mathbf{x})=\frac{p\left(\mathbf{x};\hat{A},\hat{\sigma}_{1}^{2},\mathcal{H}_{1}\right)}{p\left(\mathbf{x};\hat{\sigma}_{0}^{2},\mathcal{H}_{0}\right)}>\gamma$$
where $[\hat{A}~~\hat{\sigma}_1^2]^T$ is the MLE of the vector parameter $\boldsymbol{\theta}_1=[A~~\sigma^2]^T$ under $\mathcal{H}_1$, and $\hat{\sigma}_0^2$ is the MLE of the parameter $\boldsymbol{\theta}_0=\sigma^2$ under $\mathcal{H}_0$. Note that we need to estimate the variance under both hypotheses.

Since
$$p\left(\mathbf{x};A,\sigma^{2},\mathcal{H}_{1}\right)=\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}(x[n]-A)^{2}\right]$$

$$p\left(\mathbf{x};\sigma^{2},\mathcal{H}_{0}\right)=\frac{1}{\left(2\pi\sigma^{2}\right)^{\frac{N}{2}}}\exp\left[-\frac{1}{2\sigma^{2}}\sum_{n=0}^{N-1}x^{2}[n]\right]$$
As before, we have
$$\hat{A}=\bar{x},\qquad \hat{\sigma}_{1}^{2}=\frac{1}{N}\sum_{n=0}^{N-1}(x[n]-\bar{x})^{2},\qquad \hat{\sigma}_{0}^{2}=\frac{1}{N}\sum_{n=0}^{N-1}x^{2}[n]$$
Thus the GLRT becomes
$$L_{G}(\mathbf{x})=\left(\frac{\hat{\sigma}_{0}^{2}}{\hat{\sigma}_{1}^{2}}\right)^{N/2}$$
In essence, the GLRT decides $\mathcal{H}_1$ if the fit of the signal $\hat{A}=\bar{x}$ to the data produces a much smaller error, as measured by $\hat{\sigma}_1^2=(1/N)\sum_{n=0}^{N-1}(x[n]-\hat{A})^2$, than the fit of no signal, as measured by $\hat{\sigma}_0^2=(1/N)\sum_{n=0}^{N-1}x^2[n]$. A slightly more intuitive form can be found as follows. Since
$$\hat{\sigma}_{1}^{2}=\frac{1}{N}\sum_{n=0}^{N-1}x^{2}[n]-\bar{x}^{2}=\hat{\sigma}_{0}^{2}-\bar{x}^{2}$$
we have
$$2\ln L_G(\mathbf{x})=N\ln\left(\frac{\hat{\sigma}_{1}^{2}+\bar{x}^{2}}{\hat{\sigma}_{1}^{2}}\right)=N\ln\left(1+\frac{\bar{x}^{2}}{\hat{\sigma}_{1}^{2}}\right)$$

Since $\ln(1+x)$ is monotonically increasing in $x$, an equivalent test statistic is
$$T(\mathbf{x})=\frac{\hat{A}^{2}}{\hat{\sigma}_{1}^{2}}=\frac{\bar{x}^{2}}{\hat{\sigma}_{1}^{2}}$$
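The GLRT quantities in this example are straightforward to compute from data. A minimal sketch, with assumed data-generation parameters; note that the identity $\hat{\sigma}_0^2=\hat{\sigma}_1^2+\bar{x}^2$ holds exactly:

```python
# Computing the unknown-variance GLRT quantities from data (illustrative).
import math, random

rng = random.Random(3)
N, A, sigma2 = 50, 1.0, 2.0               # assumed true parameters (H1)
x = [A + rng.gauss(0, sigma2**0.5) for _ in range(N)]

A_hat = sum(x) / N                                    # MLE of A under H1
sigma1_hat2 = sum((xn - A_hat) ** 2 for xn in x) / N  # variance MLE under H1
sigma0_hat2 = sum(xn**2 for xn in x) / N              # variance MLE under H0

two_ln_LG = N * math.log(sigma0_hat2 / sigma1_hat2)   # 2 ln L_G(x)
T = A_hat**2 / sigma1_hat2                            # equivalent statistic
print(T, two_ln_LG)
```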
