The proof that the chi-square statistic follows a chi-square distribution

The chi-square test (the principle used in C4.5's CVP pruning),
also called the chi-square statistic,
also called the chi-square goodness-of-fit test.

Here is the contingency table (the original figure is omitted): an $r\times s$ table with cell counts $X_{ij}$, row totals $N_{i·}=\sum_j X_{ij}$, column totals $N_{·j}=\sum_i X_{ij}$, and grand total $n$.

The goal is to prove:

$$\sum_{i=1}^{r} \sum_{j=1}^{s}\frac{\left[X_{ij}-N_{i·}\left(\frac{N_{·j}}{n}\right)\right]^2}{N_{i·}\left(\frac{N_{·j}}{n}\right)}\sim \chi^2\left[(r-1)(s-1)\right]\qquad ①$$

Note:
the left-hand side of ① is a discrete random variable,
while the right-hand side is a continuous distribution, so ① should be read as an asymptotic statement (convergence in distribution as $n\to\infty$).
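
Before the proof, here is a minimal numeric sketch (not part of the derivation; the table values are made up): it computes the left-hand side of ① for a small $2\times 3$ table and checks it against `scipy.stats.chi2_contingency`, which implements the same formula when the Yates correction is disabled.

```python
# A minimal sketch (values are made up): compute the left-hand side of ① for a
# small 2x3 table and compare with scipy's implementation of the same formula.
import numpy as np
from scipy.stats import chi2_contingency

X = np.array([[30, 10, 20],
              [20, 25, 15]])           # observed counts X_ij (r=2, s=3)
n = X.sum()                            # grand total n
N_row = X.sum(axis=1, keepdims=True)   # row totals N_i.
N_col = X.sum(axis=0, keepdims=True)   # column totals N_.j

expected = N_row * (N_col / n)         # N_i. * (N_.j / n)
stat = ((X - expected) ** 2 / expected).sum()

chi2, p_value, dof, _ = chi2_contingency(X, correction=False)
print(stat, chi2)                      # the two values agree
print(dof)                             # (r-1)*(s-1) = 2
```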
----------------------------------------------
Let's first review the multivariate normal distribution. According to [1]:

$$X\sim N(\mu,\Sigma)$$

$$\mu=[E[X_1],E[X_2],\cdots,E[X_s]]^T$$

$$\Sigma=[\operatorname{Cov}(X_i,X_j)]_{1\le i,j\le s}$$

-----------------------------------------------------------------------------------------------

For the $i$-th row,

$$\sum_{j=1}^{s}\frac{\left[X_{ij}-N_{i·}\left(\frac{N_{·j}}{n}\right)\right]^2}{N_{i·}\left(\frac{N_{·j}}{n}\right)}$$

$$=N_{i·}\sum_{j=1}^{s}\frac{\left[\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right]^2}{\frac{N_{·j}}{n}}$$

$$=N_{i·}\left\{\left[\sum_{j=1}^{s-1}\frac{\left[\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right]^2}{\frac{N_{·j}}{n}}\right]+\frac{\left[\frac{X_{is}}{N_{i·}}-\frac{N_{·s}}{n}\right]^2}{\frac{N_{·s}}{n}}\right\}$$

Since $\sum_{j=1}^{s}\frac{X_{ij}}{N_{i·}}=1$ and $\sum_{j=1}^{s}\frac{N_{·j}}{n}=1$, the deviations sum to zero over $j$, so $\frac{X_{is}}{N_{i·}}-\frac{N_{·s}}{n}=-\sum_{j=1}^{s-1}\left(\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right)$, and the last term can be rewritten:

$$=N_{i·}\left\{\left[\sum_{j=1}^{s-1}\frac{\left[\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right]^2}{\frac{N_{·j}}{n}}\right]+\frac{\left[\sum_{j=1}^{s-1}\left(\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right)\right]^2}{\frac{N_{·s}}{n}}\right\}$$
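
A quick numeric sanity check of this rewrite (illustrative values only, assuming both the row proportions and the column proportions sum to 1): folding the $s$-th deviation into the first $s-1$ terms does not change the sum.

```python
# Sanity check (illustrative values): within one row, the deviations
# d_j = X_ij/N_i. - N_.j/n sum to zero, so the s-th term can be folded into
# the square of the sum of the first s-1 deviations without changing the total.
import numpy as np

p = np.array([0.2, 0.3, 0.5])          # column proportions N_.j / n
x = np.array([0.18, 0.35, 0.47])       # row proportions X_ij / N_i. (sum to 1)
d = x - p                              # deviations; d.sum() == 0

full = np.sum(d**2 / p)                                        # sum over all s terms
folded = np.sum(d[:-1]**2 / p[:-1]) + d[:-1].sum()**2 / p[-1]  # s-1 terms + folded last term
print(np.isclose(full, folded))        # True
```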

Let’s set
$$p^*=\left(\frac{N_{·1}}{n},\dots,\frac{N_{·(s-1)}}{n}\right)^T$$

$$\overline{X}^*=\left(\frac{X_{i1}}{N_{i·}},\cdots,\frac{X_{i(s-1)}}{N_{i·}}\right)^T$$

So,

$$N_{i·}\sum_{j=1}^{s}\frac{\left[\frac{X_{ij}}{N_{i·}}-\frac{N_{·j}}{n}\right]^2}{\frac{N_{·j}}{n}}$$

$$=N_{i·}(\overline{X}^*-p^*)^T(\Sigma^*)^{-1}(\overline{X}^*-p^*)$$

where, writing $p_j=\frac{N_{·j}}{n}$,

$$\Sigma^*=\left[\begin{matrix} p_1 & 0 & \cdots & 0 \\ 0 & p_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & p_{s-1} \end{matrix}\right]-\left[\begin{matrix} p_1 \\ p_2 \\ \vdots \\ p_{s-1} \end{matrix}\right]\left[\begin{matrix} p_1 \\ p_2 \\ \vdots \\ p_{s-1} \end{matrix}\right]^T$$

According to the Sherman–Morrison formula (applied to $\Sigma^*=D-p^*(p^*)^T$ with $D=\operatorname{diag}(p_1,\dots,p_{s-1})$, and noting that $1-\sum_{j=1}^{s-1}p_j=p_s$):

$$(\Sigma^*)^{-1}=\left[\begin{matrix} \frac{1}{p_1} & 0 & \cdots & 0 \\ 0 & \frac{1}{p_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{p_{s-1}} \end{matrix}\right]+\frac{1}{p_s}\left[\begin{matrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{matrix}\right]$$
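
Here is a small numeric check of this closed-form inverse (a sketch with made-up probabilities): build $\Sigma^*=\operatorname{diag}(p)-p\,p^T$ from the first $s-1$ probabilities and compare `np.linalg.inv` against $\operatorname{diag}(1/p_j)+\frac{1}{p_s}\mathbf{1}\mathbf{1}^T$.

```python
# Numeric check (made-up probabilities): Sigma* = diag(p) - p p^T over the
# first s-1 categories; its inverse is diag(1/p) + (1/p_s) * (all-ones matrix).
import numpy as np

p_full = np.array([0.1, 0.2, 0.3, 0.4])   # all s probabilities, sum to 1
p, p_s = p_full[:-1], p_full[-1]          # first s-1 probabilities and p_s

sigma_star = np.diag(p) - np.outer(p, p)
inv_closed_form = np.diag(1.0 / p) + np.ones((len(p), len(p))) / p_s
print(np.allclose(np.linalg.inv(sigma_star), inv_closed_form))   # True
```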

Let's set $Y_i=\sqrt{N_{i·}}\,(\Sigma^*)^{-\frac{1}{2}}(\overline{X}^*-p^*)\qquad ②$
According to [3]:
------------------------ the following is quoted from Wikipedia ------------------------
$${\begin{bmatrix}X_{1(1)}\\\vdots \\X_{1(k)}\end{bmatrix}}+{\begin{bmatrix}X_{2(1)}\\\vdots \\X_{2(k)}\end{bmatrix}}+\cdots +{\begin{bmatrix}X_{n(1)}\\\vdots \\X_{n(k)}\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}\left[X_{i(1)}\right]\\\vdots \\\sum _{i=1}^{n}\left[X_{i(k)}\right]\end{bmatrix}}=\sum _{i=1}^{n}\mathbf {X} _{i}$$

and the average is

$${\frac {1}{n}}\sum _{i=1}^{n}\mathbf {X} _{i}={\frac {1}{n}}{\begin{bmatrix}\sum _{i=1}^{n}X_{i(1)}\\\vdots \\\sum _{i=1}^{n}X_{i(k)}\end{bmatrix}}={\begin{bmatrix}{\bar {X}}_{i(1)}\\\vdots \\{\bar {X}}_{i(k)}\end{bmatrix}}={\overline {\mathbf {X} }}_{n}$$
and therefore

$${\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}\left[\mathbf {X} _{i}-\operatorname {E} \left(X_{i}\right)\right]={\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}(\mathbf {X} _{i}-{\boldsymbol {\mu }})={\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right).$$
The multivariate central limit theorem states that
$${\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right)\ {\stackrel {D}{\rightarrow }}\ N_{k}(0,{\boldsymbol {\Sigma }})$$

------------------------ the above is quoted from Wikipedia ------------------------

So, for ②: $\overline{X}^*$ is the average of the $N_{i·}$ i.i.d. one-hot indicator vectors of the observations in row $i$ (restricted to the first $s-1$ categories), whose mean is $p^*$ and whose covariance matrix is $\Sigma^*$. The multivariate CLT therefore gives

$$Y_i\ {\stackrel {D}{\rightarrow }}\ N_{s-1}(\mathbf{0},I_{s-1})\qquad ③$$

where
$\mathbf{0}=[0,0,\dots,0]^T$ and
$I_{s-1}$ is the $(s-1)\times(s-1)$ identity matrix.
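
A simulation sketch of ③ (illustrative, with made-up column probabilities): draw multinomial rows with a large row total, form $Y$ as in ②, and check that its empirical covariance is close to the identity.

```python
# Simulation sketch of ③ (made-up column probabilities): for one row with a
# large total N, Y = sqrt(N) * Sigma*^{-1/2} (Xbar* - p*) should have a
# covariance matrix close to the identity.
import numpy as np

rng = np.random.default_rng(0)
p_full = np.array([0.2, 0.3, 0.5])        # column probabilities N_.j / n
p = p_full[:-1]                           # p* (first s-1 entries)
N, reps = 5000, 20000                     # row total N_i. and number of replications

sigma_star = np.diag(p) - np.outer(p, p)
w, V = np.linalg.eigh(sigma_star)         # inverse square root via eigendecomposition
inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

counts = rng.multinomial(N, p_full, size=reps)   # simulated row counts X_ij
xbar = counts[:, :-1] / N                        # Xbar* for each replication
Y = np.sqrt(N) * (xbar - p) @ inv_sqrt           # inv_sqrt is symmetric
print(np.cov(Y, rowvar=False).round(2))          # approximately the 2x2 identity
```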
Then, for ①:

$$\sum_{i=1}^{r} \sum_{j=1}^{s}\frac{\left[X_{ij}-N_{i·}\left(\frac{N_{·j}}{n}\right)\right]^2}{N_{i·}\left(\frac{N_{·j}}{n}\right)}=\sum_{i=1}^{r}Y_i^TY_i$$

Because of ③, each $Y_i^TY_i$ is asymptotically $\chi^2(s-1)$. The $Y_i$ are not independent across rows, though: since the column proportions $\frac{N_{·j}}{n}$ are computed from the same table, the vectors satisfy the $s-1$ exact linear constraints $\sum_{i=1}^{r}\sqrt{N_{i·}}\,Y_i=\mathbf{0}$. Accounting for these constraints reduces the degrees of freedom from $r(s-1)$ to $(r-1)(s-1)$, so

$$\sum_{i=1}^{r}Y_i^TY_i\sim\chi^2[(s-1)(r-1)]$$
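
As a final sanity check (not part of the proof; all parameters are made up), one can simulate tables whose row and column labels are independent, compute the statistic in ①, and verify that its empirical mean and variance match those of $\chi^2[(r-1)(s-1)]$ (mean $=$ dof, variance $=2\cdot$ dof).

```python
# Monte Carlo sketch (made-up parameters): tables with independent row/column
# labels; the statistic in ① should have mean ≈ (r-1)(s-1) and variance ≈ 2(r-1)(s-1).
import numpy as np

rng = np.random.default_rng(1)
row_p = np.array([0.4, 0.6])              # row probabilities (r = 2)
col_p = np.array([0.2, 0.3, 0.5])         # column probabilities (s = 3)
n, reps = 2000, 5000

stats = []
for _ in range(reps):
    rows = rng.choice(len(row_p), size=n, p=row_p)   # independent row labels
    cols = rng.choice(len(col_p), size=n, p=col_p)   # independent column labels
    X = np.zeros((len(row_p), len(col_p)))
    np.add.at(X, (rows, cols), 1)                    # tabulate the n pairs
    expected = X.sum(1, keepdims=True) * X.sum(0, keepdims=True) / n
    stats.append(((X - expected) ** 2 / expected).sum())

stats = np.array(stats)
dof = (len(row_p) - 1) * (len(col_p) - 1)            # (r-1)(s-1) = 2
print(stats.mean(), dof)                             # both ≈ 2
print(stats.var(), 2 * dof)                          # both ≈ 4
```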

The chi-square statistic was introduced by Pearson [8].

References:
[1] https://en.wikipedia.org/wiki/Multivariate_normal_distribution
[2] "Seven different proofs for the Pearson independence test"
[3] https://en.wikipedia.org/wiki/Central_limit_theorem
[4] https://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2003/lecture-notes/lec23.pdf
[5] https://arxiv.org/pdf/1808.09171.pdf
[6] https://www.math.utah.edu/~davar/ps-pdf-files/Chisquared.pdf
[7] http://personal.psu.edu/drh20/asymp/fall2006/lectures/ANGELchpt07.pdf
[8] https://download.csdn.net/download/appleyuchi/10834144
