1 Small-sample vs. large-sample data

Estimating the parameters of a linear model with small-sample theory has the following drawbacks:

- Small-sample theory requires strict exogeneity, i.e., the regressors must be uncorrelated with the disturbance of every period (the assumption is called "strict" precisely because it is so demanding); large-sample theory only requires the regressors to be uncorrelated with the contemporaneous disturbance.
- Small-sample theory needs the exact finite-sample distribution, whose derivation is tedious; large-sample distributions are asymptotic and easier to derive.
- In terms of data-collection cost, a small sample is cheaper than a large one, and the boundary of "large" is fuzzy (in theory, large-sample results require $n \to \infty$).
2 Large-sample OLS assumptions

2.1 Linearity

The regression model takes the linear form

$$y_i = \boldsymbol x_i^{\prime}\boldsymbol \beta + \varepsilon_i$$
2.2 Asymptotically independent stationary process

The stochastic process $\{y_i,\boldsymbol x_i\}$ is independent across observations that are far apart in time. That is, the random variables may be correlated (non-independent) over short intervals, but there is no "long memory" over long horizons: your behavior today may affect your outcome tomorrow, but not your performance ten years from now.
2.3 Predetermined regressors

All regressors are orthogonal to the contemporaneous disturbance:

$$E(x_{ik}\varepsilon_i) = 0,\quad \forall i,k$$

so that

$$Cov(x_{ik},\varepsilon_i) = E(x_{ik}\varepsilon_i) - E(x_{ik})E(\varepsilon_i) = 0$$

where the subscript $i$ indexes the period (observation) and $k$ indexes the variable. Define the vector for observation $i$:

$$\boldsymbol g_i = \boldsymbol x_i \varepsilon_i = \begin{pmatrix} x_{i1} \\ \vdots \\ x_{iK} \end{pmatrix}\varepsilon_i$$

so that

$$E(\boldsymbol g_i) = E(\boldsymbol x_i \varepsilon_i) = \boldsymbol 0_{K\times 1}$$

The assumption requires each regressor to be orthogonal only to the contemporaneous disturbance; it may be correlated with past and future disturbances. Note: the strict-exogeneity assumption of small-sample OLS is stronger, namely

$$E(x_{ik}\varepsilon_j) = 0,\quad \forall j,k$$
2.4 Rank condition

To rule out (perfect) multicollinearity, the $K \times K$ matrix $E(\boldsymbol x_i \boldsymbol x_i^{\prime})$ has full rank (is nonsingular), so its inverse $E(\boldsymbol x_i \boldsymbol x_i^{\prime})^{-1}$ exists and $E(\boldsymbol x_i \boldsymbol x_i^{\prime})$ is a symmetric positive definite matrix.
2.5 Martingale difference sequence

$\boldsymbol g_i$ is a martingale difference sequence, i.e.

$$E(\boldsymbol g_i \mid \boldsymbol g_{i-1},\cdots,\boldsymbol g_1) = \boldsymbol 0$$

with covariance matrix

$$\boldsymbol S = E(\boldsymbol g_i \boldsymbol g_i^{\prime}) = E(\boldsymbol x_i \varepsilon_i \varepsilon_i^{\prime} \boldsymbol x_i^{\prime}) = E(\varepsilon_i^2 \boldsymbol x_i \boldsymbol x_i^{\prime})$$

Large-sample OLS dispenses with the "strict exogeneity" and "normally distributed disturbance" assumptions, and is therefore closer to empirical practice.
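As a minimal simulated sketch (the data-generating process and all names are illustrative, not from the source), the key orthogonality condition $E(\boldsymbol x_i\varepsilon_i)=\boldsymbol 0$ can be checked through its sample analogue $\bar{\boldsymbol g} = \frac{1}{n}\sum_i \boldsymbol x_i\varepsilon_i$:

```python
import numpy as np

# Illustrative DGP satisfying the large-sample OLS assumptions:
# regressors drawn independently of the disturbance, so E(x_i * eps_i) = 0.
rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=(n, 2))            # two exogenous regressors
X = np.column_stack([np.ones(n), x])   # prepend the constant term
eps = rng.normal(size=n)               # disturbance independent of x

g_bar = X.T @ eps / n                  # sample mean of g_i = x_i * eps_i
print(g_bar)                           # each entry should be near 0
```

With $n = 10^5$ each entry of $\bar{\boldsymbol g}$ is of order $1/\sqrt{n} \approx 0.003$, illustrating the law of large numbers that the consistency proof below relies on.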
3 Derivation of the large-sample OLS estimator

Consider the multiple linear regression model

$$y_i = b_0 + b_1 x_{1i} + \cdots + b_k x_{ki} + \varepsilon_i$$

OLS gives the parameter estimator

$$\boldsymbol b = (\boldsymbol X^{\prime} \boldsymbol X)^{-1} \boldsymbol X^{\prime} \boldsymbol Y$$
where

$$\boldsymbol X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{k1} \\ 1 & x_{12} & \cdots & x_{k2} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1n} & \cdots & x_{kn} \end{pmatrix},\qquad \boldsymbol Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
To obtain the large-sample form of the estimator, first write the data matrix as a stack of observation row vectors

$$\boldsymbol X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{k1} \\ 1 & x_{12} & \cdots & x_{k2} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1n} & \cdots & x_{kn} \end{pmatrix} = \begin{pmatrix} \boldsymbol x_{1}^{\prime} \\ \boldsymbol x_{2}^{\prime} \\ \vdots \\ \boldsymbol x_{n}^{\prime} \end{pmatrix}$$
where $\boldsymbol x_i = (1, x_{1i}, \cdots, x_{ki})^{\prime}$ is the $i$-th observation vector, so
$$\boldsymbol X^{\prime} \boldsymbol X = (\boldsymbol x_{1} \;\; \boldsymbol x_{2} \;\; \cdots \;\; \boldsymbol x_{n}) \begin{pmatrix} \boldsymbol x_{1}^{\prime} \\ \boldsymbol x_{2}^{\prime} \\ \vdots \\ \boldsymbol x_{n}^{\prime} \end{pmatrix} = \boldsymbol x_1 \boldsymbol x_1^{\prime} + \cdots + \boldsymbol x_n \boldsymbol x_n^{\prime} = \sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime} \tag{1}$$
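Identity (1) is easy to verify numerically; a small sketch with illustrative data:

```python
import numpy as np

# Check that X'X equals the sum of outer products x_i x_i' over the rows of X.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 3))])

lhs = X.T @ X
rhs = sum(np.outer(row, row) for row in X)  # sum_i x_i x_i'
print(np.allclose(lhs, rhs))                # True
```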
Next,

$$\boldsymbol X^{\prime} \boldsymbol Y = (\boldsymbol x_{1} \;\; \boldsymbol x_{2} \;\; \cdots \;\; \boldsymbol x_{n}) \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \boldsymbol x_{1} y_1 + \cdots + \boldsymbol x_{n} y_n = \sum_{i=1}^{n} \boldsymbol x_{i} y_i \tag{2}$$
The parameter vector $\boldsymbol b$ can thus be written as

$$\boldsymbol b = (\boldsymbol X^{\prime} \boldsymbol X)^{-1} \boldsymbol X^{\prime} \boldsymbol Y = \left(\frac{\boldsymbol X^{\prime} \boldsymbol X}{n}\right)^{-1} \frac{\boldsymbol X^{\prime} \boldsymbol Y}{n}$$

where $n$ is the number of observations. Substituting (1) and (2) into the above gives

$$\boldsymbol b = \left(\frac{\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}}{n}\right)^{-1} \frac{\sum_{i=1}^{n} \boldsymbol x_{i} y_i}{n}$$
Writing $\boldsymbol S_{XX} = \frac{1}{n}\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}$ and $\boldsymbol S_{XY} = \frac{1}{n}\sum_{i=1}^{n} \boldsymbol x_{i} y_i$, the estimator becomes

$$\boldsymbol b = \boldsymbol S_{XX}^{-1} \boldsymbol S_{XY}$$
In the special case $k=1$ (simple linear regression), the slope estimate is

$$b_1 = \frac{\sum_{i=1}^n (x_{1i} - \bar{x}_1)(y_i - \bar{y})}{\sum_{i=1}^n (x_{1i} - \bar{x}_1)^2} = \frac{S_{XY}}{S_{XX}}$$

That is, the slope of a simple linear regression equals the ratio of the sample covariance between the regressor and the regressand to the sample variance of the regressor. Likewise, for a multiple linear regression, the coefficient vector can be read as the inverse of the regressors' sample (co)variance matrix times the covariance vector between the regressors and the regressand.
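The algebra above can be sketched numerically; the following uses simulated data (all names and values are illustrative) and checks that $\boldsymbol S_{XX}^{-1}\boldsymbol S_{XY}$ agrees with numpy's least-squares solution:

```python
import numpy as np

# Compute b = S_XX^{-1} S_XY and compare with np.linalg.lstsq on the same data.
rng = np.random.default_rng(2)
n = 1_000
beta = np.array([1.0, 2.0, -0.5])

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ beta + rng.normal(size=n)

S_xx = X.T @ X / n                 # (1/n) sum_i x_i x_i'
S_xy = X.T @ y / n                 # (1/n) sum_i x_i y_i
b = np.linalg.solve(S_xx, S_xy)    # b = S_XX^{-1} S_XY

b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_lstsq))     # True
```

Using `np.linalg.solve` on the normal equations rather than forming the inverse explicitly is the standard numerically safer choice.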
4 Properties of the large-sample OLS estimator

4.1 Consistency

$\boldsymbol b$ is a consistent estimator, i.e. $\operatorname{plim}_{n \to \infty} \boldsymbol b = \boldsymbol \beta$, where $\boldsymbol \beta$ is the population parameter. A sketch of the proof:

$$\boldsymbol{b}-\boldsymbol{\beta}=(\boldsymbol{X}^{\prime} \boldsymbol{X})^{-1} \boldsymbol{X}^{\prime} \boldsymbol{\varepsilon}=\left(\frac{\boldsymbol{X}^{\prime} \boldsymbol{X}}{n}\right)^{-1} \frac{\boldsymbol{X}^{\prime} \boldsymbol{\varepsilon}}{n}=\left(\frac{\sum_{i=1}^{n} \boldsymbol{x}_{i} \boldsymbol{x}_{i}^{\prime}}{n}\right)^{-1}\left(\frac{\sum_{i=1}^{n} \boldsymbol{x}_{i} \varepsilon_{i}}{n}\right)=\boldsymbol{S}_{XX}^{-1} \overline{\boldsymbol{g}}$$

By Assumption 2.2,

$$\boldsymbol{S}_{XX} \equiv \frac{1}{n} \sum_{i=1}^{n} \boldsymbol{x}_{i} \boldsymbol{x}_{i}^{\prime} \stackrel{p}{\longrightarrow} E\left(\boldsymbol{x}_{i} \boldsymbol{x}_{i}^{\prime}\right)$$

and

$$\overline{\boldsymbol{g}} \equiv \frac{1}{n} \sum_{i=1}^{n} \boldsymbol{g}_{i} \stackrel{p}{\longrightarrow} E\left(\boldsymbol{g}_{i}\right)=E\left(\boldsymbol{x}_{i} \varepsilon_{i}\right)=\boldsymbol{0}$$

hence

$$\boldsymbol{b}-\boldsymbol{\beta} \stackrel{p}{\longrightarrow} \boldsymbol{0}$$

This shows that the premise for $\boldsymbol b$ being consistent is that the regressors are orthogonal to (uncorrelated with) the contemporaneous disturbance.
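Consistency can be illustrated by simulation (an informal sketch with an illustrative DGP, not a proof): the estimation error $\|\boldsymbol b - \boldsymbol\beta\|$ should shrink roughly like $1/\sqrt{n}$ as the sample grows.

```python
import numpy as np

# Fit OLS on increasingly large samples from the same DGP and
# track the worst-case coefficient error |b_k - beta_k|.
rng = np.random.default_rng(3)
beta = np.array([1.0, 0.5])

def ols_error(n):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta + rng.normal(size=n)
    b = np.linalg.solve(X.T @ X, X.T @ y)
    return np.max(np.abs(b - beta))

errors = [ols_error(n) for n in (100, 10_000, 1_000_000)]
print(errors)   # shrinks as n grows, roughly by 1/sqrt(n)
```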
4.2 Asymptotic normality

If $\boldsymbol g_i$ is a martingale difference sequence, then

$$\sqrt{n}(\boldsymbol b - \boldsymbol \beta) \stackrel{d}{\longrightarrow} N(\boldsymbol 0, Avar(\boldsymbol b))$$

where (writing $\boldsymbol S_{XX}$ here for its probability limit $E(\boldsymbol x_i \boldsymbol x_i^{\prime})$)

$$\begin{aligned} Avar(\boldsymbol b) &= \boldsymbol S_{XX}^{-1}\boldsymbol S\, \boldsymbol S_{XX}^{-1}=[E(\boldsymbol x_i \boldsymbol x_i^{\prime})]^{-1}\, \boldsymbol S\, [E(\boldsymbol x_i \boldsymbol x_i^{\prime})]^{-1}\\ \boldsymbol S &= E(\boldsymbol g_i \boldsymbol g_i^{\prime}) = E(\boldsymbol x_i \varepsilon_i \varepsilon_i^{\prime} \boldsymbol x_i^{\prime}) = E(\varepsilon_i^{2} \boldsymbol x_i \boldsymbol x_i^{\prime}) \end{aligned}$$
A consistent estimate of $\boldsymbol S$ is obtained from the sample residuals and the regressor observations:

$$\hat{\boldsymbol S} = \frac{1}{n}\sum_{i=1}^n e_i^2 \boldsymbol x_i \boldsymbol x_i^{\prime}$$
so a consistent estimator of the asymptotic variance of $\boldsymbol b$ is

$$\begin{aligned} \widehat{Avar}(\boldsymbol b) &= \left(\frac{1}{n}\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}\right)^{-1} \left(\frac{1}{n}\sum_{i=1}^n e_i^2 \boldsymbol x_i \boldsymbol x_i^{\prime}\right) \left(\frac{1}{n}\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}\right)^{-1}\\ &= n\left(\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}\right)^{-1} \sum_{i=1}^n e_i^2 \boldsymbol x_i \boldsymbol x_i^{\prime} \left(\sum_{i=1}^n \boldsymbol x_i \boldsymbol x_i^{\prime}\right)^{-1} \end{aligned}$$
If $\hat{\boldsymbol S}$ is a consistent estimator of $\boldsymbol S$, then $\boldsymbol S_{XX}^{-1}\hat{\boldsymbol S}\boldsymbol S_{XX}^{-1}$ is a consistent estimator of $\boldsymbol S_{XX}^{-1}\boldsymbol S\, \boldsymbol S_{XX}^{-1}$ (proof omitted).
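The sandwich estimator $\widehat{Avar}(\boldsymbol b) = \boldsymbol S_{XX}^{-1}\hat{\boldsymbol S}\boldsymbol S_{XX}^{-1}$ is straightforward to compute; a sketch with an illustrative heteroskedastic DGP (all names are made up for the example):

```python
import numpy as np

# Heteroskedastic errors: Var(eps_i | x_i) grows with |x_i|, so the
# sandwich form S_XX^{-1} S_hat S_XX^{-1} is the appropriate variance.
rng = np.random.default_rng(4)
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n) * (1 + np.abs(X[:, 1]))
y = X @ np.array([1.0, 2.0]) + eps

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                              # OLS residuals

S_xx = X.T @ X / n                         # (1/n) sum x_i x_i'
S_hat = (X * e[:, None] ** 2).T @ X / n    # (1/n) sum e_i^2 x_i x_i'
avar = np.linalg.inv(S_xx) @ S_hat @ np.linalg.inv(S_xx)

se_robust = np.sqrt(np.diag(avar) / n)     # SE*(b_k), used in section 5
print(se_robust)
```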
5 Large-sample OLS hypothesis testing

The null hypothesis on a coefficient is

$$H_0:\ \beta_k = \bar\beta_k$$

Under $H_0$, $\sqrt{n}(b_k - \bar\beta_k) \stackrel{d}{\longrightarrow} N(0, Avar(b_k))$, where $b_k$ is the $k$-th element of the OLS estimator $\boldsymbol b$. Define the $t$ statistic

$$t_{k} \equiv \frac{\sqrt{n}\left(b_{k}-\bar{\beta}_{k}\right)}{\sqrt{\widehat{Avar}\left(b_{k}\right)}}=\frac{b_{k}-\bar{\beta}_{k}}{\sqrt{\frac{1}{n} \widehat{Avar}\left(b_{k}\right)}} \equiv \frac{b_{k}-\bar{\beta}_{k}}{\mathrm{SE}^{*}\left(b_{k}\right)} \stackrel{d}{\longrightarrow} N(0,1)\tag{3}$$
where

$$\mathrm{SE}^{*}\left(b_{k}\right) \equiv \sqrt{\frac{1}{n} \widehat{Avar}\left(b_{k}\right)}=\sqrt{\frac{1}{n}\left(\boldsymbol S_{XX}^{-1} \hat{\boldsymbol S}\, \boldsymbol S_{XX}^{-1}\right)_{kk}}$$

is the heteroskedasticity-robust standard error. It serves two purposes:

- it gives the standard error of the large-sample estimator;
- it does not rely on the spherical-disturbance assumption, so it remains valid under heteroskedasticity (suitably generalized versions also handle autocorrelation).
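A sketch of the robust $t$ test in (3) on simulated data (DGP and values are illustrative): we test a true null $H_0:\beta_1 = 2$ with heteroskedastic errors, so the robust $\mathrm{SE}^{*}$ is the appropriate denominator.

```python
import numpy as np

# Robust t statistic: t_k = (b_k - beta_bar_k) / SE*(b_k), where
# (X'X)^{-1} (sum e_i^2 x_i x_i') (X'X)^{-1} equals (1/n) * Avar_hat(b).
rng = np.random.default_rng(5)
n = 10_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + X[:, 1] ** 2)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

XtX_inv = np.linalg.inv(X.T @ X)
avar_over_n = XtX_inv @ (X * e[:, None] ** 2).T @ X @ XtX_inv
se_star = np.sqrt(np.diag(avar_over_n))    # robust SE*(b_k)

t1 = (b[1] - 2.0) / se_star[1]             # H0 is true, so |t1| is small
print(t1)
```

Since $H_0$ holds in this DGP, $t_1$ is approximately standard normal, so values far outside $\pm 2$ would be rare.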
Under conditional homoskedasticity, $E(\varepsilon_{i}^{2} \mid \boldsymbol x_{i})=\sigma^{2}>0$, the robust standard error reduces to the classical standard error, because

$$\boldsymbol S \equiv E\left(\boldsymbol x_{i} \boldsymbol x_{i}^{\prime} \varepsilon_{i}^{2}\right)=E_{\boldsymbol x_{i}} E\left(\boldsymbol x_{i} \boldsymbol x_{i}^{\prime} \varepsilon_{i}^{2} \mid \boldsymbol x_{i}\right)=E_{\boldsymbol x_{i}}\left[\boldsymbol x_{i} \boldsymbol x_{i}^{\prime} E\left(\varepsilon_{i}^{2} \mid \boldsymbol x_{i}\right)\right]=\sigma^{2} E\left(\boldsymbol x_{i} \boldsymbol x_{i}^{\prime}\right)$$
Since $s^{2} \stackrel{p}{\longrightarrow} \sigma^{2}$ and $\boldsymbol S_{XX} \stackrel{p}{\longrightarrow} E\left(\boldsymbol x_{i} \boldsymbol x_{i}^{\prime}\right)$, we obtain

$$\begin{aligned} \widehat{Avar}(\boldsymbol{b}) &= \boldsymbol{S}_{XX}^{-1}\left(s^{2} \boldsymbol{S}_{XX}\right) \boldsymbol{S}_{XX}^{-1}=s^{2} \boldsymbol{S}_{XX}^{-1}=s^{2}\left(\frac{1}{n} \boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1}=n s^{2}\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)^{-1} \\ \mathrm{SE}^{*}\left(b_{k}\right) &= \sqrt{\frac{1}{n} \widehat{Avar}\left(b_{k}\right)}=\sqrt{\frac{1}{n} \cdot n s^{2}\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)_{kk}^{-1}}=\sqrt{s^{2}\left(\boldsymbol{X}^{\prime} \boldsymbol{X}\right)_{kk}^{-1}} \end{aligned}$$
References

Chen Qiang. Advanced Econometrics [M]. Higher Education Press.