Time Series Review: Chapter 6
June 11–13, 2022
Contents
- Time Series Review: Chapter 6
- Chapter 6 Multiequation Time Series Models
- Autoregressive Distributed Lag (ADL) Model
- Cross-correlation Function (CCF)
- Structural Vector Autoregression (VAR)
- Weak Stationarity
- Stationarity Conditions for VAR(1)
- Identification of Structural VAR: Cholesky Decomposition
- Bivariate VAR(1)
- Forecast Error Variance Decomposition
- k-dimensional VAR(p) Models
- VAR Models: Order Specification
- Granger Causality Tests
- A Revisit of the Motivating Example
Chapter 6 Multiequation Time Series Models
Relationships among different variables.
Autoregressive Distributed Lag (ADL) Model
- When additional predictors and their lags are added to an autoregression, the result is an autoregressive distributed lag model.
- The autoregressive distributed lag model with $p$ lags of $y_t$ and $q$ lags of $x_t$, denoted $\operatorname{ADL}(p, q)$, is
$$
y_{t}=\phi_{0}+\phi_{1} y_{t-1}+\phi_{2} y_{t-2}+\cdots+\phi_{p} y_{t-p}+\gamma_{1} x_{t-1}+\gamma_{2} x_{t-2}+\cdots+\gamma_{q} x_{t-q}+\epsilon_{y t},
\quad \text{or} \quad \Phi(L) y_{t}=\phi_{0}+\Gamma(L) x_{t-1}+\epsilon_{y t},
$$
where $\Phi(L)=1-\phi_{1} L-\cdots-\phi_{p} L^{p}$ and $\Gamma(L)=\gamma_{1}+\gamma_{2} L+\cdots+\gamma_{q} L^{q-1}$ are polynomials in the lag operator $L$.
More generally, we could include the contemporaneous value of $x_t$:
$$
y_{t}=\phi_{0}+\phi_{1} y_{t-1}+\phi_{2} y_{t-2}+\cdots+\phi_{p} y_{t-p}+\gamma_{0} x_{t}+\gamma_{1} x_{t-1}+\gamma_{2} x_{t-2}+\cdots+\gamma_{q} x_{t-q}+\epsilon_{y t},
\quad \text{or} \quad \Phi(L) y_{t}=\phi_{0}+\Psi(L) x_{t}+\epsilon_{y t},
$$
where $\Phi(L)=1-\phi_{1} L-\cdots-\phi_{p} L^{p}$ and $\Psi(L)=\gamma_{0}+\gamma_{1} L+\cdots+\gamma_{q} L^{q}$.
(1) $\{x_t\}$ is exogenous and evolves independently of $\{y_t\}$:
$$
x_{t}=\delta_{0}+\delta_{1} x_{t-1}+\cdots+\delta_{r} x_{t-r}+\epsilon_{x t}, \quad \text{or} \quad D(L) x_{t}=\delta_{0}+\epsilon_{x t},
$$
where $\{\epsilon_{x t}\}$ is independent of $\{\epsilon_{y t}\}$.
(2) $\{y_t\}$ and $\{x_t\}$ are stationary.
If all regressors are dated $t-1$ or earlier, the model can be used for forecasting; including period-$t$ regressors allows estimation of dynamic causal effects.
Cross-correlation Function (CCF)
- The cross-correlation between $y_t$ and $x_{t-k}$ is defined as
$$
\rho_{y x}(k) \equiv \frac{\operatorname{cov}\left(y_{t}, x_{t-k}\right)}{\sqrt{\operatorname{var}\left(y_{t}\right) \operatorname{var}\left(x_{t}\right)}} .
$$
- Under the stationarity condition, $\rho_{yx}(k)$ is invariant to $t$.
- Unlike autocorrelation, $\rho_{yx}(k) \neq \rho_{yx}(-k)$ in general.
- Plotting each value of $\rho_{yx}(k)$ for $k \geq 0$ yields the cross-correlation function (CCF) or cross-correlogram.
- The examination of the sample CCF provides the same type of information as the sample ACF in an ARMA model.
The CCF can be computed in a Yule–Walker fashion.
In an ADL model, when $x_t$ is white noise:
- $\rho_{yx}(k)$ is zero until the first nonzero coefficient on the lags of $x_t$;
- a spike at lag $d$ indicates that $x_{t-d}$ directly affects $y_t$;
- the decay pattern of $\rho_{yx}(k)$ is determined by the AR part of $y_t$.
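As an illustration (not from the notes), a minimal numpy sketch with hypothetical parameter values: simulate $y_t = 0.5\,y_{t-1} + 2\,x_{t-3} + \epsilon_t$ with white-noise $x_t$, and compute the sample CCF. The CCF should be near zero at lags 0–2, spike at lag 3, and then decay geometrically at the AR rate 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = rng.standard_normal(T)            # exogenous white noise
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(3, T):
    # hypothetical ADL-type DGP: y_t = 0.5 y_{t-1} + 2 x_{t-3} + eps_t
    y[t] = 0.5 * y[t - 1] + 2.0 * x[t - 3] + eps[t]

def sample_ccf(y, x, max_lag):
    """rho_yx(k) = corr(y_t, x_{t-k}) for k = 0..max_lag."""
    yc, xc = y - y.mean(), x - x.mean()
    denom = np.sqrt((yc**2).mean() * (xc**2).mean())
    return np.array([(yc[k:] * xc[:len(xc) - k]).mean() / denom
                     for k in range(max_lag + 1)])

ccf = sample_ccf(y, x, 6)
# expect: ccf[0..2] near 0, a spike at ccf[3], then geometric decay
```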
Estimation: OLS or MLE
Model diagnostics:
the estimated residuals should behave as a white noise process and be uncorrelated with $\{x_t, x_{t-1}, \cdots\}$.
- If the residuals are not white noise, the lag specification on $y_t$ is not adequate.
- If the residuals are correlated with the $\{x_t\}$ series, the lag specification on $x_t$ is not adequate.
- Estimate equation: ARDL.
- Options for the coefficient covariance:
  - ordinary: constant (homoskedastic) error variance
  - White: robust to heteroskedasticity
  - HAC: robust to heteroskedasticity and autocorrelation across periods
- Use AIC/SBC to determine the lag lengths.
- Test the significance of coefficients: Wald test (coefficient restrictions).
- Re-estimate.
- Model diagnostics:
  - lags of $y_t$: residual diagnostics — correlogram and Q-statistics (are the residuals white noise?)
  - lags of $x_t$: residuals and the regressor series as a group — CCF (are the residuals correlated with the $\{x_t\}$ series?)
Wald test: coefficient restrictions.
What if $\{x_t\}$ is endogenous?
Structural Vector Autoregression (VAR)
We treat both $y_t$ and $x_t$ as endogenous variables:
$$
\begin{aligned}
&y_{t}-b_{12}^{(0)} x_{t}=c_{10}+b_{11}^{(1)} y_{t-1}+b_{12}^{(1)} x_{t-1}+\epsilon_{y t} \\
&x_{t}-b_{21}^{(0)} y_{t}=c_{20}+b_{21}^{(1)} y_{t-1}+b_{22}^{(1)} x_{t-1}+\epsilon_{x t}
\end{aligned}
\qquad
\left[\begin{array}{c} \epsilon_{y t} \\ \epsilon_{x t} \end{array}\right] \stackrel{i.i.d.}{\sim} N\left(0,\left[\begin{array}{cc} \sigma_{y}^{2} & 0 \\ 0 & \sigma_{x}^{2} \end{array}\right]\right)
$$
- It is a first-order structural vector autoregression (VAR) that incorporates feedback, because $y_t$ and $x_t$ are allowed to affect each other.
- $\{\epsilon_{y t}, \epsilon_{x t}\}$ are called the structural shocks. If $b_{12}^{(0)} \neq 0$, $\epsilon_{x t}$ has an indirect contemporaneous effect on $y_t$; and if $b_{21}^{(0)} \neq 0$, $\epsilon_{y t}$ has an indirect contemporaneous effect on $x_t$.
Equivalently, we can write
$$
\left[\begin{array}{cc} 1 & -b_{12}^{(0)} \\ -b_{21}^{(0)} & 1 \end{array}\right]\left[\begin{array}{l} y_{t} \\ x_{t} \end{array}\right]=\left[\begin{array}{l} c_{10} \\ c_{20} \end{array}\right]+\left[\begin{array}{cc} b_{11}^{(1)} & b_{12}^{(1)} \\ b_{21}^{(1)} & b_{22}^{(1)} \end{array}\right]\left[\begin{array}{l} y_{t-1} \\ x_{t-1} \end{array}\right]+\left[\begin{array}{c} \epsilon_{y t} \\ \epsilon_{x t} \end{array}\right],
$$
or $B_{0} z_{t}=c+B_{1} z_{t-1}+\epsilon_{t}$, with $\epsilon_{t} \stackrel{i.i.d.}{\sim} N\left(0,\left[\begin{array}{cc} \sigma_{y}^{2} & 0 \\ 0 & \sigma_{x}^{2} \end{array}\right]\right)$.
OLS can no longer be used here, because of simultaneous-equations bias.
We can then pre-multiply both sides by $B_0^{-1}$ (for a $2 \times 2$ matrix, the adjugate swaps the diagonal entries and negates the off-diagonal entries):
$$
B_{0}^{-1}=\frac{1}{1-b_{12}^{(0)} b_{21}^{(0)}}\left[\begin{array}{cc} 1 & b_{12}^{(0)} \\ b_{21}^{(0)} & 1 \end{array}\right],
$$
to give
$$
z_{t}=B_{0}^{-1} c+B_{0}^{-1} B_{1} z_{t-1}+B_{0}^{-1} \epsilon_{t} .
$$
This yields a reduced-form model. Define
$$
\Phi_{0} \equiv B_{0}^{-1} c, \quad \Phi_{1} \equiv B_{0}^{-1} B_{1}, \quad a_{t} \equiv B_{0}^{-1} \epsilon_{t}, \quad \Sigma \equiv B_{0}^{-1}\left[\begin{array}{cc} \sigma_{y}^{2} & 0 \\ 0 & \sigma_{x}^{2} \end{array}\right]\left(B_{0}^{-1}\right)^{\prime},
$$
so that
$$
z_{t}=\Phi_{0}+\Phi_{1} z_{t-1}+a_{t}, \quad a_{t}=\left[\begin{array}{l} a_{1 t} \\ a_{2 t} \end{array}\right] \stackrel{i.i.d.}{\sim} N(0, \Sigma) . \tag{7}
$$
- Eq. (7) is a first-order vector autoregression, $\operatorname{VAR}(1)$, which can be estimated by OLS.
- In the econometric literature, the $\operatorname{VAR}(1)$ model is also called a reduced-form model because it does not show explicitly the concurrent dependence between $y_t$ and $x_t$.
- In general, $a_{1t}$ and $a_{2t}$ are correlated.
- $\operatorname{cov}\left(a_{1 t}, a_{2 t}\right)=0$ if $b_{12}^{(0)}=b_{21}^{(0)}=0$.
- $\Sigma$ is a symmetric matrix, so the number of free parameters is reduced.
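A small numerical check of the mapping above, using hypothetical structural parameter values ($b_{12}^{(0)}=0.4$, $b_{21}^{(0)}=0.3$, $\sigma_y^2=1$, $\sigma_x^2=2$ are illustrative, not from the notes): the reduced-form error covariance $\Sigma = B_0^{-1}\operatorname{diag}(\sigma_y^2,\sigma_x^2)(B_0^{-1})'$ has a nonzero off-diagonal entry whenever a contemporaneous coefficient is nonzero.

```python
import numpy as np

# Illustrative structural parameters (hypothetical values)
b12, b21 = 0.4, 0.3            # contemporaneous coefficients b12^(0), b21^(0)
sig_y2, sig_x2 = 1.0, 2.0      # structural shock variances

B0 = np.array([[1.0, -b12],
               [-b21, 1.0]])
B0_inv = np.linalg.inv(B0)

# Reduced-form error covariance: Sigma = B0^{-1} diag(sig) B0^{-1}'
Sigma = B0_inv @ np.diag([sig_y2, sig_x2]) @ B0_inv.T

# a_1t and a_2t are correlated whenever b12 or b21 is nonzero
cov_a = Sigma[0, 1]
```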
Weak Stationarity
The first and second moments of $z_t$ are time-invariant.
Both
$$
E\left(z_{t}\right)=\left[\begin{array}{c} E\left(y_{t}\right) \\ E\left(x_{t}\right) \end{array}\right] \equiv \mu
$$
and
$$
\operatorname{var}\left(z_{t}\right)=\left[\begin{array}{cc} \operatorname{var}\left(y_{t}\right) & \operatorname{cov}\left(y_{t}, x_{t}\right) \\ \operatorname{cov}\left(x_{t}, y_{t}\right) & \operatorname{var}\left(x_{t}\right) \end{array}\right] \equiv \Gamma_{0}=\left[\begin{array}{cc} \Gamma_{11}(0) & \Gamma_{12}(0) \\ \Gamma_{21}(0) & \Gamma_{22}(0) \end{array}\right]
$$
are time-invariant.
Lag-$k$ cross-covariance matrices of $z_t$:
$$
\Gamma_{k}=\left[\begin{array}{ll} \Gamma_{11}(k) & \Gamma_{12}(k) \\ \Gamma_{21}(k) & \Gamma_{22}(k) \end{array}\right] \equiv \operatorname{cov}\left(z_{t}, z_{t-k}\right)=\left[\begin{array}{cc} \operatorname{cov}\left(y_{t}, y_{t-k}\right) & \operatorname{cov}\left(y_{t}, x_{t-k}\right) \\ \operatorname{cov}\left(x_{t}, y_{t-k}\right) & \operatorname{cov}\left(x_{t}, x_{t-k}\right) \end{array}\right]
$$
- For a weakly stationary series, $\Gamma_k$ is invariant to $t$.
- $\Gamma_k$ is NOT symmetric if $k \neq 0$. Consider $\Gamma_1$: in general $\operatorname{cov}\left(y_{t}, x_{t-1}\right) \neq \operatorname{cov}\left(x_{t}, y_{t-1}\right)$, so its off-diagonal entries differ.
Let the diagonal matrix $D$ be
$$
D \equiv\left[\begin{array}{cc} \operatorname{std}\left(y_{t}\right) & 0 \\ 0 & \operatorname{std}\left(x_{t}\right) \end{array}\right]=\left[\begin{array}{cc} \sqrt{\Gamma_{11}(0)} & 0 \\ 0 & \sqrt{\Gamma_{22}(0)} \end{array}\right] .
$$
The concurrent cross-correlation matrix:
$$
\rho_{0} \equiv \operatorname{corr}\left(z_{t}, z_{t}\right)=\left[\begin{array}{cc} 1 & \operatorname{corr}\left(y_{t}, x_{t}\right) \\ \operatorname{corr}\left(x_{t}, y_{t}\right) & 1 \end{array}\right]=D^{-1} \Gamma_{0} D^{-1}
$$
Lag-$k$ cross-correlation matrix (CCM):
$$
\rho_{k}=\left[\begin{array}{ll} \rho_{11}(k) & \rho_{12}(k) \\ \rho_{21}(k) & \rho_{22}(k) \end{array}\right] \equiv \operatorname{corr}\left(z_{t}, z_{t-k}\right)=\left[\begin{array}{ll} \operatorname{corr}\left(y_{t}, y_{t-k}\right) & \operatorname{corr}\left(y_{t}, x_{t-k}\right) \\ \operatorname{corr}\left(x_{t}, y_{t-k}\right) & \operatorname{corr}\left(x_{t}, x_{t-k}\right) \end{array}\right]=D^{-1} \Gamma_{k} D^{-1}
$$
Estimate CCM : Sample Cross-Correlation Matrices
Replace the population quantities in the formulas above with their sample estimates.
- The cross-covariance matrix $\Gamma_k$ can be estimated by $\widehat{\Gamma}_{k}=\frac{1}{T} \sum_{t=k+1}^{T}\left(z_{t}-\bar{z}\right)\left(z_{t-k}-\bar{z}\right)^{\prime}$ for $k \geq 0$, where $\bar{z}=\left(\sum_{t=1}^{T} z_{t}\right) / T$ is the vector of sample means.
- The cross-correlation matrix $\rho_k$ is estimated by $\widehat{\rho}_{k}=\widehat{D}^{-1} \widehat{\Gamma}_{k} \widehat{D}^{-1}$ for $k \geq 0$, where $\widehat{D}$ is the $2 \times 2$ diagonal matrix of the sample standard deviations of the component series.
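The two estimators above can be sketched directly in numpy (the white-noise input series is simulated for illustration):

```python
import numpy as np

def sample_ccm(z, max_lag):
    """Sample cross-covariance Gamma_k and cross-correlation rho_k, k = 0..max_lag.
    z : (T, k) array of observations, one row per period."""
    T = z.shape[0]
    zc = z - z.mean(axis=0)
    # Gamma_hat_k = (1/T) sum_{t=k+1}^{T} (z_t - zbar)(z_{t-k} - zbar)'
    gammas = [(zc[k:].T @ zc[:T - k]) / T for k in range(max_lag + 1)]
    D_inv = np.diag(1.0 / np.sqrt(np.diag(gammas[0])))
    rhos = [D_inv @ g @ D_inv for g in gammas]
    return gammas, rhos

rng = np.random.default_rng(1)
z = rng.standard_normal((2000, 2))    # bivariate white noise
gammas, rhos = sample_ccm(z, 3)
# rho_0 has unit diagonal; for white noise all lagged correlations are near 0
```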
Stationarity Conditions for VAR(1)
Iterating from $z_t$ back to $z_0$ yields
$$
z_{t}=\sum_{j=0}^{t-1} \Phi_{1}^{j} \Phi_{0}+\Phi_{1}^{t} z_{0}+\sum_{j=0}^{t-1} \Phi_{1}^{j} a_{t-j},
$$
where $\Phi_{1}^{0}=I_{2}$, the identity matrix. Continuing to iterate backward another $n$ periods, we obtain
$$
z_{t}=\sum_{j=0}^{t+n-1} \Phi_{1}^{j} \Phi_{0}+\Phi_{1}^{t+n} z_{-n}+\sum_{j=0}^{t+n-1} \Phi_{1}^{j} a_{t-j} . \tag{12}
$$
Stability requires that $\lim_{n \rightarrow \infty} \Phi_{1}^{n}=0$, where $\Phi_{1}=\left[\begin{array}{ll}\phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22}\end{array}\right]$. Equivalent conditions:
- Eigenvalues of $\Phi_1$ are all smaller than 1 in absolute value.
- Roots of $\operatorname{det}\left(\Phi_{1}-\alpha I\right)=0$ are all smaller than 1 in modulus.
- Roots of $\operatorname{det}\left(I-\Phi_{1} \lambda\right)=0$ are all larger than 1 in modulus.
- Roots of $\left(1-\phi_{11} \lambda\right)\left(1-\phi_{22} \lambda\right)-\phi_{12} \phi_{21} \lambda^{2}=0$ lie outside the unit circle.
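The eigenvalue condition is easy to check numerically; a sketch with a hypothetical coefficient matrix (its eigenvalues are 0.7 and 0.4, so the VAR(1) is stable):

```python
import numpy as np

# Hypothetical VAR(1) coefficient matrix
Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.6]])

eigvals = np.linalg.eigvals(Phi1)
stationary = bool(np.all(np.abs(eigvals) < 1))   # all eigenvalues inside unit circle

# Equivalently, Phi1^n -> 0 as n grows
Phi1_pow = np.linalg.matrix_power(Phi1, 200)
```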
Stability, combined with the process having started in the distant past (or always having been at its equilibrium representation), yields stationarity.
Recall
$$
z_{t}=\Phi_{0}+\Phi_{1} z_{t-1}+a_{t}, \quad a_{t}=\left[\begin{array}{l} a_{1 t} \\ a_{2 t} \end{array}\right] \stackrel{i.i.d.}{\sim} N(0, \Sigma),
\qquad
z_{t}=\sum_{j=0}^{t-1} \Phi_{1}^{j} \Phi_{0}+\Phi_{1}^{t} z_{0}+\sum_{j=0}^{t-1} \Phi_{1}^{j} a_{t-j} .
$$
If $z_t$ is stationary:
- $\mu=E\left(z_{t}\right)=\left(I-\Phi_{1}\right)^{-1} \Phi_{0}$.
- $\Gamma_{0}=\operatorname{var}\left(z_{t}\right)=\sum_{i=0}^{\infty} \Phi_{1}^{i} \Sigma\left(\Phi_{1}^{i}\right)^{\prime}$ (in Eq. (12), let $n \rightarrow \infty$).
- $\Gamma_{k}=\Phi_{1} \Gamma_{k-1}$, for $k \geq 1$.
- $\rho_{k}=\Upsilon \rho_{k-1}$, for $k \geq 1$, where $\Upsilon=D^{-1} \Phi_{1} D$.
Estimate each equation separately by OLS or seemingly unrelated regression (SUR).
- One class of multi-equation models is "simultaneous equations," in which the equations are intrinsically linked: an explanatory variable of one equation is the dependent variable of another.
- The other class is "seemingly unrelated regressions" (SUR, or SURE), in which the variables of the different equations are not intrinsically linked, but the error terms across equations are correlated.
Forecasting amounts to iterating forward; solving the model amounts to iterating backward.
Identification of Structural VAR: Cholesky Decomposition
- If we could identify $B_0$, then we could retrieve $c$ and $B_1$.
- Sims's recursive ordering: one of the coefficients on the contemporaneous terms is zero (e.g. $b_{12}^{(0)}=0$), i.e. $B_0$ and $B_0^{-1}$ are lower triangular matrices with unit diagonal elements.
- Cholesky decomposition: $\Sigma=L G L^{\prime}$, where $L$ is a lower triangular matrix with unit diagonal elements and $G$ is a diagonal matrix.
- Then we have $B_{0}=L^{-1}$, $c=L^{-1} \Phi_{0}$, $B_{1}=L^{-1} \Phi_{1}$, $\epsilon_{t}=L^{-1} a_{t}$, and $\left[\begin{array}{cc}\sigma_{y}^{2} & 0 \\ 0 & \sigma_{x}^{2}\end{array}\right]=G$.
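The $\Sigma = L G L'$ factorization can be obtained from the ordinary Cholesky factor $\Sigma = CC'$ by rescaling its columns. A sketch with a hypothetical reduced-form covariance matrix:

```python
import numpy as np

# Hypothetical reduced-form error covariance
Sigma = np.array([[1.5, 0.6],
                  [0.6, 1.2]])

# Sigma = L G L' with L unit-lower-triangular and G diagonal:
C = np.linalg.cholesky(Sigma)   # Sigma = C C', C lower triangular
d = np.diag(C)
L = C / d                       # divide column j by d[j] -> unit diagonal
G = np.diag(d**2)               # structural shock variances

B0 = np.linalg.inv(L)           # B0 = L^{-1}: lower triangular, unit diagonal
```

By construction $L G L' = C C' = \Sigma$, and $B_0 \Sigma B_0' = G$, i.e. pre-multiplying by $B_0$ orthogonalizes the reduced-form errors.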
Bivariate VAR(1)
Consider the stationary two-variable $\operatorname{VAR}(1)$ model
$$
z_{t}=\Phi_{0}+\Phi_{1} z_{t-1}+a_{t} .
$$
In Eq. (12), let $n \rightarrow \infty$; then we obtain the $\operatorname{VMA}(\infty)$ representation
$$
z_{t}=\mu+\sum_{s=0}^{\infty} \Psi_{s} a_{t-s},
$$
where $\Psi_{s}=\Phi_{1}^{s}$ and $\mu=\left(I-\Phi_{1}\right)^{-1} \Phi_{0}$.
Recall Eq. (12):
$$
z_{t}=\sum_{j=0}^{t+n-1} \Phi_{1}^{j} \Phi_{0}+\Phi_{1}^{t+n} z_{-n}+\sum_{j=0}^{t+n-1} \Phi_{1}^{j} a_{t-j} .
$$
The impulse response functions: the partial derivatives of $y$ and $x$ with respect to shocks in the residual terms.
Denote $\Psi_{s}=\left[\begin{array}{ll}\psi_{11}(s) & \psi_{12}(s) \\ \psi_{21}(s) & \psi_{22}(s)\end{array}\right]$.
- $\psi_{11}(s)=\frac{\partial y_{t}}{\partial a_{1, t-s}}, \quad \psi_{12}(s)=\frac{\partial y_{t}}{\partial a_{2, t-s}}$.
- $\psi_{21}(s)=\frac{\partial x_{t}}{\partial a_{1, t-s}}, \quad \psi_{22}(s)=\frac{\partial x_{t}}{\partial a_{2, t-s}}$.
- The four sets of coefficients $\psi_{11}(s), \psi_{12}(s), \psi_{21}(s), \psi_{22}(s)$ are called the impulse response functions.
- A plot of $\psi_{ij}(s)$ as a function of $s$ is a practical way to visually represent the behavior of $\left(y_{t}, x_{t}\right)$ in response to the various shocks.
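Since $\Psi_s = \Phi_1^s$ for a VAR(1), the reduced-form impulse responses are just matrix powers. A sketch with the same hypothetical stationary coefficient matrix used above:

```python
import numpy as np

Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.6]])   # hypothetical stationary VAR(1)

horizons = 11
Psi = [np.linalg.matrix_power(Phi1, s) for s in range(horizons)]

# psi_12(s) = d y_t / d a_{2,t-s}: response of y to a one-unit shock in a_2
psi_12 = np.array([P[0, 1] for P in Psi])
# psi_12(0) = 0, psi_12(1) = 0.2, then the response dies out geometrically
```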
The impulse response functions of the structural shocks: suppose we can further identify $a_{t}=B_{0}^{-1} \epsilon_{t}$, where $\epsilon_t$ is defined in the structural VAR. Then
$$
z_{t}=\mu+\sum_{s=0}^{\infty} \Phi_{1}^{s} B_{0}^{-1} \epsilon_{t-s} . \tag{13}
$$
Denote
$$
\Pi_{s}=\Phi_{1}^{s} B_{0}^{-1}=\left[\begin{array}{ll}\pi_{11}(s) & \pi_{12}(s) \\ \pi_{21}(s) & \pi_{22}(s)\end{array}\right] .
$$
- $\pi_{11}(s)=\frac{\partial y_{t}}{\partial \epsilon_{y, t-s}}, \quad \pi_{12}(s)=\frac{\partial y_{t}}{\partial \epsilon_{x, t-s}}$.
- $\pi_{21}(s)=\frac{\partial x_{t}}{\partial \epsilon_{y, t-s}}, \quad \pi_{22}(s)=\frac{\partial x_{t}}{\partial \epsilon_{x, t-s}}$.
- The four sets of coefficients $\pi_{11}(s), \pi_{12}(s), \pi_{21}(s), \pi_{22}(s)$ are the impulse response functions of the structural shocks.
Forecast Error Variance Decomposition
Consider the $j$-step-ahead forecast using the $\operatorname{VMA}(\infty)$ representation of the structural model, Eq. (13):
- $z_{t+j}=\mu+\sum_{s=0}^{\infty} \Pi_{s} \epsilon_{t+j-s}$
- $E\left[z_{t+j} \mid \mathcal{F}_{t}\right]=\mu+\sum_{s=j}^{\infty} \Pi_{s} \epsilon_{t+j-s}$
- $j$-step-ahead forecast error: $e_{t}(j)=\sum_{s=0}^{j-1} \Pi_{s} \epsilon_{t+j-s}$

Therefore,
- the $j$-step-ahead forecast error for $y_{t+j}$ is
$$
\sum_{s=0}^{j-1} \pi_{11}(s) \epsilon_{y, t+j-s}+\sum_{s=0}^{j-1} \pi_{12}(s) \epsilon_{x, t+j-s} ;
$$
- the $j$-step-ahead forecast error variance of $y_{t+j}$ is
$$
\sigma_{y}^{2}(j)=\sigma_{y}^{2} \sum_{s=0}^{j-1} \pi_{11}^{2}(s)+\sigma_{x}^{2} \sum_{s=0}^{j-1} \pi_{12}^{2}(s) .
$$
We could decompose the j-step-ahead forecast error variance into the proportions due to each structural shock:
- $\frac{\sigma_{y}^{2} \sum_{s=0}^{j-1} \pi_{11}^{2}(s)}{\sigma_{y}^{2}(j)}$ is due to shocks in the $\left\{\epsilon_{y t}\right\}$ sequence.
- $\frac{\sigma_{x}^{2} \sum_{s=0}^{j-1} \pi_{12}^{2}(s)}{\sigma_{y}^{2}(j)}$ is due to shocks in the $\left\{\epsilon_{x t}\right\}$ sequence.
The j-step-ahead forecast error variance of x t + j x_{t+j} xt+j could be analyzed similarly.
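The decomposition can be sketched numerically. All parameter values here are hypothetical (a recursive ordering with $b_{12}^{(0)}=0$, so $B_0^{-1}$ is unit lower triangular); the two variance shares must sum to one by construction.

```python
import numpy as np

# Hypothetical structural objects
Phi1 = np.array([[0.5, 0.2],
                 [0.1, 0.6]])
B0_inv = np.array([[1.0, 0.0],
                   [0.3, 1.0]])          # recursive ordering: b12^(0) = 0
sig_y2, sig_x2 = 1.0, 2.0                # structural shock variances

j = 8
Pi = [np.linalg.matrix_power(Phi1, s) @ B0_inv for s in range(j)]

# j-step-ahead forecast error variance of y and its decomposition
var_from_eps_y = sig_y2 * sum(P[0, 0]**2 for P in Pi)
var_from_eps_x = sig_x2 * sum(P[0, 1]**2 for P in Pi)
sigma_y2_j = var_from_eps_y + var_from_eps_x

share_y = var_from_eps_y / sigma_y2_j    # proportion due to eps_y shocks
share_x = var_from_eps_x / sigma_y2_j    # proportion due to eps_x shocks
```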
k-dimensional VAR(p) Models
The $k$-variable, $p$-lag vector autoregressive model has the form
$$
z_{t}=\Phi_{0}+\Phi_{1} z_{t-1}+\cdots+\Phi_{p} z_{t-p}+a_{t}, \quad p \geq 1,
$$
where $\Phi_0$ is a $k$-dimensional vector, the $\Phi_j$ are $k \times k$ matrices, and $\{a_t\}$ is a sequence of serially uncorrelated random vectors with mean zero and covariance matrix $\Sigma$.
- There are $k^2 p$ coefficients plus $k$ intercept terms.
- Using the lag operator:
$$
\left(I_{k}-\Phi_{1} L-\cdots-\Phi_{p} L^{p}\right) z_{t}=\Phi_{0}+a_{t},
$$
where $I_k$ is the $k \times k$ identity matrix.
- ==Stationarity condition:== the roots of $\operatorname{det}\left(I_{k}-\Phi_{1} \lambda-\cdots-\Phi_{p} \lambda^{p}\right)=0$ lie outside the unit circle.
Discussion:
- The variables included in a VAR model can be chosen on the basis of relevant economic or financial theory, so that they help forecast one another.
- To capture the important information in the system, over-parameterization and the loss of degrees of freedom must be avoided.
- If the lag length is too small, the model is mis-specified; if it is too large, degrees of freedom are lost.
VAR Models: Order Specification
- Fit $\operatorname{VAR}(p)$ models with orders $p=0,1, \cdots, p_{\max}$ and choose the value of $p$ that minimizes some model selection criterion.
- Under the $\operatorname{VAR}(p)$ model, the residuals are $\widehat{a}_{t}^{(p)}$.
- The ML estimator of $\Sigma$ is $\widehat{\Sigma}_{p}=\frac{1}{T} \sum_{t=p+1}^{T} \widehat{a}_{t}^{(p)}\left[\widehat{a}_{t}^{(p)}\right]^{\prime}$.
$$
\begin{aligned}
&\operatorname{AIC}(p)=\log \left(\left|\widehat{\Sigma}_{p}\right|\right)+\frac{2\left(k^{2} p+k\right)}{T} \\
&\operatorname{BIC}(p)=\log \left(\left|\widehat{\Sigma}_{p}\right|\right)+\frac{\left(k^{2} p+k\right) \log (T)}{T}
\end{aligned}
$$
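The selection procedure can be sketched with equation-by-equation OLS in numpy (a simplification: candidate orders start at $p=1$ rather than $p=0$, and the simulated VAR(1) data are illustrative):

```python
import numpy as np

def fit_var_ols(z, p):
    """OLS for z_t = Phi0 + sum_i Phi_i z_{t-i} + a_t; returns Sigma_hat_p (divide by T)."""
    T, k = z.shape
    Y = z[p:]
    X = np.hstack([np.ones((T - p, 1))] + [z[p - i:T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    return resid.T @ resid / T           # ML-style estimator of Sigma

def var_aic(z, p_max):
    """AIC(p) = log|Sigma_hat_p| + 2(k^2 p + k)/T for p = 1..p_max."""
    T, k = z.shape
    aics = [np.log(np.linalg.det(fit_var_ols(z, p))) + 2 * (k**2 * p + k) / T
            for p in range(1, p_max + 1)]
    return int(np.argmin(aics)) + 1, aics

# simulate a VAR(1) and score candidate orders
rng = np.random.default_rng(2)
Phi1 = np.array([[0.5, 0.2], [0.1, 0.6]])
T = 1000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = Phi1 @ z[t - 1] + rng.standard_normal(2)

p_star, aics = var_aic(z, 5)
```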
Granger Causality Tests
One of the main uses of VAR models is forecasting.
The following intuitive notion of a variable’s forecasting ability is due to Granger (1969).
- If $z_2$ does not improve the forecasting performance of $z_1$, then $z_2$ does not Granger-cause $z_1$.
- If $z_2$ improves the forecasting accuracy of $z_1$, then $z_2$ is said to Granger-cause $z_1$.
- The notion of Granger causality does not imply true causality; it only implies forecasting ability.
In practice:
- In a bivariate $\operatorname{VAR}(p)$ model, $z_2$ fails to Granger-cause $z_1$ if all of the $p$ VAR coefficient matrices $\Phi_{1}, \cdots, \Phi_{p}$ are lower triangular:
$$
\begin{aligned}
\left(\begin{array}{l} z_{1 t} \\ z_{2 t} \end{array}\right) &=\left(\begin{array}{l} \phi_{10} \\ \phi_{20} \end{array}\right)+\left(\begin{array}{cc} \phi_{11}^{1} & 0 \\ \phi_{21}^{1} & \phi_{22}^{1} \end{array}\right)\left(\begin{array}{c} z_{1, t-1} \\ z_{2, t-1} \end{array}\right)+\cdots \\
&+\left(\begin{array}{cc} \phi_{11}^{p} & 0 \\ \phi_{21}^{p} & \phi_{22}^{p} \end{array}\right)\left(\begin{array}{c} z_{1, t-p} \\ z_{2, t-p} \end{array}\right)+\left(\begin{array}{c} \epsilon_{1 t} \\ \epsilon_{2 t} \end{array}\right)
\end{aligned}
$$
- If $z_2$ fails to Granger-cause $z_1$ and $z_1$ fails to Granger-cause $z_2$, then the VAR coefficient matrices $\Phi_{1}, \cdots, \Phi_{p}$ are diagonal, and we can model $z_1$ and $z_2$ separately.
In the bivariate model, testing $H_0$: $z_2$ does not Granger-cause $z_1$ reduces to testing $H_{0}: \phi_{12}^{1}=\phi_{12}^{2}=\cdots=\phi_{12}^{p}=0$ in the linear regression
$$
z_{1 t}=\phi_{10}+\phi_{11}^{1} z_{1, t-1}+\cdots+\phi_{11}^{p} z_{1, t-p}+\phi_{12}^{1} z_{2, t-1}+\cdots+\phi_{12}^{p} z_{2, t-p}+\epsilon_{1 t} .
$$
The test statistic is a simple F-statistic or Wald statistic.
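The F-test can be sketched as a restricted-vs-unrestricted OLS comparison. The data-generating process here is hypothetical: $x$ Granger-causes $y$ (through $0.4\,x_{t-1}$) but not vice versa, so the first F-statistic should be large and the second modest.

```python
import numpy as np

def granger_f_test(z1, z2, p):
    """F-test of H0: z2 does not Granger-cause z1, via restricted vs unrestricted OLS."""
    T = len(z1)
    Y = z1[p:]
    lags1 = np.column_stack([z1[p - i:T - i] for i in range(1, p + 1)])
    lags2 = np.column_stack([z2[p - i:T - i] for i in range(1, p + 1)])
    Xu = np.hstack([np.ones((T - p, 1)), lags1, lags2])   # unrestricted
    Xr = np.hstack([np.ones((T - p, 1)), lags1])          # restricted: phi_12^i = 0
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0])**2)
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0])**2)
    df = (T - p) - Xu.shape[1]
    return ((ssr_r - ssr_u) / p) / (ssr_u / df)           # ~ F(p, df) under H0

rng = np.random.default_rng(3)
T = 1000
x = np.zeros(T); y = np.zeros(T)
e = rng.standard_normal((T, 2))
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + e[t, 0]
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + e[t, 1]   # x Granger-causes y

F_x_to_y = granger_f_test(y, x, p=2)   # large: reject H0
F_y_to_x = granger_f_test(x, y, p=2)   # small: fail to reject H0
```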
Block Exogeneity
The block-exogeneity test is the multivariate generalization of the Granger causality test. For example, in a trivariate model with $y_t$, $x_t$ and $w_t$:
- one test is whether lags of $w_t$ Granger-cause either $y_t$ or $x_t$;
- another is whether lags of $w_t$ and $x_t$ Granger-cause $y_t$.
- EViews commands: Estimate VAR → View/Lag Structure/Granger Causality/Block Exogeneity Tests.

The block-exogeneity tests can be done using a Wald test or the likelihood ratio test.
Lag Exclusion Tests
The lag exclusion test is carried out for each lag in the VAR. For example, in a bivariate $\operatorname{VAR}(5)$ model with $y_t$ and $x_t$:
- one test is whether the first lag of $y_t$ and $x_t$ should be excluded from each equation;
- another is whether the second lag of $y_t$ and $x_t$ should be excluded from each equation;
- $\cdots$
- EViews commands: Estimate VAR → View/Lag Structure/Lag Exclusion Tests.

The lag exclusion tests are done using a Wald test in EViews.
Testing for Serial Dependence?
The $Q_k(m)$ statistic can be applied to the residual series to check the assumption that there are no serial or cross correlations in the residuals (i.e., whether the residuals are white noise and uncorrelated with the other series).
For a fitted $\operatorname{VAR}(p)$ model, the $Q_k(m)$ statistic of the residuals is asymptotically $\chi^{2}\left(k^{2} m-g\right)$, where $g$ is the number of estimated parameters in the VAR coefficient matrices.
Multivariate Portmanteau Tests / Ljung–Box statistic $Q(m)$:
- $H_{0}: \rho_{1}=\cdots=\rho_{m}=0$ vs. $H_{a}: \rho_{i} \neq 0$ for some $i$.
- Under the null hypothesis and some regularity conditions,
$$
Q_{k}(m)=T^{2} \sum_{l=1}^{m} \frac{1}{T-l} \operatorname{tr}\left(\widehat{\Gamma}_{l}^{\prime} \widehat{\Gamma}_{0}^{-1} \widehat{\Gamma}_{l} \widehat{\Gamma}_{0}^{-1}\right) \stackrel{\mathcal{D}}{\longrightarrow} \chi^{2}\left(k^{2} m\right),
$$
where $k$ is the dimension of $z_t$ and $\operatorname{tr}$ denotes the trace (the sum of the diagonal elements).
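A sketch of the statistic applied to simulated white-noise "residuals" (so the value should be an unremarkable draw from roughly a $\chi^2(k^2 m) = \chi^2(24)$ distribution):

```python
import numpy as np

def portmanteau_Q(resid, m):
    """Multivariate Ljung-Box: Q_k(m) = T^2 sum_{l=1}^m tr(G_l' G_0^{-1} G_l G_0^{-1}) / (T - l)."""
    T, k = resid.shape
    a = resid - resid.mean(axis=0)
    G0 = a.T @ a / T
    G0_inv = np.linalg.inv(G0)
    Q = 0.0
    for l in range(1, m + 1):
        Gl = a[l:].T @ a[:T - l] / T          # sample lag-l cross-covariance
        Q += np.trace(Gl.T @ G0_inv @ Gl @ G0_inv) / (T - l)
    return T**2 * Q

rng = np.random.default_rng(4)
resid = rng.standard_normal((1000, 2))        # white-noise "residuals"
Q = portmanteau_Q(resid, m=6)                 # roughly chi^2 with k^2 m = 24 df
```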
A Revisit of the Motivating Example
- Estimate the VAR and choose the lag length: Model 1.
- Granger causality tests / block exogeneity tests: the two series are indeed related, so a VAR is needed.
- Lag exclusion tests.
- Obtain Model 2 with VAR restrictions; continue the lag exclusion tests until all coefficients are significant.
- When some coefficients are insignificant, also run a joint test that they are all zero, using the Wald coefficient test.
- Model checking: residual tests.
- Examine the impulse response functions.