Dig into Estimation of VAR Coefficients, IRFs, and Variance Decomposition in Stata

Motivation

While Stata's integrated commands (e.g., var and svar) readily produce abundant output from a VAR estimation, many users cannot confidently interpret these results without knowing how they are theoretically defined and practically calculated. Manually replicating the output of Stata's integrated VAR commands is a very helpful way to resolve this issue.

In this blog, I first provide the theoretical definitions and calculation formulas of the reduced-form coefficients, IRFs, structural IRFs, orthogonalized IRFs, and the forecast error variance decomposition. I then follow these definitions to compute the same outputs manually in Stata. Finally, I compare the manually computed results with the output generated by Stata's integrated commands (see the last blog) to cross-check the validity of my manual computation.

By doing so, I hope this blog provides precise insight into how the VAR outputs are produced in Stata and helps readers use these results confidently in their own research.

For readers with time constraints, all the code used in this blog can be accessed via this link.

This blog is the last one of my three-blog series on the VAR model. The first blog shows the basic logic of the VAR model with the simplest two-variable, one-lag case, and the second blog shows how to use the var and svar commands to conveniently estimate a VAR model in Stata.

Set Benchmark

To get reliable benchmarks, I use the integrated command svar to generate all the major outputs of the VAR estimation, with the same dataset and lag order as in the last blog.

The code is as follows. See the last blog for more explanation of these commands.

**#  generate benchmark
use varsample.dta, clear
tsset yq
matrix A1 = (1,0,0 \ .,1,0 \ .,.,1)
matrix B1 = (.,0,0 \ 0,.,0 \ 0,0,.)
svar inflation unrate ffr, lags(1/7) aeq(A1) beq(B1)
irf create forblog, step(15) set(myirf) replace
mat sigma_e_bench = e(Sigma)
mat B_bench = e(A)
mat beta_bench = e(b_var)

Theoretical Definitions and Calculation Formulas

Define the VAR system

The starting point is a structural VAR system with $k$ variables and $p$ lags.
$$Bx_t=\kappa+\sum_{i=1}^{p}\Gamma_i x_{t-i}+\epsilon_t \tag{1}$$
Premultiplying Equation (1) with the matrix $B^{-1}$, one gets the reduced-form VAR.
$$x_t=\nu+\sum_{i=1}^{p}A_i x_{t-i}+e_t \tag{2}$$
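Spelling out the mapping between the structural and reduced-form parameters (it follows directly from the premultiplication by $B^{-1}$; the last relation is used again below when the structural IRFs are derived):
$$\nu = B^{-1}\kappa, \qquad A_i = B^{-1}\Gamma_i \ (i=1,\dots,p), \qquad e_t = B^{-1}\epsilon_t$$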

Calculate reduced-form coefficients $A_i\ (i=1,2,\dots,p)$

As OLS estimation is proved to be consistent and asymptotically efficient for the reduced form (e.g., Enders, 2004), the reduced-form coefficients $A_i\ (i=1,2,\dots,p)$ are estimated directly by OLS.

That is, if we stack $x_t$ into a matrix $Y$ and the lagged terms $x_{t-i}\ (i=1,2,\dots,p)$ into a matrix $X$, the reduced-form coefficient matrices $A_i\ (i=1,2,\dots,p)$ can be recovered from the OLS coefficients
$$\beta = (X'X)^{-1}(X'Y) \tag{3}$$
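For concreteness, with $k$ variables, $p$ lags, and $T$ raw observations, there are $T-p$ usable rows, and the stacked regression behind Equation (3) has the following dimensions (the extra column of $X$ holds the constant):
$$\underset{(T-p)\times k}{Y} \;=\; \underset{(T-p)\times (kp+1)}{X}\,\underset{(kp+1)\times k}{\beta} \;+\; \underset{(T-p)\times k}{e}$$
With $k=3$ and $p=7$ as in the application below, $\beta$ is a $22\times 3$ matrix.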

Calculate Impulse Response Coefficients $\Phi(L)$

Following Lütkepohl (2007, pp. 22-23), the reduced-form VAR system defined in Equation (2) can be written in the following Vector Moving Average (VMA) representation.
$$x_t = \mu+\Phi(L)e_t = \mu+\sum_{i=0}^{\infty}\Phi_ie_{t-i} \tag{4}$$
The derivation goes as follows. First, rewrite the lag terms with the lag operator as $\left(A_1 L+\cdots+A_p L^p\right) x_t$, then move them to the left-hand side of the equation to get
$$A(L)x_t = \nu+e_t \tag{5}$$
where
$$A(L)=I_k-A_1 L-\cdots-A_p L^p$$
Let
$$\Phi(L)=\sum_{i=0}^{\infty} \Phi_i L^i$$
be an operator such that
$$\Phi(L)A(L)=I_k$$
where $k$ is the number of variables in the VAR system.

Premultiplying Equation (5) by $\Phi(L)$ and denoting the mean term $\Phi(L)\nu$ by $\mu$, we get the VMA representation of the VAR system in Equation (6).
$$x_t = \Phi(L)\nu+\Phi(L)e_t = \mu+\sum_{i=0}^{\infty}\Phi_ie_{t-i} \tag{6}$$
Intuitively, the coefficients of the VMA representation $\Phi_i\ (i=1,2,\dots,\infty)$ capture how a unit shock to $e_{t-i}$ influences the variables of interest $x_t$, which is exactly the definition of the Impulse Response Functions (IRFs).
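Equivalently, stated as a partial derivative (a standard way to express this definition), the $(j,s)$ element of $\Phi_i$ measures the response of variable $j$ after $i$ periods to a one-unit reduced-form innovation in variable $s$:
$$\frac{\partial x_{t+i}}{\partial e_t'} = \Phi_i$$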

Lütkepohl (2007, pp. 22-23) proves that the $\Phi_i$, i.e., the IRFs, can be computed recursively using
$$\Phi_0 = I_k, \qquad \Phi_i = \sum_{j=1}^{i}\Phi_{i-j}A_j \tag{7}$$
where $A_j\ (j=1,\dots,p)$ are the coefficients on the $j$-th lag in the reduced-form VAR system, which we have estimated through OLS, with $A_j = 0$ for $j>p$.
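For example, writing out the first few terms of the recursion in Equation (7) makes the pattern clear:
$$\Phi_1 = \Phi_0 A_1 = A_1, \qquad \Phi_2 = \Phi_1 A_1 + \Phi_0 A_2 = A_1^2 + A_2, \qquad \Phi_3 = \Phi_2 A_1 + \Phi_1 A_2 + \Phi_0 A_3$$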

Calculate Orthogonalized IRFs $\Theta(L)$

Typically, there are correlations among the reduced-form innovations, which makes it hard to separate the impact of one innovation from another. In such cases, people usually apply a Cholesky factorization to the variance-covariance matrix $\Sigma_e$, which yields
$$\Sigma_e = PP'$$
where $P$ is a lower triangular matrix.
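As a small worked example of what the Cholesky factor looks like, take a two-variable system with residual variances $\sigma_1^2$, $\sigma_2^2$ and covariance $\sigma_{12}$. Solving $\Sigma_e = PP'$ for a lower triangular $P$ gives
$$\Sigma_e=\begin{pmatrix}\sigma_1^2 & \sigma_{12}\\ \sigma_{12} & \sigma_2^2\end{pmatrix}=\begin{pmatrix}\sigma_1 & 0\\ \sigma_{12}/\sigma_1 & \sqrt{\sigma_2^2-\sigma_{12}^2/\sigma_1^2}\end{pmatrix}\begin{pmatrix}\sigma_1 & \sigma_{12}/\sigma_1\\ 0 & \sqrt{\sigma_2^2-\sigma_{12}^2/\sigma_1^2}\end{pmatrix}=PP'$$
In Stata, this factor is exactly what the matrix function cholesky() returns, as used in the code further below.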

Following Lütkepohl (2007, p. 46), the VMA representation of the VAR in Equation (6) can be rewritten as
$$x_t=\mu+\sum_{i=0}^{\infty} \Phi_i P P^{-1} e_{t-i}=\mu+\sum_{i=0}^{\infty} \Theta_i w_{t-i} \tag{8}$$
where $\Theta_i=\Phi_i P$ and $w_t=P^{-1} e_t$. The following equation shows that $w_t$ is white noise whose covariance matrix is the identity matrix $I_k$, which implies that the components of $w_t$ are uncorrelated. For this reason, the $w_t$ are also called orthogonal innovations.
$$\Sigma_w=P^{-1} \Sigma_e\left(P^{-1}\right)' = I_k \tag{9}$$
Correspondingly, the new IRF coefficients $\Theta_i=\Phi_i P$, which operate on the orthogonal innovations $w_t$, are called orthogonalized IRFs. In other words, the orthogonalized IRFs are given by
$$\Theta_i=\Phi_i P \tag{10}$$
where $\Phi_i\ (i=1,2,\dots,\infty)$ are the IRFs and $P$ is the Cholesky factor of the variance-covariance matrix $\Sigma_e$ such that $\Sigma_e = PP'$.

Clearly, calculating $P$ requires the covariance matrix of the residuals $\Sigma_e$.

Recall that we have put $x_t$ into a matrix $Y$ and the lags $x_{t-i}\ (i=1,2,\dots,p)$ into a matrix $X$, and that the stacked OLS coefficient matrix $\beta$ was estimated in Equation (3). The residuals of the VAR estimation $e$ are then, by definition,
$$e = Y-X\beta \tag{11}$$
The variance-covariance matrix of the residuals $\Sigma_e$ can be calculated as
$$\Sigma_e =E[(e-\bar{e})(e-\bar{e})'] \tag{12}$$
where, in the sample, the expectation is replaced by the average over the $T-p$ usable residual observations, as detailed in the computation section below.

Calculate (Orthogonalized) Structural IRFs $\Psi(L)$

The structural IRFs by definition capture the impact of a one-unit structural shock $\epsilon_t$ on the variables of interest $x_t$. Recalling that $e_t = B^{-1}\epsilon_t$, one can conveniently compute the structural IRFs by plugging this into Equation (6).
$$x_t =\mu+\Phi(L)e_t =\mu+\Phi(L)B^{-1}\epsilon_t=\mu+\Lambda(L)\epsilon_t \tag{13}$$
One can infer directly from Equation (13) that the structural IRFs (SIRFs) are the IRFs $\Phi_i$ postmultiplied by $B^{-1}$:
$$\Lambda_i=\Phi_iB^{-1}$$
In particular, $\Lambda_0 = B^{-1}$ is the contemporaneous structural impact matrix.

Clearly, the matrix $B$ is the prerequisite for estimating the structural IRFs $\Lambda(L)$.

Given that we specify $B$ as a $k\times k$ lower triangular matrix with ones on the diagonal (this is the aeq(A1) constraint imposed in the benchmark svar command), we can figure out the unresolved elements $B_{ij}$ from the relations among the reduced-form residuals.
$$B=\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ B_{21} & 1 & 0 & \cdots & 0 \\ B_{31} & B_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ B_{k1} & B_{k2} & B_{k3} & \cdots & 1 \end{pmatrix}$$
In particular, we can start from $\epsilon_t = Be_t$, which implies
$$\begin{pmatrix}\epsilon_{1t}\\ \epsilon_{2t}\\ \vdots\\ \epsilon_{kt}\end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ B_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ B_{k1} & B_{k2} & \cdots & 1 \end{pmatrix}\begin{pmatrix}e_{1t}\\ e_{2t}\\ \vdots\\ e_{kt}\end{pmatrix}$$
Written out equation by equation,
$$\begin{aligned}e_{1t} &= \epsilon_{1t}\\ e_{2t} &= \epsilon_{2t}- B_{21}e_{1t}\\ e_{3t} &= \epsilon_{3t}- B_{31}e_{1t}-B_{32}e_{2t}\\ &\ \ \vdots\\ e_{kt} &= \epsilon_{kt}-B_{k1}e_{1t}-B_{k2}e_{2t}-\cdots-B_{k,k-1}e_{k-1,t} \end{aligned}$$
Note that the structural shocks $\epsilon_{it}$ play the role of the error terms in the equations above. That is, by running these regressions on the reduced-form residuals $e_{it}\ (i=1,\dots,k)$, one can obtain all the unresolved elements of the matrix $B$. For example, regressing $e_{kt}$ on $e_{1t}$ through $e_{k-1,t}$ yields coefficients equal to $-B_{k1},\dots,-B_{k,k-1}$; flipping their signs gives the components of the last row of $B$.

While people usually assume that the structural shocks $\epsilon_t$ are orthogonal unit impulses, this is not always the case in the data. Most of the time, even when there is little contemporaneous correlation among the structural shocks, they do have non-unit variances. In that case, people typically standardize the structural IRFs to unit shocks by introducing a factorization of the covariance matrix of the structural shocks $\Sigma_\epsilon$, just as we did to get the orthogonalized IRFs in the previous subsection.

The covariance matrix of the structural shocks $\Sigma_\epsilon$ can be computed from $e_t = B^{-1}\epsilon_t$:
$$\Sigma_\epsilon = E(\epsilon_t\epsilon_t')=E(Be_te_t'B')=B\Sigma_eB'$$
We decompose $\Sigma_\epsilon$ such that
$$\Sigma_\epsilon = P_1P_1'$$
The orthogonalized structural IRFs are then
$$\Psi_i=\Lambda_i P_1=\Phi_iB^{-1}P_1 \tag{14}$$
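One remark that helps interpret the results later: if the structural shocks are mutually uncorrelated, $\Sigma_\epsilon$ is diagonal, so its Cholesky factor simply collects the standard deviations of the shocks,
$$P_1 = \operatorname{diag}(\sigma_{\epsilon_1},\dots,\sigma_{\epsilon_k}),$$
and Equation (14) just rescales each column of $\Lambda_i$ by the corresponding shock's standard deviation. This is consistent with the numerically near-diagonal $P_1$ obtained from the sample below.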

Calculate Forecast Error Variance Decomposition (FEVD)

Following Lütkepohl (2007, pp. 63-64), the error of the optimal $h$-step forecast can be derived from the VMA representation in Equation (6).
$$\begin{aligned} x_{t+h}-x_t(h) & =\sum_{i=0}^{h-1} \Phi_i e_{t+h-i}\\ & =\sum_{i=0}^{h-1} \Phi_i P P^{-1} e_{t+h-i} \\ & =\sum_{i=0}^{h-1} \Theta_i w_{t+h-i} \end{aligned} \tag{15}$$
Since $w_t$ is white noise with the identity matrix $I_k$ as its covariance matrix, the variance of each component of $w_t$ is 1. Denoting the $(j,s)$ element of the $i$-step orthogonalized IRF matrix $\Theta_i$ as $\theta_{js,i}$, the $h$-step forecast Mean Squared Error (MSE) of variable $j$ can be written as the sum of the squared orthogonalized IRFs.
$$\operatorname{MSE}\left[x_{j, t}(h)\right]=\sum_{i=0}^{h-1} \sum_{s=1}^k \theta_{j s, i}^2 \tag{16}$$
where $\theta_{js,i}$ is the orthogonalized IRF of the innovation in variable $s$ on variable $j$ at lag $i$, and $k$ is the number of variables in the VAR system.

The $h$-step variance contribution of the innovations in variable $s$ to variable $j$ is by definition
$$\Omega_{js,h} = \frac{\sum_{i=0}^{h-1} \theta_{j s, i}^2}{\sum_{i=0}^{h-1} \sum_{s=1}^k \theta_{j s, i}^2} \tag{17}$$
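As a quick sanity check of Equation (17), consider $h=1$ and the first variable in the Cholesky ordering. Since $\Theta_0 = P$ is lower triangular, $\theta_{1s,0}=0$ for all $s>1$, so
$$\Omega_{11,1} = \frac{\theta_{11,0}^2}{\sum_{s=1}^{k}\theta_{1s,0}^2} = \frac{\theta_{11,0}^2}{\theta_{11,0}^2} = 1,$$
i.e., at the one-step horizon the forecast error variance of the first variable is attributed entirely to its own shock. This is exactly what the manually computed FEVD table shows for inflation at step 1 later in this blog.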

Summary of formulas

For a structural VAR system with $k$ variables and $p$ lags:

  • The reduced-form coefficients $A_i\ (i=1,2,\dots,p)$ are the coefficients of the OLS estimation of the reduced-form VAR.
  • The impulse response coefficients $\Phi(L)$ can be estimated with Equation (7). The prerequisite is the reduced-form coefficients $A_i\ (i=1,2,\dots,p)$.
  • The orthogonalized IRFs $\Theta(L)$ are obtained by postmultiplying the IRF matrices $\Phi(L)$ by $P$, where $\Sigma_e = PP'$. The prerequisites are the IRFs $\Phi(L)$ and the variance-covariance matrix of the residuals $\Sigma_e$.
  • The orthogonalized structural IRFs $\Psi(L)$ are obtained by postmultiplying the IRF matrices $\Phi(L)$ by $B^{-1}P_1$. The prerequisites are the IRFs $\Phi(L)$, the contemporaneous effect matrix $B$, and the covariance matrix of the structural shocks $\Sigma_\epsilon$ such that $\Sigma_\epsilon=P_1P_1'$.
  • The forecast error variance decomposition $\Omega_{js,h}$ is obtained by summing the squared orthogonalized IRFs of impulse $s$ on equation $j$ from step 0 to step $h-1$ and standardizing by the mean squared forecast error, which can also be calculated from the orthogonalized IRFs $\Theta(L)$. The only prerequisite is the orthogonalized IRFs $\Theta(L)$.

Manually Compute the VAR outputs in Stata

Import Data and define global variables

To stay consistent with the benchmark, where all of these outputs are produced with the integrated svar command, I use the same sample dataset and the same lag order of 7.

That is, the VAR system we will replicate has 3 variables and 7 lags. The model is as follows.
$$\begin{pmatrix}\text{inflation}_t\\ \text{unrate}_t\\ \text{ffr}_t\end{pmatrix} = A_0 + A_1\begin{pmatrix}\text{inflation}_{t-1}\\ \text{unrate}_{t-1}\\ \text{ffr}_{t-1}\end{pmatrix} + \cdots + A_7\begin{pmatrix}\text{inflation}_{t-7}\\ \text{unrate}_{t-7}\\ \text{ffr}_{t-7}\end{pmatrix} + e_t$$
The code for this step is below.

* prepare data
use varsample.dta, clear
tsset yq

* define global variables
global names "inflation unrate ffr"
global lagorder 7
global numnames 3

Compute reduced-form coefficients $A_i\ (i=1,2,\dots,p)$

To compute the reduced-form coefficients $A_i\ (i=1,2,\dots,p)$, we put $x_t$ into a matrix $Y$ and the lags $x_{t-i}\ (i=1,2,\dots,p)$ into a matrix $X$; the reduced-form coefficients can then be generated from the OLS formula
$$\beta = (X'X)^{-1}(X'Y)$$
The code is as follows. I first generate lags 1 to $p$ of each variable in the VAR system, then put them all together into a matrix $X$ and put the contemporaneous variables into a matrix $Y$. The OLS coefficients are stored in the matrix beta, which is a $22\times 3$ matrix, where $22 = 3\times 7 + 1$.

* generate lag variables 
foreach var in $names {
	forvalues j = 1/$lagorder {
		cap g l`j'`var' = l`j'.`var'
	}
}

* put x and y of the reduced-form VAR into matrices
mkmat $names, matrix(Y)
mat Y = Y[$lagorder+1..rowsof(Y), 1..colsof(Y)]
mkmat l*inflation l*unrate l*ffr, matrix(X) nomiss
mat X = (X, J(rowsof(X), 1, 1))

* estimate the OLS coefficients of the reduced-form VAR
mat beta = inv(X'*X)*(X'*Y)

The manually computed reduced-form coefficient matrix beta is as follows.

. mat list beta

beta[22,3]
              inflation      unrate         ffr
l1inflation   1.1624496  -1.2311559  -4.7074491
l2inflation  -.38442299   3.4434416    14.62629
l3inflation   .33067581  -.10661715  -13.797021
l4inflation  -.19803137   1.2593467   18.506895
l5inflation   .25709458  -2.6615204  -8.1949518
l6inflation  -.08613706   .80816382   14.682903
l7inflation  -.07969876  -1.4801693  -22.071136
   l1unrate  -.00704289   1.3298207  -2.0448181
   l2unrate   .00719199   .00238958   1.5423312
   l3unrate   .00217432  -.34914135   .45998626
   l4unrate   .00246865  -.18703379   .28127488
   l5unrate  -.00964455   .26099715   .38580282
   l6unrate   .00855645  -.08923544  -1.1106103
   l7unrate  -.00384815   -.0158381   .44636919
      l1ffr    .0000775    .0391868   .28705469
      l2ffr   .00092119    .0133781   .17792134
      l3ffr   .00073382   -.0020393   .44964119
      l4ffr   .00051456  -.04211502    .2517726
      l5ffr  -.00075606  -.01041068   .05375644
      l6ffr  -.00027879  -.00274284  -.16801173
      l7ffr  -.00085278   .01797453  -.22845819
         c1   .00707448   .12277881   1.0777927

As we are estimating a VAR system with 3 variables and 7 lags, we need to derive the coefficient matrices $A_i\ (i=1,2,\dots,p)$ from $\beta$. Each $A_i$ is a $3\times 3$ matrix whose $(j,s)$ element is the coefficient on the $i$-th lag of variable $s$ in the equation for variable $j$.

* reshape beta to generate reduced-form VAR coefficient matrix a1-ap
forvalues i=1/$lagorder{
	mat A`i' = (beta["l`i'inflation", 1..3]\beta["l`i'unrate", 1..3]\beta["l`i'ffr", 1..3])'
}

I list the lag-3 coefficient matrix $A_3$ as an example. One can easily verify its consistency with the coefficient matrix beta.

. mat list A3

A3[3,3]
           l3inflation     l3unrate        l3ffr
inflation    .33067581    .00217432    .00073382
   unrate   -.10661715   -.34914135    -.0020393
      ffr   -13.797021    .45998626    .44964119

Compute and decompose the covariance matrices $\Sigma_e$ and $\Sigma_\epsilon$

The reduced-form residuals $e$ are given by
$$e = Y-X\beta$$
Its covariance matrix is
$$\Sigma_e =\frac{(e-\bar{e})(e-\bar{e})'}{n-p}$$
where $n$ is the number of observations in the dataset and $p$ is the specified lag order, so $n-p$ is the number of usable residual observations.

The covariance matrix of the structural shocks $\epsilon_t$ is
$$\Sigma_\epsilon = B\Sigma_eB'$$
I decompose the covariance matrices $\Sigma_e$ and $\Sigma_\epsilon$ such that
$$\Sigma_e=PP', \qquad \Sigma_\epsilon=P_1P_1'$$
The code is as follows. Note that I save the reduced-form residuals of the inflation, unrate, and ffr equations as e1, e2, and e3, respectively. Also note that the last three lines use the matrix $B$, which is constructed in the next subsection, so that step has to be run before them.

* compute sigma_e
mat e=Y-X*beta
mat e=J($lagorder,$numnames,.) \ e
svmat e
mat accum sigma_e = e1 e2 e3, deviations noconstant
mat sigma_e = sigma_e/(_N-$lagorder)
mat list sigma_e

* decompose sigma_e
mat P = cholesky(sigma_e)
mat list P

* compute and decompose sigma_epsilon (requires matrix B from the next subsection)
mat sigma_epsilon = B*sigma_e*B'
mat P1 = cholesky(sigma_epsilon)
mat list P1

The decomposition results for $\Sigma_e$ and $\Sigma_\epsilon$ are as follows. As expected, both are lower triangular matrices.

. mat list P

P[3,3]
            e1          e2          e3
e1   .00838539           0           0
e2  -.02380259   .26616909           0
e3   .29854297   -.4021685   1.4708232

. mat list P1

symmetric P1[3,3]
            r1          r2          r3
r1   .00838539
r2   3.232e-18   .26616909
r3  -5.172e-17   1.518e-16   1.4708232

Compute the contemporaneous matrix $B$

Note that $e_t = B^{-1}\epsilon_t$, which means we can estimate the unknown elements $B_{21}$, $B_{31}$, and $B_{32}$ from the relationships among the reduced-form residuals.
$$\begin{pmatrix}\epsilon_{1,t}\\ \epsilon_{2,t}\\ \epsilon_{3,t}\end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ B_{21} & 1 & 0 \\ B_{31} & B_{32} & 1 \end{pmatrix}\begin{pmatrix}e_{1,t}\\ e_{2,t}\\ e_{3,t}\end{pmatrix}$$

To be exact, the above matrix equation implies
$$\begin{aligned} e_{2,t} &= \epsilon_{2,t}-B_{21}e_{1,t}\\ e_{3,t} &= \epsilon_{3,t} - B_{31}e_{1,t} - B_{32}e_{2,t} \end{aligned}$$
That is to say, we can estimate $B_{21}$ by regressing the reduced-form residual $e_{2,t}$ on $e_{1,t}$, and estimate $B_{31}$ and $B_{32}$ by regressing $e_{3,t}$ on $e_{1,t}$ and $e_{2,t}$. The regression coefficients equal $-B_{21}$, $-B_{31}$, and $-B_{32}$, hence the sign flips in the code below.

* estimate the off-diagonal elements of B from the reduced-form residuals
qui reg e2 e1
global B21 = -e(b)[1,1]
qui reg e3 e1 e2
global B31 = -e(b)[1,1]
global B32 = -e(b)[1,2]
* construct matrix B
mat B = (1,0,0 \ $B21,1,0 \ $B31,$B32,1)

The manually constructed matrix $B$ is as follows.

. mat list B

B[3,3]
            c1          c2          c3
r1           1           0           0
r2   2.8385778           1           0
r3  -31.313795   1.5109512           1

Compute IRFs $\Phi(L)$, Orthogonalized IRFs $\Theta(L)$, and Structural IRFs $\Psi(L)$

I follow the recursion from Lütkepohl (2007) to calculate the IRFs.
$$\Phi_0 = I_k, \qquad \Phi_i = \sum_{j=1}^{i}\Phi_{i-j}A_j$$
I postmultiply the IRF $\Phi_i$ by $P$ to get the orthogonalized IRFs $\Theta_i$.

I postmultiply the IRF $\Phi_i$ by $B^{-1}P_1$ to get the (orthogonalized) structural IRFs $\Psi_i$.

The number of forecast steps is set to 15 in this blog.

* estimate IRFs, OIRFs, and SIRFs
mat irf0 = I($numnames)
mat sirf0 = irf0*inv(B)*P1
mat oirf0 = irf0*P
forvalues i = 1/15 {
	* Phi_i = sum_{j=1}^{min(i,p)} Phi_{i-j}*A_j, as in Equation (7)
	mat irf`i' = J($numnames,$numnames,0)
	forvalues j = 1/$lagorder {
		if `i' >= `j' {
			local temp = `i'-`j'
			mat temp2 = irf`temp'*A`j'
			mat irf`i' = irf`i' + temp2
		}
	}
	mat sirf`i' = irf`i'*inv(B)*P1
	mat oirf`i' = irf`i'*P
}

At this stage, we have the IRF, OIRF, and SIRF matrices for each forward-looking step. For ease of inspection, I collect the outputs of all steps and write them into a new dataset named fullirfs.dta.

* collect irf matrix into dataset
cap program drop reshapemat 
cap program define reshapemat
cap mat drop c`1'
forvalues i = 0/15 {
	mat colnames `1'`i' = $names
	mat rownames `1'`i' = $names
	mat temp1=vec(`1'`i')
	mat c`1' = nullmat(c`1') \ temp1
}
mat colnames c`1' = "`1'"
end

* write identifiers
local irfnames "irf sirf oirf"
cap mat drop fullirfs
foreach name in `irfnames' {
	reshapemat  `name'
	mat fullirfs = (nullmat(fullirfs), c`name')
}
mat list fullirfs

* save matrix as dataset
clear
svmat fullirfs, names(col)
g rownames = ""
local rownames : rowfullnames coirf
local c : word count `rownames'
forvalues i = 1/`c' {
    qui replace rownames = "`:word `i' of `rownames''" in `i'
}

* get impulse and response
split rownames, p(":")
rename rownames2 response
rename rownames1 impulse
drop rownames

* tag forward-looking steps
g step = floor((_n-1)/9)

save fullirfs, replace

Please see below the first 18 rows of this dataset fullirfs.dta, which contains all the manually collected IRFs.

. list impulse response step *irf in 1/18

     +------------------------------------------------------------------+
     |   impulse    response   step         irf        sirf        oirf |
     |------------------------------------------------------------------|
  1. | inflation   inflation      0           1    .0083854    .0083854 |
  2. | inflation      unrate      0           0   -.0238026   -.0238026 |
  3. | inflation         ffr      0           0     .298543     .298543 |
  4. |    unrate   inflation      0           0           0           0 |
  5. |    unrate      unrate      0           1    .2661691    .2661691 |
     |------------------------------------------------------------------|
  6. |    unrate         ffr      0           0   -.4021685   -.4021685 |
  7. |       ffr   inflation      0           0           0           0 |
  8. |       ffr      unrate      0           0           0           0 |
  9. |       ffr         ffr      0           1    1.470823    1.470823 |
 10. | inflation   inflation      1     1.16245    .0099384    .0099384 |
     |------------------------------------------------------------------|
 11. | inflation      unrate      1   -1.231156    -.030278    -.030278 |
 12. | inflation         ffr      1   -4.707449    .0948963    .0948963 |
 13. |    unrate   inflation      1   -.0070429   -.0019058   -.0019058 |
 14. |    unrate      unrate      1    1.329821    .3381975    .3381975 |
 15. |    unrate         ffr      1   -2.044818   -.6597117   -.6597117 |
     |------------------------------------------------------------------|
 16. |       ffr   inflation      1    .0000775     .000114     .000114 |
 17. |       ffr      unrate      1    .0391868    .0576369    .0576369 |
 18. |       ffr         ffr      1    .2870547    .4222067    .4222067 |
     +------------------------------------------------------------------+

Compute the Forecast Error Variance Decomposition (FEVD)

Based on the collected IRF dataset fullirfs.dta, I use the following formula to decompose the mean squared forecast errors, where $\theta_{js,i}$ is the orthogonalized IRF of the impulse in variable $s$ on variable $j$ at lag $i$. The Mean Squared Error (MSE) and variance decomposition for each step are saved into a new dataset fevds.dta.
$$\Omega_{js,h} = \frac{\sum_{i=0}^{h-1} \theta_{j s, i}^2}{\sum_{i=0}^{h-1} \sum_{s=1}^k \theta_{j s, i}^2}$$

* import oirfs
use fullirfs, replace

* calculate the squared oirfs
g sqoirf = oirf^2

* calculate the MSE and variables' variance contribution for each step
sort step response impulse
by step response: egen temp = sum(sqoirf)
sort response impulse step
by response impulse: g mse = sum(temp)
by response impulse: g cvarcontri = sum(sqoirf)

* calculate fevd
g fevd_manual = cvarcontri/mse
keep step response impulse mse fevd_manual

* adjust forward-looking steps
replace step = step +1 

save fevds, replace

Please see below the first 18 rows of the dataset fevds.dta, which contains the MSE and variance decomposition results.

. list impulse response step fevd_manual mse in 1/18

     +----------------------------------------------------+
     |   impulse    response   step   fevd_m~l        mse |
     |----------------------------------------------------|
  1. | inflation   inflation      1          1   .0000703 |
  2. |    unrate   inflation      1          0   .0000703 |
  3. |       ffr   inflation      1          0   .0000703 |
  4. | inflation      unrate      1   .0079337   .0714126 |
  5. |    unrate      unrate      1   .9920663   .0714126 |
     |----------------------------------------------------|
  6. |       ffr      unrate      1          0   .0714126 |
  7. | inflation         ffr      1   .0369184   2.414188 |
  8. |    unrate         ffr      1   .0669954   2.414188 |
  9. |       ffr         ffr      1   .8960862   2.414188 |
 10. | inflation   inflation      2   .9788982   .0001727 |
     |----------------------------------------------------|
 11. |    unrate   inflation      2   .0210266   .0001727 |
 12. |       ffr   inflation      2   .0000752   .0001727 |
 13. | inflation      unrate      2   .0078057   .1900288 |
 14. |    unrate      unrate      2   .9747127   .1900288 |
 15. |       ffr      unrate      2   .0174816   .1900288 |
     |----------------------------------------------------|
 16. | inflation         ffr      2    .032316   3.036672 |
 17. |    unrate         ffr      2   .1965833   3.036672 |
 18. |       ffr         ffr      2   .7711006   3.036672 |
     +----------------------------------------------------+

Compare Manually Computed Results with Benchmark

To check the validity of my manual computations, I list them alongside the benchmark to see whether they are consistent.

Check reduced-form coefficients

I first reshape the coefficient matrix of the benchmark so that it has the same shape as my manually computed coefficient matrix beta.

* reshape 
mata
	beta_bench = st_matrix("beta_bench")
	beta_bench = rowshape(beta_bench, $numnames)'
	st_matrix("beta_bench", beta_bench)
end

mat betas = (beta, beta_bench)
mat list betas

Then I put them together into a matrix called betas, where the first three columns are manually computed coefficients and the last three columns are coefficients produced by svar. The betas matrix is as follows. Clearly, they are exactly the same.

. mat list betas

betas[22,6]
              inflation      unrate         ffr          c1          c2          c3
l1inflation   1.1624496  -1.2311559  -4.7074491   1.1624496  -1.2311559  -4.7074491
l2inflation  -.38442299   3.4434416    14.62629  -.38442299   3.4434415    14.62629
l3inflation   .33067581  -.10661715  -13.797021   .33067581  -.10661715  -13.797021
l4inflation  -.19803137   1.2593467   18.506895  -.19803137   1.2593467   18.506895
l5inflation   .25709458  -2.6615204  -8.1949518   .25709458  -2.6615204  -8.1949518
l6inflation  -.08613706   .80816382   14.682903  -.08613706   .80816382   14.682903
l7inflation  -.07969876  -1.4801693  -22.071136  -.07969876  -1.4801693  -22.071136
   l1unrate  -.00704289   1.3298207  -2.0448181  -.00704289   1.3298207  -2.0448181
   l2unrate   .00719199   .00238958   1.5423312   .00719199   .00238958   1.5423312
   l3unrate   .00217432  -.34914135   .45998626   .00217432  -.34914135   .45998626
   l4unrate   .00246865  -.18703379   .28127488   .00246865  -.18703379   .28127488
   l5unrate  -.00964455   .26099715   .38580282  -.00964455   .26099715   .38580282
   l6unrate   .00855645  -.08923544  -1.1106103   .00855645  -.08923544  -1.1106103
   l7unrate  -.00384815   -.0158381   .44636919  -.00384815   -.0158381   .44636919
      l1ffr    .0000775    .0391868   .28705469    .0000775    .0391868   .28705469
      l2ffr   .00092119    .0133781   .17792134   .00092119    .0133781   .17792134
      l3ffr   .00073382   -.0020393   .44964119   .00073382   -.0020393   .44964119
      l4ffr   .00051456  -.04211502    .2517726   .00051456  -.04211502    .2517726
      l5ffr  -.00075606  -.01041068   .05375644  -.00075606  -.01041068   .05375644
      l6ffr  -.00027879  -.00274284  -.16801173  -.00027879  -.00274284  -.16801173
      l7ffr  -.00085278   .01797453  -.22845819  -.00085278   .01797453  -.22845819
         c1   .00707448   .12277881   1.0777927   .00707448   .12277881   1.0777927

Check parameter matrices $\Sigma_e$ and $B$

The code and results are as follows. Clearly, they are exactly the same.

. * check variance-covariance matrix
. mat list sigma_e
symmetric sigma_e[3,3]
            e1          e2          e3
e1   .00007031
e2  -.00019959   .07141255
e3    .0025034  -.11415092   2.4141883

. mat list sigma_e_bench
symmetric sigma_e_bench[3,3]
            inflation      unrate         ffr
inflation   .00007031
   unrate  -.00019959   .07141255
      ffr    .0025034  -.11415092   2.4141882

. * check matrix B
. mat list B
B[3,3]
            c1          c2          c3
r1           1           0           0
r2   2.8385778           1           0
r3  -31.313795   1.5109512           1
. mat list B_bench
B_bench[3,3]
            inflation      unrate         ffr
inflation           1           0           0
   unrate   2.8385778           1           0
      ffr  -31.313794   1.5109512           1

Check IRF, OIRF, and SIRF

I merge my manually computed IRF dataset fullirfs.dta with the dataset myirf.irf produced by the svar command, using the impulse name, response name, and forward-looking step as the joining keys.

* check irfs
use myirf.irf, replace
rename *irf b*irf
joinby impulse response step using fullirfs
order impulse response step birf irf boirf oirf bsirf sirf
format *irf %6.0g
list impulse response step birf irf boirf oirf bsirf sirf in 1/20

I prefix b to the names of all the IRF variables in the benchmark dataset and list the first 18 rows of the comparison below. Clearly, they are exactly the same.

. list impulse response step birf irf boirf oirf bsirf sirf in 1/18

     +------------------------------------------------------------------------------------------+
     |   impulse    response   step      birf       irf     boirf      oirf     bsirf      sirf |
     |------------------------------------------------------------------------------------------|
  1. | inflation   inflation      0         1         1     .0084     .0084     .0084     .0084 |
  2. |    unrate   inflation      0         0         0         0         0         0         0 |
  3. |       ffr   inflation      0         0         0         0         0         0         0 |
  4. | inflation      unrate      0         0         0    -.0238    -.0238    -.0238    -.0238 |
  5. |    unrate      unrate      0         1         1     .2662     .2662     .2662     .2662 |
     |------------------------------------------------------------------------------------------|
  6. |       ffr      unrate      0         0         0         0         0         0         0 |
  7. | inflation         ffr      0         0         0     .2985     .2985     .2985     .2985 |
  8. |    unrate         ffr      0         0         0    -.4022    -.4022    -.4022    -.4022 |
  9. |       ffr         ffr      0         1         1     1.471     1.471     1.471     1.471 |
 10. | inflation   inflation      1     1.162     1.162     .0099     .0099     .0099     .0099 |
     |------------------------------------------------------------------------------------------|
 11. |    unrate   inflation      1     -.007     -.007    -.0019    -.0019    -.0019    -.0019 |
 12. |       ffr   inflation      1   7.7e-05   7.7e-05   1.1e-04   1.1e-04   1.1e-04   1.1e-04 |
 13. | inflation      unrate      1    -1.231    -1.231    -.0303    -.0303    -.0303    -.0303 |
 14. |    unrate      unrate      1      1.33      1.33     .3382     .3382     .3382     .3382 |
 15. |       ffr      unrate      1     .0392     .0392     .0576     .0576     .0576     .0576 |
     |------------------------------------------------------------------------------------------|
 16. | inflation         ffr      1    -4.707    -4.707     .0949     .0949     .0949     .0949 |
 17. |    unrate         ffr      1    -2.045    -2.045    -.6597    -.6597    -.6597    -.6597 |
 18. |       ffr         ffr      1     .2871     .2871     .4222     .4222     .4222     .4222 |
     +------------------------------------------------------------------------------------------+

Check MSE and FEVD

I merge my manually computed variance decomposition dataset fevds.dta with the dataset myirf.irf produced by the svar command, using the impulse name, response name, and forward-looking step as the joining keys. Note that the variable mse stored in the myirf.irf dataset is actually the square root of the MSE, so I use its square as the benchmark for my manually computed MSE.

* check fevd
use myirf.irf, replace
rename fevd bfevd
g bmse = mse^2
rename mse rmse
joinby impulse response step using fevds
order impulse response step bfevd fevd_manual bmse mse
list impulse response step bfevd fevd_manual bmse mse in 1/18

I prefix b to the names of the MSE and FEVD variables in the benchmark dataset and list the first 18 rows of the comparison below. Clearly, they are exactly the same.

. list impulse response step bfevd fevd_manual bmse mse in 1/18

     +---------------------------------------------------------------------------+
     |   impulse    response   step       bfevd   fevd_m~l       bmse        mse |
     |---------------------------------------------------------------------------|
  1. | inflation   inflation      1           1          1   .0000703   .0000703 |
  2. |    unrate   inflation      1           0          0   .0000703   .0000703 |
  3. |       ffr   inflation      1           0          0   .0000703   .0000703 |
  4. | inflation      unrate      1   .00793366   .0079337   .0714125   .0714126 |
  5. |    unrate      unrate      1   .99206634   .9920663   .0714125   .0714126 |
     |---------------------------------------------------------------------------|
  6. |       ffr      unrate      1           0          0   .0714125   .0714126 |
  7. | inflation         ffr      1   .03691837   .0369184   2.414188   2.414188 |
  8. |    unrate         ffr      1    .0669954   .0669954   2.414188   2.414188 |
  9. |       ffr         ffr      1   .89608622   .8960862   2.414188   2.414188 |
 10. | inflation   inflation      2   .97889819   .9788982   .0001727   .0001727 |
     |---------------------------------------------------------------------------|
 11. |    unrate   inflation      2   .02102659   .0210266   .0001727   .0001727 |
 12. |       ffr   inflation      2   .00007522   .0000752   .0001727   .0001727 |
 13. | inflation      unrate      2   .00780575   .0078057   .1900288   .1900288 |
 14. |    unrate      unrate      2   .97471266   .9747127   .1900288   .1900288 |
 15. |       ffr      unrate      2   .01748159   .0174816   .1900288   .1900288 |
     |---------------------------------------------------------------------------|
 16. | inflation         ffr      2   .03231605    .032316   3.036672   3.036672 |
 17. |    unrate         ffr      2   .19658335   .1965833   3.036672   3.036672 |
 18. |       ffr         ffr      2   .77110061   .7711006   3.036672   3.036672 |
     +---------------------------------------------------------------------------+

Complete Code

**# Manually compute VAR
* prepare data
use varsample.dta, clear
tsset yq

* define global variables
global names "inflation unrate ffr"
global lagorder 7
global numnames 3

* generate lag variables 
foreach var in $names {
	forvalues j = 1/$lagorder {
		cap g l`j'`var' = l`j'.`var'
	}
}

* put x and y of the reduced-form VAR into the matrix
mkmat $names, matrix(Y)
mat Y = Y[$lagorder+1..rowsof(Y), 1..colsof(Y)]
mkmat l*inflation l*unrate l*ffr, matrix(X) nomiss
mat X = (X, J(rowsof(X), 1, 1))

* estimate the OLS coefficients of the reduced-form VAR
mat beta = inv(X'*X)*(X'*Y)
mat list beta

* reshape beta to generate reduced-form VAR coefficient matrices A1-Ap
forvalues i = 1/$lagorder {
	mat A`i' = (beta["l`i'inflation", 1..3]\beta["l`i'unrate", 1..3]\beta["l`i'ffr", 1..3])'
	mat list A`i'
}

* compute sigma_e
mat e=Y-X*beta
mat e=J($lagorder,$numnames,.) \ e
svmat e
mat accum sigma_e = e1 e2 e3, deviations noconstant
mat sigma_e = sigma_e/(_N-$lagorder)
mat list sigma_e

* decompose sigma_e
mat P = cholesky(sigma_e)
mat list P

* estimate the off-diagonal elements of B from the reduced-form residuals
qui reg e2 e1
global B21 = -e(b)[1,1]
qui reg e3 e1 e2
global B31 = -e(b)[1,1]
global B32 = -e(b)[1,2]
* construct matrix B
mat B = (1,0,0 \ $B21,1,0 \ $B31,$B32,1)
mat list B

* compute and decompose sigma_epsilon (requires B from the step above)
mat sigma_epsilon = B*sigma_e*B'
mat P1 = cholesky(sigma_epsilon)
mat list P1


* estimate IRFs, OIRFs, and SIRFs
mat irf0 = I($numnames)
mat sirf0 = irf0*inv(B)*P1
mat oirf0 = irf0*P
forvalues i = 1/15 {
	* Phi_i = sum_{j=1}^{min(i,p)} Phi_{i-j}*A_j, as in Equation (7)
	mat irf`i' = J($numnames,$numnames,0)
	forvalues j = 1/$lagorder {
		if `i' >= `j' {
			local temp = `i'-`j'
			mat temp2 = irf`temp'*A`j'
			mat irf`i' = irf`i' + temp2
		}
	}
	mat sirf`i' = irf`i'*inv(B)*P1
	mat oirf`i' = irf`i'*P
}


* collect irf matrix into dataset fullirfs.dta
cap program drop reshapemat 
cap program define reshapemat
cap mat drop c`1'
forvalues i = 0/15 {
	mat colnames `1'`i' = $names
	mat rownames `1'`i' = $names
	mat temp1=vec(`1'`i')
	mat c`1' = nullmat(c`1') \ temp1
}
mat colnames c`1' = "`1'"
end

local irfnames "irf sirf oirf"
cap mat drop fullirfs
foreach name in `irfnames' {
	reshapemat  `name'
	mat fullirfs = (nullmat(fullirfs), c`name')
}
mat list fullirfs

clear
svmat fullirfs, names(col)
g rownames = ""
local rownames : rowfullnames coirf
local c : word count `rownames'
forvalues i = 1/`c' {
    qui replace rownames = "`:word `i' of `rownames''" in `i'
}

split rownames, p(":")
rename rownames2 response
rename rownames1 impulse
drop rownames

g step = floor((_n-1)/9)
save fullirfs, replace

* calculate mse and fevd, save them into dataset fevds.dta
use fullirfs, replace
* calculate the squared oirfs
g sqoirf = oirf^2
* calculate the MSE of each step
sort step response impulse
by step response: egen temp = sum(sqoirf)
sort response impulse step
by response impulse: g mse = sum(temp)
by response impulse: g cvarcontri = sum(sqoirf)
g fevd_manual = cvarcontri/mse
keep step response impulse mse fevd_manual
replace step = step +1 
save fevds, replace

**#  generate benchmark using `svar' command
use varsample.dta, clear
tsset yq
matrix A1 = (1,0,0 \ .,1,0 \ .,.,1)
matrix B1 = (.,0,0 \ 0,.,0 \ 0,0,.)
svar $names, lags(1/7) aeq(A1) beq(B1)
irf create forblog, step(15) set(myirf) replace
mat sigma_e_bench = e(Sigma)
mat B_bench = e(A)
mat beta_bench = e(b_var)

**# Compare manually computed results with benchmark
* check variance-covariance matrix
mat list sigma_e
mat list sigma_e_bench

* check matrix B
mat list B
mat list B_bench

* check reduced-form coefficients
mata
	beta_bench = st_matrix("beta_bench")
	beta_bench = rowshape(beta_bench, $numnames)'
	st_matrix("beta_bench", beta_bench)
end

mat betas = (beta, beta_bench)
mat list betas

* check irfs
use myirf.irf, replace
rename *irf b*irf
joinby impulse response step using fullirfs
order impulse response step birf irf boirf oirf bsirf sirf
format *irf %6.0g
list impulse response step birf irf boirf oirf bsirf sirf in 1/18

* check fevd
use myirf.irf, replace
rename fevd bfevd
g bmse = mse^2
rename mse rmse
joinby impulse response step using fevds
order impulse response step bfevd fevd_manual bmse mse
list impulse response step bfevd fevd_manual bmse mse in 1/18

Summary

In this blog, I first gave the theoretical definitions and computation formulas for the reduced-form coefficients, IRF, OIRF, SIRF, MSE, and FEVD of the VAR model. I then manually computed all of these outputs in Stata following those definitions. Finally, I compared my manually computed outputs with those produced by the integrated command svar to check the validity of my calculations.

While anyone can produce the above results with the integrated command svar in seconds, how these results are produced is unclear to many people, which deters them from confidently using VAR outputs in their own research. I hope this blog helps mitigate this issue, gives readers a more thorough understanding of VAR estimation, and lets them use the outputs of VAR estimation with more confidence.

References

  1. Lütkepohl, Helmut. New Introduction to Multiple Time Series Analysis. Springer, 2007.
  2. Enders, Walter. Applied Econometric Time Series, 2nd Edition. Wiley, 2004.