The ridge regression estimator is

$$\hat{\theta} = (X^TX + \lambda I)^{-1}X^Ty.$$
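As a quick numerical sketch of this formula (the data here is synthetic and the names `X`, `y`, `lam` are just illustrative), the estimator can be computed by solving the linear system $(X^TX + \lambda I)\hat{\theta} = X^Ty$ rather than forming the inverse explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3                     # sample count and feature dimension (arbitrary)
X = rng.standard_normal((n, d))  # design matrix
y = rng.standard_normal(n)       # targets
lam = 0.1                        # regularization strength, lambda > 0

# Ridge estimator: solve (X^T X + lambda I) theta = X^T y
theta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Using `np.linalg.solve` instead of `np.linalg.inv` is the standard choice here: it is cheaper and numerically more stable, and the proof below guarantees the system always has a unique solution.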
For any vector $v \in \mathbb{R}^d$, we have $v^TX^TXv = (Xv)^T(Xv) = \|Xv\|^2 \ge 0$, so $X^TX$ is positive semi-definite. We also know $\lambda > 0$, so $\lambda I$ is positive definite.
Therefore, for any $v \ne 0$,

$$v^T(X^TX + \lambda I)v = (Xv)^T(Xv) + \lambda v^TIv \ge \lambda \|v\|^2 > 0,$$

so $X^TX + \lambda I$ is positive definite and hence invertible.
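The eigenvalue view of the same argument can be checked numerically: every eigenvalue of $X^TX$ is nonnegative, so every eigenvalue of $X^TX + \lambda I$ is at least $\lambda$, which keeps the matrix safely invertible. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 4
X = rng.standard_normal((n, d))
lam = 0.5

A = X.T @ X + lam * np.eye(d)
# A is symmetric, so eigvalsh (for Hermitian matrices) applies.
eigvals = np.linalg.eigvalsh(A)
# Each eigenvalue of X^T X is >= 0, so each eigenvalue of A is >= lambda > 0.
smallest = eigvals.min()
```

Since the smallest eigenvalue is bounded below by $\lambda$, the condition number of the system also improves as $\lambda$ grows, which is one practical reason ridge regression is more stable than ordinary least squares when $X^TX$ is near-singular.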
Ridge Linear Regression Estimation Invertible