Abstract
Algorithm paper: [Sparse inverse covariance estimation with the graphical lasso](http://statweb.stanford.edu/~tibs/ftp/graph.pdf)
We consider a method for estimating a covariance matrix on the basis of a sample of vectors
drawn from a multivariate normal distribution. In particular, we penalize the likelihood with a
lasso penalty on the entries of the covariance matrix. This penalty plays two important roles: it
reduces the effective number of parameters, which is important even when the dimension of the
vectors is smaller than the sample size since the number of parameters grows quadratically in the
number of variables, and it produces an estimate which is sparse.
In the words of Dempster (1972),
“The computational ease with which this abundance of parameters can be estimated
should not be allowed to obscure the probable unwisdom of such estimation from
limited data.” (The abstract and quotation above are from http://faculty.bscb.cornell.edu/~bien/papers/biometrika2011spcov.pdf)
If the ij-th entry of the inverse covariance matrix is zero, then variables i and j are conditionally independent given the other variables. (From the first paper above, on the graphical lasso.)
Zeros in the covariance matrix itself correspond to marginal independencies. (From the second paper above, on sparse covariance estimation.)
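The distinction between the two statements above can be seen numerically: a zero in the precision (inverse covariance) matrix does not produce a zero in the covariance matrix. A small NumPy sketch (the matrix values are illustrative, not from either paper):

```python
import numpy as np

# A sparse precision (inverse covariance) matrix: variables 1 and 3 are
# conditionally independent given variable 2, since the (0, 2) entry is zero.
prec = np.array([[2.0, 0.6, 0.0],
                 [0.6, 2.0, 0.6],
                 [0.0, 0.6, 2.0]])

cov = np.linalg.inv(prec)
print(np.round(cov, 3))
# The (0, 2) entry of the covariance is NOT zero: conditional independence
# does not imply marginal independence.
```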
from sklearn.covariance import GraphicalLasso  # named GraphLasso in older scikit-learn versions
Sparse inverse covariance estimation with an l1-penalized estimator.
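The objective optimized in the graphical lasso paper is the l1-penalized Gaussian log-likelihood over the precision matrix Θ, with S the empirical covariance matrix and ρ the penalty parameter:

```latex
\max_{\Theta \succ 0}\; \log\det\Theta \;-\; \operatorname{tr}(S\Theta) \;-\; \rho\,\lVert\Theta\rVert_1
```

Larger ρ drives more entries of Θ to exactly zero, i.e. more conditional independencies.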
The key ideas here are coordinate descent and LARS (least angle regression): each row/column update of the precision matrix reduces to a lasso regression problem, which the paper solves by coordinate descent.
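A minimal usage sketch with scikit-learn (assuming a version where the estimator is called `GraphicalLasso`; the tridiagonal true precision matrix and the `alpha` value are illustrative choices, not from the papers):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)

# True model: a sparse, tridiagonal precision matrix (a "chain" of
# conditional dependencies between neighboring variables).
p = 5
true_prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
true_cov = np.linalg.inv(true_prec)

# Draw a sample from the corresponding multivariate normal.
X = rng.multivariate_normal(np.zeros(p), true_cov, size=500)

# alpha is the l1 penalty (rho in the objective above); larger alpha
# yields a sparser estimated precision matrix.
model = GraphicalLasso(alpha=0.2).fit(X)
est_prec = model.precision_
print(np.round(est_prec, 2))
```

Entries of `est_prec` far from the diagonal are shrunk toward zero, recovering the chain structure.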