sklearn.covariance.graphical_lasso

sklearn.covariance.graphical_lasso(emp_cov, alpha, *, cov_init=None, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.220446049250313e-16, return_n_iter=False)

L1-penalized covariance estimator.

Read more in the User Guide.

Changed in version 0.20: graph_lasso has been renamed to graphical_lasso.

Parameters:
emp_cov : array-like of shape (n_features, n_features)

Empirical covariance from which to compute the covariance estimate.

alpha : float

The regularization parameter: the higher alpha, the more regularization and the sparser the estimated inverse covariance. Range is (0, inf] (see the sparsity sketch after this parameter list).

cov_init : array of shape (n_features, n_features), default=None

The initial guess for the covariance. If None, then the empirical covariance is used.

Deprecated since version 1.3: cov_init is deprecated in 1.3 and will be removed in 1.5. It currently has no effect.

mode : {'cd', 'lars'}, default='cd'

The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where the number of features is greater than the number of samples. Otherwise, prefer 'cd', which is more numerically stable.

tol : float, default=1e-4

The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped. Range is (0, inf].

enet_tol : float, default=1e-4

The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode=’cd’. Range is (0, inf].

max_iter : int, default=100

The maximum number of iterations.

verbose : bool, default=False

If verbose is True, the objective function and dual gap are printed at each iteration.

return_costs : bool, default=False

If return_costs is True, the objective function and dual gap at each iteration are returned.

eps : float, default=np.finfo(np.float64).eps

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems.

return_n_iter : bool, default=False

Whether or not to return the number of iterations.
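As a rough sketch of the sparsity effect of alpha referenced above (all variable names here are illustrative, and the comparison at the end holds for typical data but is not guaranteed in general):

>>> import numpy as np
>>> from sklearn.datasets import make_sparse_spd_matrix
>>> from sklearn.covariance import empirical_covariance, graphical_lasso
>>> rng = np.random.RandomState(0)
>>> true_cov = make_sparse_spd_matrix(n_dim=4, random_state=0)
>>> X = rng.multivariate_normal(mean=np.zeros(4), cov=true_cov, size=50)
>>> emp = empirical_covariance(X, assume_centered=True)
>>> _, prec_weak = graphical_lasso(emp, alpha=0.01)
>>> _, prec_strong = graphical_lasso(emp, alpha=0.5)
>>> # A stronger penalty should zero out at least as many precision entries
>>> bool(np.sum(prec_strong == 0) >= np.sum(prec_weak == 0))
True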

Returns:
covariance : ndarray of shape (n_features, n_features)

The estimated covariance matrix.

precision : ndarray of shape (n_features, n_features)

The estimated (sparse) precision matrix.

costs : list of (objective, dual_gap) pairs

The list of values of the objective function and the dual gap at each iteration. Returned only if return_costs is True (see the sketch after this list).

n_iter : int

Number of iterations. Returned only if return_n_iter is set to True.
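Continuing the sketch above, a minimal example of unpacking all four return values when both flags are set (the return order is covariance, precision, costs, n_iter; variable names are illustrative):

>>> cov, prec, costs, n_iter = graphical_lasso(
...     emp, alpha=0.05, return_costs=True, return_n_iter=True)
>>> objective, dual_gap = costs[-1]  # values from the final iteration
>>> bool(abs(dual_gap) < 1e-4)       # converged: gap fell below tol
True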

See also

GraphicalLasso

Sparse inverse covariance estimation with an l1-penalized estimator.

GraphicalLassoCV

Sparse inverse covariance with cross-validated choice of the l1 penalty.

Notes

The algorithm employed to solve this problem is the GLasso algorithm, from the Friedman 2008 Biostatistics paper. It is the same algorithm as in the R glasso package.

One possible difference from the glasso R package is that here the diagonal coefficients are not penalized.
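One observable consequence of the unpenalized diagonal, continuing the sketch from the Returns section above (this assumes, as in the current implementation, that the solver leaves the diagonal of the input covariance untouched):

>>> bool(np.allclose(np.diag(cov), np.diag(emp)))
True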

Examples

>>> import numpy as np
>>> from sklearn.datasets import make_sparse_spd_matrix
>>> from sklearn.covariance import empirical_covariance, graphical_lasso
>>> true_cov = make_sparse_spd_matrix(n_dim=3, random_state=42)
>>> rng = np.random.RandomState(42)
>>> X = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=3)
>>> emp_cov = empirical_covariance(X, assume_centered=True)
>>> cov, _ = graphical_lasso(emp_cov, alpha=0.05)
>>> cov
array([[ 1.68...,  0.21..., -0.20...],
       [ 0.21...,  0.22..., -0.08...],
       [-0.20..., -0.08...,  0.23...]])
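
As a follow-up sketch (continuing with emp_cov from the example; the name prec is an illustrative choice), capturing the second return value gives the sparse precision matrix, and its symmetry is a quick sanity check:

>>> _, prec = graphical_lasso(emp_cov, alpha=0.05)
>>> prec.shape
(3, 3)
>>> bool(np.allclose(prec, prec.T))
True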