sklearn.linear_model.ridge_regression

sklearn.linear_model.ridge_regression(X, y, alpha, sample_weight=None, solver='auto', max_iter=None, tol=0.001, verbose=0, random_state=None, return_n_iter=False, return_intercept=False, check_input=True)

Solve the ridge equation by the method of normal equations.
Read more in the User Guide.
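A minimal usage sketch (not part of the original reference; the data here is synthetic and illustrative only):

>>> import numpy as np
>>> from sklearn.linear_model import ridge_regression
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 5)                                    # training data, shape (n_samples, n_features)
>>> y = X.dot(np.array([1., 2., 0., -1., .5])) + 0.1 * rng.randn(100)
>>> coef = ridge_regression(X, y, alpha=1.0)                 # returns only the weight vector by default
>>> coef.shape
(5,)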
Parameters:

- X : {array-like, sparse matrix, LinearOperator}, shape = [n_samples, n_features]
Training data.
- y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values.
- alpha : {float, array-like}, shape = [n_targets] if array-like
Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets, so the array must have one entry per target (see the sketch after this parameter list).
- sample_weight : float or numpy array of shape [n_samples]
Individual weights for each sample. If sample_weight is not None and solver=’auto’, the solver will be set to ‘cholesky’.
New in version 0.17.
- solver : {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
Solver to use in the computational routines:
- ‘auto’ chooses the solver automatically based on the type of data.
- ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’.
- ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution via a Cholesky decomposition of dot(X.T, X).
- ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (it is possible to set tol and max_iter).
- ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
- ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that the fast convergence of ‘sag’ and ‘saga’ is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
All of the last five solvers support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ support sparse input when `fit_intercept` is True.
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
- max_iter : int, optional
Maximum number of iterations for the conjugate gradient solver. For the ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For the ‘sag’ and ‘saga’ solvers, the default value is 1000.
- tol : float
Precision of the solution.
- verbose : int
Verbosity level. Setting verbose > 0 will display additional information depending on the solver used.
- random_state : int, RandomState instance or None, optional, default None
The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. Used when solver == ‘sag’.
- return_n_iter : boolean, default False
If True, the method also returns n_iter, the actual number of iterations performed by the solver (see the sketch after this parameter list).
New in version 0.17.
- return_intercept : boolean, default False
If True and if X is sparse, the method also returns the intercept, and the solver is automatically changed to ‘sag’. This is only a temporary fix for fitting the intercept with sparse data. For dense data, use sklearn.linear_model._preprocess_data before your regression.
New in version 0.17.
- check_input : boolean, default True
If False, the input arrays X and y will not be checked.
New in version 0.21.
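As referenced in the alpha and return_n_iter entries above, a short sketch of the array-valued penalty and the iteration-count return value; the shapes follow the parameter descriptions, and the data is synthetic:

>>> import numpy as np
>>> from sklearn.linear_model import ridge_regression
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 4)
>>> Y = rng.randn(50, 2)                                     # two targets
>>> coef = ridge_regression(X, Y, alpha=np.array([0.5, 2.0]))  # one penalty per target
>>> coef.shape
(2, 4)
>>> # An iterative solver can report its iteration count.
>>> coef, n_iter = ridge_regression(X, Y[:, 0], alpha=1.0,
...                                 solver='lsqr', return_n_iter=True)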
Returns:

- coef : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
- n_iter : int, optional
The actual number of iterations performed by the solver. Only returned if return_n_iter is True.
- intercept : float or array, shape = [n_targets]
The intercept of the model. Only returned if return_intercept is True and if X is a scipy sparse array.
Notes
This function won’t compute the intercept.
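Since no intercept is fitted, a common workaround for dense data is to center X and y beforehand and recover the intercept from the means (return_intercept=True handles the sparse case, as described above). A sketch of the manual approach, on synthetic data:

>>> import numpy as np
>>> from sklearn.linear_model import ridge_regression
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(80, 3)
>>> y = 4.0 + X.dot(np.array([1., -2., .5])) + 0.1 * rng.randn(80)
>>> X_mean, y_mean = X.mean(axis=0), y.mean()    # center by hand
>>> coef = ridge_regression(X - X_mean, y - y_mean, alpha=1.0)
>>> intercept = y_mean - X_mean.dot(coef)        # recover the intercept from the means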