sklearn.linear_model.ridge_regression

sklearn.linear_model.ridge_regression(X, y, alpha, *, sample_weight=None, solver='auto', max_iter=None, tol=0.001, verbose=0, random_state=None, return_n_iter=False, return_intercept=False, check_input=True)

Solve the ridge equation by the method of normal equations.

Read more in the User Guide.

Parameters
X : {ndarray, sparse matrix, LinearOperator} of shape (n_samples, n_features)
    Training data.
y : ndarray of shape (n_samples,) or (n_samples, n_targets)
    Target values.
alpha : float or array-like of shape (n_targets,)
    Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
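    As an illustration (a minimal sketch with made-up random data, not part of the original reference), a scalar alpha applies one shared penalty to every target, while an array gives one penalty per target:

    >>> import numpy as np
    >>> from sklearn.linear_model import ridge_regression
    >>> rng = np.random.RandomState(0)
    >>> X = rng.randn(50, 4)
    >>> Y = rng.randn(50, 2)  # two targets
    >>> coef_shared = ridge_regression(X, Y, alpha=1.0)                      # one shared penalty
    >>> coef_per_target = ridge_regression(X, Y, alpha=np.array([0.5, 2.0]))  # one penalty per target
    >>> coef_shared.shape, coef_per_target.shape
    ((2, 4), (2, 4))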
sample_weight : float or array-like of shape (n_samples,), default=None
    Individual weights for each sample. If given a float, every sample will have the same weight. If sample_weight is not None and solver='auto', the solver will be set to 'cholesky'.

    New in version 0.17.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga'}, default='auto'
    Solver to use in the computational routines:

    - 'auto' chooses the solver automatically based on the type of data.
    - 'svd' uses a Singular Value Decomposition of X to compute the Ridge coefficients. It is more stable for singular matrices than 'cholesky'.
    - 'cholesky' uses the standard scipy.linalg.solve function to obtain a closed-form solution via a Cholesky decomposition of dot(X.T, X).
    - 'sparse_cg' uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than 'cholesky' for large-scale data (it is possible to set tol and max_iter).
    - 'lsqr' uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
    - 'sag' uses Stochastic Average Gradient descent, and 'saga' uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure and are often faster than other solvers when both n_samples and n_features are large. Note that fast convergence of 'sag' and 'saga' is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing (see the sketch after this list).
    All of the last five solvers support both dense and sparse data. However, only 'sag' and 'sparse_cg' support sparse input when fit_intercept is True.

    New in version 0.17: Stochastic Average Gradient descent solver.

    New in version 0.19: SAGA solver.
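    A minimal sketch (the random data and the use of StandardScaler are illustrative assumptions, not part of this reference) of scaling features before solving with 'saga':

    >>> import numpy as np
    >>> from sklearn.preprocessing import StandardScaler
    >>> from sklearn.linear_model import ridge_regression
    >>> rng = np.random.RandomState(0)
    >>> X = rng.randn(200, 6) * np.array([1.0, 1e3, 1.0, 1e3, 1.0, 1e3])  # badly scaled features
    >>> y = rng.randn(200)
    >>> X_scaled = StandardScaler().fit_transform(X)  # 'sag'/'saga' expect comparable feature scales
    >>> coef = ridge_regression(X_scaled, y, alpha=1.0, solver='saga',
    ...                         max_iter=1000, tol=1e-4, random_state=0)
    >>> coef.shape
    (6,)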
max_iter : int, default=None
    Maximum number of iterations for the conjugate gradient solver. For the 'sparse_cg' and 'lsqr' solvers, the default value is determined by scipy.sparse.linalg. For the 'sag' and 'saga' solvers, the default value is 1000.
tol : float, default=1e-3
    Precision of the solution.
verbose : int, default=0
    Verbosity level. Setting verbose > 0 will display additional information depending on the solver used.
random_state : int, RandomState instance, default=None
    Used when solver == 'sag' or 'saga' to shuffle the data. See Glossary for details.
return_n_iter : bool, default=False
    If True, the method also returns n_iter, the actual number of iterations performed by the solver.

    New in version 0.17.
return_intercept : bool, default=False
    If True and if X is sparse, the method also returns the intercept, and the solver is automatically changed to 'sag'. This is only a temporary fix for fitting the intercept with sparse data. For dense data, use sklearn.linear_model._preprocess_data before your regression.

    New in version 0.17.
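    For instance (a minimal sketch; the sparse random data below is an assumption made for illustration), with sparse X and return_intercept=True both the coefficients and the intercept are returned:

    >>> import numpy as np
    >>> from scipy import sparse
    >>> from sklearn.linear_model import ridge_regression
    >>> rng = np.random.RandomState(0)
    >>> X = sparse.random(100, 5, density=0.3, format='csr', random_state=0)
    >>> y = rng.randn(100)
    >>> coef, intercept = ridge_regression(X, y, alpha=1.0, return_intercept=True)
    >>> coef.shape
    (5,)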
check_input : bool, default=True
    If False, the input arrays X and y will not be checked.

    New in version 0.21.
 
Returns
coef : ndarray of shape (n_features,) or (n_targets, n_features)
    Weight vector(s).

n_iter : int, optional
    The actual number of iterations performed by the solver. Only returned if return_n_iter is True.

intercept : float or ndarray of shape (n_targets,)
    The intercept of the model. Only returned if return_intercept is True and if X is a scipy sparse array.
 
Notes

This function won't compute the intercept.
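Examples

A basic usage sketch (the random data below is an illustrative assumption, not from the original reference). Because this function does not fit an intercept, center X and y yourself if needed, or use the Ridge estimator instead:

>>> import numpy as np
>>> from sklearn.linear_model import ridge_regression
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 3)
>>> y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.randn(100)
>>> coef = ridge_regression(X, y, alpha=1.0)
>>> coef.shape
(3,)
>>> coef, n_iter = ridge_regression(X, y, alpha=1.0, solver='lsqr', return_n_iter=True)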
