sklearn.linear_model.Ridge

class sklearn.linear_model.Ridge(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)

Linear least squares with l2 regularization.
Minimizes the objective function:
||y - Xw||^2_2 + alpha * ||w||^2_2
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
Read more in the User Guide.
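As a quick sanity check, here is a minimal sketch (all names are local to the example) comparing the estimator's solution with the closed-form minimizer w = (X^T X + alpha*I)^-1 X^T y; fit_intercept=False is assumed so the two problems coincide:

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(20, 3)
>>> y = rng.randn(20)
>>> alpha = 1.0
>>> # Closed-form minimizer of ||y - Xw||^2_2 + alpha * ||w||^2_2
>>> w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
>>> ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
>>> np.allclose(ridge.coef_, w)
True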
- Parameters
- alpha{float, ndarray of shape (n_targets,)}, default=1.0
Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets; it must have one entry per target.
- fit_interceptbool, default=True
Whether to fit the intercept for this model. If set to False, no intercept will be used in calculations (i.e., X and y are expected to be centered).
- normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False (see the pipeline sketch after this parameter list).
- copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
- max_iterint, default=None
Maximum number of iterations for conjugate gradient solver. For ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For ‘sag’ solver, the default value is 1000.
- tolfloat, default=1e-3
Precision of the solution.
- solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’
Solver to use in the computational routines:
‘auto’ chooses the solver automatically based on the type of data.
‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’.
‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.
‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter).
‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that fast convergence of ‘sag’ and ‘saga’ is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing, as in the pipeline sketch after this parameter list.
The last five solvers support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ support sparse input when fit_intercept is True.
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
- random_stateint, RandomState instance, default=None
Used when solver == ‘sag’ or ‘saga’ to shuffle the data. See Glossary for details.
New in version 0.17: random_state to support Stochastic Average Gradient.
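Since ‘sag’ and ‘saga’ converge quickly only on similarly scaled features, and standardization via StandardScaler is the recommended preprocessing, here is a minimal pipeline sketch (this layout is one reasonable choice, not the only one):

>>> import numpy as np
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 5) * [1, 10, 100, 1e3, 1e4]  # badly scaled features
>>> y = rng.randn(50)
>>> # Standardize before fitting so 'sag' converges quickly
>>> model = make_pipeline(StandardScaler(), Ridge(alpha=1.0, solver='sag'))
>>> _ = model.fit(X, y)  # assignment suppresses the fitted-pipeline repr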
- Attributes
- coef_ndarray of shape (n_features,) or (n_targets, n_features)
Weight vector(s).
- intercept_float or ndarray of shape (n_targets,)
Independent term in decision function. Set to 0.0 if fit_intercept = False.
- n_iter_None or ndarray of shape (n_targets,)
Actual number of iterations for each target. Available only for the ‘sag’ and ‘lsqr’ solvers. Other solvers will return None.
New in version 0.17.
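A minimal sketch of the attribute shapes for a multi-target fit, including a per-target alpha array as described under Parameters (all names are local to the example):

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(10, 4)
>>> Y = rng.randn(10, 3)  # n_targets = 3
>>> model = Ridge(alpha=np.array([0.1, 1.0, 10.0])).fit(X, Y)  # one penalty per target
>>> model.coef_.shape
(3, 4)
>>> model.intercept_.shape
(3,)
>>> model.n_iter_ is None  # only 'sag' and 'lsqr' set n_iter_
True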
See also
RidgeClassifier
Ridge classifier.
RidgeCV
Ridge regression with built-in cross validation.
KernelRidge
Kernel ridge regression combines ridge regression with the kernel trick.
Examples
>>> from sklearn.linear_model import Ridge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> rng = np.random.RandomState(0)
>>> y = rng.randn(n_samples)
>>> X = rng.randn(n_samples, n_features)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y)
Ridge()
Methods
fit(X, y[, sample_weight])
Fit Ridge regression model.
get_params([deep])
Get parameters for this estimator.
predict(X)
Predict using the linear model.
score(X, y[, sample_weight])
Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params)
Set the parameters of this estimator.
fit(X, y, sample_weight=None)

Fit Ridge regression model.
- Parameters
- X{ndarray, sparse matrix} of shape (n_samples, n_features)
Training data.
- yndarray of shape (n_samples,) or (n_samples, n_targets)
Target values.
- sample_weightfloat or ndarray of shape (n_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight.
- Returns
- self
Returns an instance of self.
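A short sketch of fit with sample_weight; the weights here are purely illustrative:

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(10, 5)
>>> y = rng.randn(10)
>>> weights = np.array([2.0] * 5 + [1.0] * 5)  # first half counts double
>>> ridge = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
>>> ridge.coef_.shape
(5,)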
get_params(deep=True)

Get parameters for this estimator.
- Parameters
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
- paramsdict
Parameter names mapped to their values.
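A minimal sketch of get_params; with deep=True, a nested estimator (here a pipeline step arbitrarily named 'ridge') also reports its own parameters under prefixed keys:

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import Ridge
>>> pipe = Pipeline([('scaler', StandardScaler()), ('ridge', Ridge())])
>>> 'ridge__alpha' in pipe.get_params(deep=True)
True
>>> 'ridge__alpha' in pipe.get_params(deep=False)
False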
predict(X)

Predict using the linear model.
- Parameters
- Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples.
- Returns
- Carray, shape (n_samples,)
Returns predicted values.
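A minimal usage sketch (the data mirrors the Examples section above):

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(10, 5)
>>> y = rng.randn(10)
>>> clf = Ridge(alpha=1.0).fit(X, y)
>>> clf.predict(X[:2]).shape  # predictions for the first two samples
(2,)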
score(X, y, sample_weight=None)

Return the coefficient of determination \(R^2\) of the prediction.
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters
- Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
- yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
- Returns
- scorefloat
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
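A minimal sketch checking that score agrees with sklearn.metrics.r2_score on the same data:

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import r2_score
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(30, 4)
>>> y = rng.randn(30)
>>> ridge = Ridge(alpha=1.0).fit(X, y)
>>> np.isclose(ridge.score(X, y), r2_score(y, ridge.predict(X)))
True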
set_params(**params)

Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
- **paramsdict
Estimator parameters.
- Returns
- selfestimator instance
Estimator instance.
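A minimal sketch of the nested <component>__<parameter> form, using a pipeline step arbitrarily named 'ridge':

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import Ridge
>>> pipe = Pipeline([('scaler', StandardScaler()), ('ridge', Ridge())])
>>> _ = pipe.set_params(ridge__alpha=0.5)  # set_params returns the estimator itself
>>> pipe.get_params()['ridge__alpha']
0.5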