3.2.4.1.10. sklearn.linear_model.RidgeClassifierCV

class sklearn.linear_model.RidgeClassifierCV(alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None, store_cv_values=False)

Ridge classifier with built-in cross-validation.

See glossary entry for cross-validation estimator.

By default, it performs Generalized Cross-Validation, which is a form of efficient Leave-One-Out cross-validation. Currently, only the n_features > n_samples case is handled efficiently.

Read more in the User Guide.
Parameters:

alphas : numpy array of shape [n_alphas]
    Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC.

fit_intercept : boolean
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered).
normalize : boolean, optional, default False
    This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use sklearn.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.

scoring : string, callable or None, optional, default: None
    A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).

cv : int, cross-validation generator or an iterable, optional
    Determines the cross-validation splitting strategy. Possible inputs for cv are:

    - None, to use the efficient Leave-One-Out cross-validation
    - integer, to specify the number of folds
    - CV splitter
    - an iterable yielding (train, test) splits as arrays of indices

    Refer to the User Guide for the various cross-validation strategies that can be used here.
class_weight : dict or 'balanced', optional
    Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).

store_cv_values : boolean, default=False
    Flag indicating if the cross-validation values corresponding to each alpha should be stored in the cv_values_ attribute (see below). This flag is only compatible with cv=None (i.e. using Generalized Cross-Validation).
Attributes:

cv_values_ : array, shape = [n_samples, n_targets, n_alphas], optional
    Cross-validation values for each alpha (if store_cv_values=True and cv=None). After fit() has been called, this attribute will contain the mean squared errors (by default) or the values of the {loss,score}_func function (if provided in the constructor).

coef_ : array, shape = [n_features] or [n_targets, n_features]
    Weight vector(s).

intercept_ : float | array, shape = (n_targets,)
    Independent term in decision function. Set to 0.0 if fit_intercept = False.

alpha_ : float
    Estimated regularization parameter.
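A minimal sketch exercising the main constructor parameters and the fitted attributes above (alphas, cv, class_weight, then alpha_, coef_ and intercept_), using the bundled breast-cancer dataset for illustration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)

# Search three regularization strengths with 5-fold CV instead of the
# default Generalized Cross-Validation, weighting classes inversely
# proportional to their frequency.
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0], cv=5,
                        class_weight="balanced").fit(X, y)

print(clf.alpha_)            # the selected alpha, one of [0.1, 1.0, 10.0]
print(clf.coef_.shape)       # (1, n_features) for a binary problem
print(clf.intercept_.shape)  # (1,)
```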
See also

Ridge
    Ridge regression.
RidgeClassifier
    Ridge classifier.
RidgeCV
    Ridge regression with built-in cross-validation.

Notes

For multiclass classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multivariate response support in Ridge.
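The one-versus-all behaviour described in the Notes can be observed directly: on a 3-class problem, coef_ holds one weight vector per class. A quick sketch on the iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifierCV

X, y = load_iris(return_X_y=True)  # 3 classes, 4 features
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)

print(clf.coef_.shape)  # (3, 4): one row per class, one column per feature
print(clf.classes_)     # [0 1 2]
```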
Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifierCV
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)
>>> clf.score(X, y)
0.9630...
Methods

decision_function(X)
    Predict confidence scores for samples.
fit(X, y[, sample_weight])
    Fit the ridge classifier.
get_params([deep])
    Get parameters for this estimator.
predict(X)
    Predict class labels for samples in X.
score(X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(**params)
    Set the parameters of this estimator.
__init__(alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None, store_cv_values=False)

decision_function(X)

Predict confidence scores for samples.
The confidence score for a sample is the signed distance of that sample to the hyperplane.
Parameters:

X : array_like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns:

array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
    Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
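The two return shapes can be checked side by side; a sketch using the bundled breast-cancer (binary) and iris (3-class) datasets:

```python
from sklearn.datasets import load_breast_cancer, load_iris
from sklearn.linear_model import RidgeClassifierCV

Xb, yb = load_breast_cancer(return_X_y=True)  # binary target
Xm, ym = load_iris(return_X_y=True)           # 3 classes

scores_bin = RidgeClassifierCV().fit(Xb, yb).decision_function(Xb)
scores_multi = RidgeClassifierCV().fit(Xm, ym).decision_function(Xm)

print(scores_bin.shape)    # (n_samples,): one signed score per sample
print(scores_multi.shape)  # (n_samples, 3): one score per (sample, class)
```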

fit(X, y, sample_weight=None)

Fit the ridge classifier.

Parameters:

X : array-like, shape (n_samples, n_features)
    Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)
    Target values. Will be cast to X's dtype if necessary.

sample_weight : float or numpy array of shape (n_samples,)
    Sample weight.

Returns:

self : object

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any
    Parameter names mapped to their values.
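A short sketch: the returned mapping contains the constructor arguments by name, which is convenient for logging or cloning an estimator's configuration.

```python
from sklearn.linear_model import RidgeClassifierCV

clf = RidgeClassifierCV(alphas=[0.5, 5.0])
params = clf.get_params()

print(params["alphas"])         # [0.5, 5.0], as passed to the constructor
print(params["fit_intercept"])  # True, the default
```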

predict(X)

Predict class labels for samples in X.

Parameters:

X : array_like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns:

C : array, shape [n_samples]
    Predicted class label per sample.
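A minimal sketch: predict returns one label per row of X, drawn from the classes_ seen during fit (here on the bundled breast-cancer data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)

labels = clf.predict(X[:5])
print(labels.shape)                      # (5,): one label per sample
print(set(labels) <= set(clf.classes_))  # True: labels come from classes_
```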

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:

X : array-like, shape = (n_samples, n_features)
    Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)
    True labels for X.

sample_weight : array-like, shape = [n_samples], optional
    Sample weights.

Returns:

score : float
    Mean accuracy of self.predict(X) w.r.t. y.
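Since score is plain mean accuracy, it can be recomputed by hand as the fraction of samples where predict(X) matches y; a sketch confirming the equivalence:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)

acc = clf.score(X, y)                    # built-in mean accuracy
manual = np.mean(clf.predict(X) == y)    # recomputed by hand

print(np.isclose(acc, manual))  # True
```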

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:

self