sklearn.linear_model.PassiveAggressiveRegressor

class sklearn.linear_model.PassiveAggressiveRegressor(C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)[source]

Passive Aggressive Regressor.

Read more in the User Guide.
 Parameters
 Cfloat
Maximum step size (regularization). Defaults to 1.0.
 fit_interceptbool
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
 max_iterint, optional (default=1000)
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.
New in version 0.19.
 tolfloat or None, optional (default=1e-3)
The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).
New in version 0.19.
 early_stoppingbool, default=False
Whether to use early stopping to terminate training when the validation score is not improving. If set to True, it will automatically set aside a fraction of the training data as a validation set and terminate training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs.
New in version 0.20.
 validation_fractionfloat, default=0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.
New in version 0.20.
 n_iter_no_changeint, default=5
Number of iterations with no improvement to wait before early stopping.
New in version 0.20.
 shufflebool, default=True
Whether or not the training data should be shuffled after each epoch.
 verboseinteger, optional
The verbosity level
 lossstring, optional
The loss function to be used: epsilon_insensitive is equivalent to PA-I in the reference paper; squared_epsilon_insensitive is equivalent to PA-II in the reference paper.
 epsilonfloat
If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
 random_stateint, RandomState instance or None, optional, default=None
The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
 warm_startbool, optional
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled.
 averagebool or int, optional
When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples.
New in version 0.19: parameter average to use weights averaging in SGD.
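As a minimal sketch of the average parameter described above (the dataset and hyperparameter values here are illustrative, not prescribed by this documentation):

```python
# Compare a plain Passive-Aggressive fit with an averaged one.
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_features=4, random_state=0)

# average=True averages the per-update weight vectors over training;
# average=10 would instead start averaging after 10 samples are seen.
plain = PassiveAggressiveRegressor(max_iter=100, tol=1e-3, random_state=0)
averaged = PassiveAggressiveRegressor(max_iter=100, tol=1e-3, random_state=0,
                                      average=True)
plain.fit(X, y)
averaged.fit(X, y)

# Both store their weights in coef_, one vector entry per feature.
print(plain.coef_.shape, averaged.coef_.shape)
```

Averaging typically smooths the final weight vector at a small extra memory cost, which can help on noisy streams.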
 Attributes
 coef_array, shape = [n_features]
Weights assigned to the features.
 intercept_array, shape = [1]
Constants in decision function.
 n_iter_int
The actual number of iterations to reach the stopping criterion.
 t_int
Number of weight updates performed during training. Same as (n_iter_ * n_samples).
References
Online Passive-Aggressive Algorithms <http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf> K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer - JMLR (2006)
Examples
>>> from sklearn.linear_model import PassiveAggressiveRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = PassiveAggressiveRegressor(max_iter=100, random_state=0,
... tol=1e-3)
>>> regr.fit(X, y)
PassiveAggressiveRegressor(max_iter=100, random_state=0)
>>> print(regr.coef_)
[20.48736655 34.18818427 67.59122734 87.94731329]
>>> print(regr.intercept_)
[-0.02306214]
>>> print(regr.predict([[0, 0, 0, 0]]))
[-0.02306214]
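The early-stopping parameters (early_stopping, validation_fraction, n_iter_no_change) can be combined as in this sketch; the dataset size and values shown are illustrative:

```python
# Fit with early stopping: hold out 10% of the training data as a
# validation set and stop when the validation score has not improved
# by at least tol for 5 consecutive epochs.
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_samples=500, n_features=4, random_state=0)

regr = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3,
                                  early_stopping=True,
                                  validation_fraction=0.1,
                                  n_iter_no_change=5,
                                  random_state=0)
regr.fit(X, y)

# n_iter_ reports the epochs actually run, often well below max_iter.
print(regr.n_iter_)
```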
Methods
densify(self)  Convert coefficient matrix to dense array format.
fit(self, X, y[, coef_init, intercept_init])  Fit linear model with Passive Aggressive algorithm.
get_params(self[, deep])  Get parameters for this estimator.
partial_fit(self, X, y)  Fit linear model with Passive Aggressive algorithm.
predict(self, X)  Predict using the linear model.
score(self, X, y[, sample_weight])  Returns the coefficient of determination R^2 of the prediction.
set_params(self, *args, **kwargs)  Set the parameters of this estimator.
sparsify(self)  Convert coefficient matrix to sparse format.

__init__(self, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)[source]
Initialize self. See help(type(self)) for accurate signature.

densify(self)[source]
Convert coefficient matrix to dense array format.
Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
 Returns
 selfestimator

fit(self, X, y, coef_init=None, intercept_init=None)[source]
Fit linear model with Passive Aggressive algorithm.
 Parameters
 X{array-like, sparse matrix}, shape = [n_samples, n_features]
Training data.
 ynumpy array of shape [n_samples]
Target values.
 coef_initarray, shape = [n_features]
The initial coefficients to warm-start the optimization.
 intercept_initarray, shape = [1]
The initial intercept to warm-start the optimization.
 Returns
 selfreturns an instance of self.

get_params(self, deep=True)[source]
Get parameters for this estimator.
 Parameters
 deepboolean, optional
If True, will return the parameters for this estimator and contained subobjects that are estimators.
 Returns
 paramsmapping of string to any
Parameter names mapped to their values.

partial_fit(self, X, y)[source]
Fit linear model with Passive Aggressive algorithm.
 Parameters
 X{array-like, sparse matrix}, shape = [n_samples, n_features]
Subset of training data.
 ynumpy array of shape [n_samples]
Subset of target values.
 Returns
 selfreturns an instance of self.
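Because partial_fit performs one pass over whatever subset it is given without resetting the model, it supports incremental (online) training. A minimal sketch, with an illustrative mini-batch split:

```python
# Train incrementally by feeding the data in mini-batches.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
regr = PassiveAggressiveRegressor(random_state=0)

# Each call updates the existing weights; unlike fit, no state is erased
# between calls, so the model accumulates what it has seen so far.
for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
    regr.partial_fit(X_batch, y_batch)

print(regr.coef_.shape)
```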

predict(self, X)[source]
Predict using the linear model.
 Parameters
 X{array-like, sparse matrix}, shape (n_samples, n_features)
 Returns
 array, shape (n_samples,)
Predicted target values per element in X.

score(self, X, y, sample_weight=None)[source]
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
 Parameters
 Xarray-like, shape = (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix instead, shape = (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
 yarray-like, shape = (n_samples) or (n_samples, n_outputs)
True values for X.
 sample_weightarray-like, shape = [n_samples], optional
Sample weights.
 Returns
 scorefloat
R^2 of self.predict(X) wrt. y.
Notes
The R2 score used when calling score on a regressor will use multioutput='uniform_average' from version 0.23 to keep consistent with r2_score. This will influence the score method of all the multioutput regressors (except for MultiOutputRegressor). To specify the default value manually and avoid the warning, please either call r2_score directly or make a custom scorer with make_scorer (the built-in scorer 'r2' uses multioutput='uniform_average').
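The R^2 definition given above can be checked directly against score; a minimal sketch (dataset and hyperparameters are illustrative):

```python
# Verify that score(X, y) equals 1 - u/v computed by hand.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_features=4, random_state=0)
regr = PassiveAggressiveRegressor(max_iter=100, tol=1e-3, random_state=0)
regr.fit(X, y)

y_pred = regr.predict(X)
u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
r2 = 1 - u / v

print(np.isclose(regr.score(X, y), r2))
```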

set_params(self, *args, **kwargs)[source]
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
 Returns
 self

sparsify(self)[source]
Convert coefficient matrix to sparse format.
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The intercept_ member is not converted.
 Returns
 selfestimator
Notes
For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
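A minimal sketch of the sparsify/densify round trip on a fitted model (the dataset is illustrative):

```python
# Convert a fitted model's coefficients to sparse format and back.
import numpy as np
import scipy.sparse as sp
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_features=4, random_state=0)
regr = PassiveAggressiveRegressor(max_iter=100, tol=1e-3, random_state=0)
regr.fit(X, y)

regr.sparsify()                 # coef_ becomes a scipy.sparse matrix
print(sp.issparse(regr.coef_))

regr.densify()                  # back to the dense ndarray format,
print(type(regr.coef_))         # which partial_fit requires
```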