sklearn.linear_model.logistic_regression_path

sklearn.linear_model.logistic_regression_path(X, y, pos_class=None, Cs=10, fit_intercept=True, max_iter=100, tol=0.0001, verbose=0, solver='lbfgs', coef=None, class_weight=None, dual=False, penalty='l2', intercept_scaling=1.0, multi_class='warn', random_state=None, check_input=True, max_squared_sum=None, sample_weight=None, l1_ratio=None)

Compute a Logistic Regression model for a list of regularization parameters.
This is an implementation that uses the result of the previous model to speed up computations along the set of solutions, making it faster than sequentially calling LogisticRegression for the different parameters. Note that there will be no speedup with the liblinear solver, since it does not handle warm-starting.
Read more in the User Guide.
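The warm-starting described above can also be reproduced by hand with LogisticRegression(warm_start=True), reusing each fit as the starting point for the next C on the path. A minimal sketch (the dataset, grid size, and feature count are illustrative, not part of this function's API):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

Cs = np.logspace(-4, 4, 10)  # same logarithmic grid as Cs=10
clf = LogisticRegression(solver="lbfgs", warm_start=True)

coefs = []
for C in Cs:
    clf.set_params(C=C)
    clf.fit(X, y)            # starts from the previous solution
    coefs.append(clf.coef_.ravel().copy())

coefs = np.asarray(coefs)    # shape (n_cs, n_features)
```

logistic_regression_path performs the same loop internally, which is why it returns one coefficient vector per value in Cs.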
Parameters:  X : array-like or sparse matrix, shape (n_samples, n_features)
Input data.
 y : array-like, shape (n_samples,) or (n_samples, n_targets)
Input data, target values.
 pos_class : int, None
The class with respect to which we perform a one-vs-all fit. If None, then it is assumed that the given problem is binary.
 Cs : int | array-like, shape (n_cs,)
List of values for the regularization parameter or integer specifying the number of regularization parameters that should be used. In this case, the parameters will be chosen in a logarithmic scale between 1e-4 and 1e4.
 fit_intercept : bool
Whether to fit an intercept for the model. In this case the shape of the returned array is (n_cs, n_features + 1).
 max_iter : int
Maximum number of iterations for the solver.
 tol : float
Stopping criterion. For the newton-cg and lbfgs solvers, the iteration will stop when
max{|g_i| | i = 1, ..., n} <= tol
where g_i is the i-th component of the gradient.
 verbose : int
For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
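The stopping rule for tol can be sketched in a few lines of NumPy; the gradient values here are illustrative, not taken from any actual fit:

```python
import numpy as np

tol = 1e-4
grad = np.array([3e-5, -8e-5, 2e-6])   # hypothetical gradient at some iterate

# The solver stops once the largest absolute gradient component
# drops below tol, i.e. max{|g_i| | i = 1, ..., n} <= tol.
converged = np.max(np.abs(grad)) <= tol
```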
 solver : {‘lbfgs’, ‘newton-cg’, ‘liblinear’, ‘sag’, ‘saga’}
Numerical solver to use.
 coef : array-like, shape (n_features,), default None
Initialization value for coefficients of logistic regression. Useless for liblinear solver.
 class_weight : dict or ‘balanced’, optional
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
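The “balanced” heuristic quoted above can be checked directly; the label array below is made up for illustration:

```python
import numpy as np

y = np.array([0, 0, 0, 1])   # 3 samples of class 0, 1 sample of class 1
n_samples = len(y)
n_classes = 2

# weight_c = n_samples / (n_classes * count_c): rarer classes get
# proportionally larger weights.
weights = n_samples / (n_classes * np.bincount(y))
# class 0 -> 4 / (2 * 3) ≈ 0.667, class 1 -> 4 / (2 * 1) = 2.0
```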
 dual : bool
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
 penalty : str, ‘l1’, ‘l2’, or ‘elasticnet’
Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver.
 intercept_scaling : float, default 1.
Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight.
Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.
 multi_class : str, {‘ovr’, ‘multinomial’, ‘auto’}, default: ‘ovr’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’.
New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case.
Changed in version 0.20: Default will change from ‘ovr’ to ‘auto’ in 0.22.
 random_state : int, RandomState instance or None, optional, default None
The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. Used when solver == ‘sag’ or ‘liblinear’.
 check_input : bool, default True
If False, the input arrays X and y will not be checked.
 max_squared_sum : float, default None
Maximum squared sum of X over samples. Used only in SAG solver. If None, it will be computed, going through all the samples. The value should be precomputed to speed up cross validation.
 sample_weight : array-like, shape (n_samples,), optional
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight.
 l1_ratio : float or None, optional (default=None)
The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
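How l1_ratio blends the two penalty terms can be sketched as follows; this is an illustrative formula for the mixing behaviour, not scikit-learn's internal objective:

```python
import numpy as np

def elastic_net_penalty(w, l1_ratio):
    """Blend of L1 and L2 penalty terms controlled by l1_ratio."""
    l1 = np.sum(np.abs(w))        # pure-L1 term
    l2 = 0.5 * np.sum(w ** 2)     # pure-L2 term
    return l1_ratio * l1 + (1 - l1_ratio) * l2

w = np.array([1.0, -2.0])
# l1_ratio=1.0 recovers the L1 penalty (3.0),
# l1_ratio=0.0 recovers the L2 penalty (2.5),
# intermediate values interpolate between the two.
```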
Returns:  coefs : ndarray, shape (n_cs, n_features) or (n_cs, n_features + 1)
List of coefficients for the Logistic Regression model. If fit_intercept is set to True then the second dimension will be n_features + 1, where the last item represents the intercept. For multi_class='multinomial', the shape is (n_classes, n_cs, n_features) or (n_classes, n_cs, n_features + 1).
 Cs : ndarray
Grid of Cs used for cross-validation.
 n_iter : array, shape (n_cs,)
Actual number of iterations for each Cs.
Notes
You might get slightly different results with the solver liblinear than with the others, since this uses LIBLINEAR which penalizes the intercept.
Changed in version 0.19: The “copy” parameter was removed.