Version 0.21.3¶
Legend for changelogs¶
Major Feature : something big that you couldn’t do before.
Feature : something that you couldn’t do before.
Efficiency : an existing feature now may not require as much computation or memory.
Enhancement : a miscellaneous minor improvement.
Fix : something that previously didn’t work as documented – or according to reasonable expectations – should now work.
API Change : you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
July 30, 2019
Changed models¶
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
The v0.20.0 release notes failed to mention a backwards incompatibility in metrics.make_scorer when needs_proba=True and y_true is binary. Now, the scorer function is supposed to accept a 1D y_pred (i.e., probability of the positive class, shape (n_samples,)), instead of a 2D y_pred (i.e., shape (n_samples, 2)).
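For illustration, a minimal sketch of a custom scorer written against the new contract (the scoring function positive_class_confidence is hypothetical, not part of scikit-learn):

>>> import numpy as np
>>> from sklearn.metrics import make_scorer
>>> def positive_class_confidence(y_true, y_pred):
...     # under the new contract, y_pred has shape (n_samples,) and holds
...     # the probability of the positive class, not a (n_samples, 2) array
...     return np.mean(np.where(y_true == 1, y_pred, 1 - y_pred))
>>> scorer = make_scorer(positive_class_confidence, needs_proba=True)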
Changelog¶
sklearn.cluster¶

Fix Fixed a bug in cluster.KMeans where computation with init='random' was single threaded for n_jobs > 1 or n_jobs = -1. #12955 by Prabakaran Kumaresshan.

Fix Fixed a bug in cluster.OPTICS where users were unable to pass float min_samples and min_cluster_size. #14496 by Fabian Klopfer and Hanmin Qin.

Fix Fixed a bug in cluster.KMeans where KMeans++ initialisation could rarely result in an IndexError. #11756 by Joel Nothman.
sklearn.compose¶

Fix Fixed an issue in compose.ColumnTransformer where using DataFrames whose column order differs between fit and transform could lead to silently passing incorrect columns to the remainder transformer. #14237 by Andreas Schuderer.
sklearn.datasets¶

Fix datasets.fetch_california_housing, datasets.fetch_covtype, datasets.fetch_kddcup99, datasets.fetch_olivetti_faces, datasets.fetch_rcv1, and datasets.fetch_species_distributions try to persist the previously cached data using the new joblib if the cached data was persisted using the deprecated sklearn.externals.joblib. This behavior is set to be deprecated and removed in v0.23. #14197 by Adrin Jalali.
sklearn.ensemble¶

Fix Fixed a zero division error in ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor. #14024 by Nicolas Hug.
sklearn.impute¶

Fix Fixed a bug in impute.SimpleImputer and impute.IterativeImputer so that no errors are thrown when there are missing values in training data. #13974 by Frank Hoang.
sklearn.inspection¶

Fix Fixed a bug in inspection.plot_partial_dependence where the target parameter was not being taken into account for multiclass problems. #14393 by Guillem G. Subies.
sklearn.linear_model¶

Fix Fixed a bug in linear_model.LogisticRegressionCV where refit=False would fail depending on the 'multiclass' and 'penalty' parameters (regression introduced in 0.21). #14087 by Nicolas Hug.

Fix Compatibility fix for linear_model.ARDRegression and SciPy >= 1.3.0. Adapts to upstream changes to the default pinvh cutoff threshold which otherwise results in poor accuracy in some cases. #14067 by Tim Staley.
sklearn.neighbors¶

Fix Fixed a bug in neighbors.NeighborhoodComponentsAnalysis where the validation of the initial parameters n_components, max_iter and tol enforced overly strict types. #14092 by Jérémie du Boisberranger.
sklearn.tree¶

Fix Fixed a bug in tree.export_text when the tree has one feature and a single feature name is passed in. #14053 by Thomas Fan.

Fix Fixed an issue with plot_tree where it displayed entropy calculations even for the gini criterion in DecisionTreeClassifiers. #13947 by Frank Hoang.
Version 0.21.2¶
24 May 2019
Changelog¶
sklearn.decomposition¶

Fix Fixed a bug in cross_decomposition.CCA improving numerical stability when Y is close to zero. #13903 by Thomas Fan.
sklearn.metrics¶

Fix Fixed a bug in metrics.pairwise.euclidean_distances where a part of the distance matrix was left uninstantiated for sufficiently large float32 datasets (regression introduced in 0.21). #13910 by Jérémie du Boisberranger.
sklearn.preprocessing¶

Fix Fixed a bug in preprocessing.OneHotEncoder where the new drop parameter was not reflected in get_feature_names. #13894 by James Myatt.
sklearn.utils.sparsefuncs¶

Fix Fixed a bug where min_max_axis would fail on 32-bit systems for certain large inputs. This affects preprocessing.MaxAbsScaler, preprocessing.normalize and preprocessing.LabelBinarizer. #13741 by Roddy MacSween.
Version 0.21.1¶
17 May 2019
This is a bug-fix release to primarily resolve some packaging issues in version 0.21.0. It also includes minor documentation improvements and some bug fixes.
Changelog¶
sklearn.inspection¶

Fix Fixed a bug in inspection.partial_dependence to only check classifier and not regressor for the multiclass-multioutput case. #14309 by Guillaume Lemaitre.
sklearn.metrics¶

Fix Fixed a bug in metrics.pairwise_distances where it would raise AttributeError for boolean metrics when X had a boolean dtype and Y == None. #13864 by Paresh Mathur.

Fix Fixed two bugs in metrics.pairwise_distances when n_jobs > 1. First, it used to return a distance matrix with the same dtype as the input, even for integer dtype. Second, the diagonal was not zero for the euclidean metric when Y is X. #13877 by Jérémie du Boisberranger.
sklearn.neighbors¶

Fix Fixed a bug in neighbors.KernelDensity which could not be restored from a pickle if sample_weight had been used. #13772 by Aditya Vyas.
Version 0.21.0¶
May 2019
Changed models¶
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
discriminant_analysis.LinearDiscriminantAnalysis for multiclass classification. Fix

discriminant_analysis.LinearDiscriminantAnalysis with ‘eigen’ solver. Fix

Decision trees and derived ensembles when both max_depth and max_leaf_nodes are set. Fix

linear_model.LogisticRegression and linear_model.LogisticRegressionCV with ‘saga’ solver. Fix

sklearn.feature_extraction.text.HashingVectorizer, sklearn.feature_extraction.text.TfidfVectorizer, and sklearn.feature_extraction.text.CountVectorizer Fix

svm.SVC.decision_function and multiclass.OneVsOneClassifier.decision_function. Fix

linear_model.SGDClassifier and any derived classifiers. Fix

Any model using the linear_model._sag.sag_solver function with a 0 seed, including linear_model.LogisticRegression, linear_model.LogisticRegressionCV, linear_model.Ridge, and linear_model.RidgeCV with ‘sag’ solver. Fix

linear_model.RidgeCV when using leave-one-out cross-validation with sparse inputs. Fix
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Known Major Bugs¶
The default max_iter for linear_model.LogisticRegression is too small for many solvers given the default tol. In particular, we accidentally changed the default max_iter for the liblinear solver from 1000 to 100 iterations in #3591 released in version 0.16. In a future release we hope to choose better default max_iter and tol heuristically depending on the solver (see #13317).
Changelog¶
Support for Python 3.4 and below has been officially dropped.
sklearn.base¶

API Change The R2 score used when calling score on a regressor will use multioutput='uniform_average' from version 0.23 to keep consistent with metrics.r2_score. This will influence the score method of all the multioutput regressors (except for multioutput.MultiOutputRegressor). A sketch of the forthcoming behaviour is shown below. #13157 by Hanmin Qin.
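A minimal sketch of what the future default amounts to: calling metrics.r2_score directly with multioutput='uniform_average', which averages the per-output R2 scores with equal weights (the toy data is illustrative only):

>>> from sklearn.metrics import r2_score
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> # equal-weight average of the per-output R2 scores
>>> score = r2_score(y_true, y_pred, multioutput='uniform_average')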
sklearn.calibration¶

Enhancement Added support to bin the data passed into calibration.calibration_curve by quantiles instead of uniformly between 0 and 1 (see the sketch at the end of this section). #13086 by Scott Cole.

Enhancement Allow n-dimensional arrays as input for calibration.CalibratedClassifierCV. #13485 by William de Vazelhes.
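A small usage sketch of quantile binning, assuming it is exposed through the strategy parameter of calibration_curve (the toy arrays are illustrative only):

>>> import numpy as np
>>> from sklearn.calibration import calibration_curve
>>> y_true = np.array([0, 0, 0, 1, 1, 1])
>>> y_prob = np.array([0.1, 0.2, 0.3, 0.6, 0.8, 0.9])
>>> # with quantile binning, each of the 3 bins receives the same
>>> # number of samples instead of covering an equal-width interval
>>> prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=3,
...                                          strategy='quantile')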
sklearn.cluster¶

Major Feature A new clustering algorithm: cluster.OPTICS: an algorithm related to cluster.DBSCAN, that has hyperparameters easier to set and that scales better, by Shane, Adrin Jalali, Erich Schubert, Hanmin Qin, and Assia Benbihi.

Fix Fixed a bug where cluster.Birch could occasionally raise an AttributeError. #13651 by Joel Nothman.

Fix Fixed a bug in cluster.KMeans where empty clusters weren’t correctly relocated when using sample weights. #13486 by Jérémie du Boisberranger.

API Change The n_components_ attribute in cluster.AgglomerativeClustering and cluster.FeatureAgglomeration has been renamed to n_connected_components_. #13427 by Stephane Couvreur.

Enhancement cluster.AgglomerativeClustering and cluster.FeatureAgglomeration now accept a distance_threshold parameter which can be used to find the clusters instead of n_clusters. #9069 by Vathsala Achar and Adrin Jalali.
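A minimal sketch of the new distance_threshold parameter: the linkage tree is cut at the given distance, so n_clusters must be left as None and the number of clusters is discovered from the data (the toy points are illustrative only):

>>> import numpy as np
>>> from sklearn.cluster import AgglomerativeClustering
>>> X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
>>> model = AgglomerativeClustering(n_clusters=None, distance_threshold=5.0)
>>> model.fit(X).n_clusters_   # two well-separated groups remain
2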
sklearn.compose¶

API Change compose.ColumnTransformer is no longer an experimental feature. #13835 by Hanmin Qin.
sklearn.datasets¶

Fix Added support for 64-bit group IDs and pointers in SVMLight files. #10727 by Bryan K Woods.

Fix datasets.load_sample_images returns images with a deterministic order. #13250 by Thomas Fan.
sklearn.decomposition¶

Enhancement decomposition.KernelPCA now has deterministic output (resolved sign ambiguity in eigenvalue decomposition of the kernel matrix). #13241 by Aurélien Bellet.

Fix Fixed a bug in decomposition.KernelPCA, fit().transform() now produces the correct output (the same as fit_transform()) in case of non-removed zero eigenvalues (remove_zero_eig=False). fit_inverse_transform was also accelerated by using the same trick as fit_transform to compute the transform of X. #12143 by Sylvain Marié.

Fix Fixed a bug in decomposition.NMF where init = 'nndsvd', init = 'nndsvda', and init = 'nndsvdar' are allowed when n_components < n_features instead of n_components <= min(n_samples, n_features). #11650 by Hossein Pourbozorg and Zijie (ZJ) Poh.

API Change The default value of the init argument in decomposition.non_negative_factorization will change from random to None in version 0.23 to make it consistent with decomposition.NMF. A FutureWarning is raised when the default value is used. #12988 by Zijie (ZJ) Poh.
sklearn.discriminant_analysis¶

Enhancement discriminant_analysis.LinearDiscriminantAnalysis now preserves float32 and float64 dtypes. #8769 and #11000 by Thibault Sejourne.

Fix A ChangedBehaviourWarning is now raised when discriminant_analysis.LinearDiscriminantAnalysis is given as parameter n_components > min(n_features, n_classes - 1), and n_components is changed to min(n_features, n_classes - 1) if so. Previously the change was made, but silently. #11526 by William de Vazelhes.

Fix Fixed a bug in discriminant_analysis.LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly computed in the multiclass case. #6848, by Agamemnon Krasoulis and Guillaume Lemaitre.

Fix Fixed a bug in discriminant_analysis.LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly computed with the eigen solver. #11727, by Agamemnon Krasoulis.
sklearn.dummy¶

Fix Fixed a bug in dummy.DummyClassifier where the predict_proba method was returning an int32 array instead of float64 for the stratified strategy. #13266 by Christos Aridas.

Fix Fixed a bug in dummy.DummyClassifier where it was throwing a dimension mismatch error at prediction time if a column vector y with shape=(n, 1) was given at fit time. #13545 by Nick Sorros and Adrin Jalali.
sklearn.ensemble¶

Major Feature Add two new implementations of gradient boosting trees: ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor. The implementation of these estimators is inspired by LightGBM and can be orders of magnitude faster than ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier when the number of samples is larger than tens of thousands of samples. The API of these new estimators is slightly different, and some of the features from ensemble.GradientBoostingClassifier and ensemble.GradientBoostingRegressor are not yet supported.

These new estimators are experimental, which means that their results or their API might change without any deprecation cycle. To use them, you need to explicitly import enable_hist_gradient_boosting:

>>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa
>>> # now you can import normally from sklearn.ensemble
>>> from sklearn.ensemble import HistGradientBoostingClassifier
Note

Update: since version 1.0, these estimators are not experimental anymore and you don’t need to use from sklearn.experimental import enable_hist_gradient_boosting.

#12807 by Nicolas Hug.
Feature Add ensemble.VotingRegressor which provides an equivalent of ensemble.VotingClassifier for regression problems (see the usage sketch at the end of this section). #12513 by Ramil Nugmanov and Mohamed Ali Jamaoui.

Efficiency Make ensemble.IsolationForest prefer threads over processes when running with n_jobs > 1 as the underlying decision tree fit calls do release the GIL. This change reduces memory usage and communication overhead. #12543 by Isaac Storch and Olivier Grisel.

Efficiency Make ensemble.IsolationForest more memory efficient by avoiding keeping in memory each tree prediction. #13260 by Nicolas Goix.

Efficiency ensemble.IsolationForest now uses chunks of data at prediction step, thus capping the memory usage. #13283 by Nicolas Goix.

Efficiency sklearn.ensemble.GradientBoostingClassifier and sklearn.ensemble.GradientBoostingRegressor now keep the input y as float64 to avoid it being copied internally by trees. #13524 by Adrin Jalali.

Enhancement Minimized the validation of X in ensemble.AdaBoostClassifier and ensemble.AdaBoostRegressor. #13174 by Christos Aridas.

Enhancement ensemble.IsolationForest now exposes a warm_start parameter, allowing iterative addition of trees to an isolation forest. #13496 by Peter Marko.

Fix The values of feature_importances_ in all random forest based models (i.e. ensemble.RandomForestClassifier, ensemble.RandomForestRegressor, ensemble.ExtraTreesClassifier, ensemble.ExtraTreesRegressor, ensemble.RandomTreesEmbedding, ensemble.GradientBoostingClassifier, and ensemble.GradientBoostingRegressor) now:

sum up to 1

all the single node trees in feature importance calculation are ignored

in case all trees have only one single node (i.e. a root node), feature importances will be an array of all zeros.

#13636 and #13620 by Adrin Jalali.

Fix Fixed a bug in ensemble.GradientBoostingClassifier and ensemble.GradientBoostingRegressor, which didn’t support scikit-learn estimators as the initial estimator. Also added support for an initial estimator that does not support sample weights. #12436 by Jérémie du Boisberranger and #12983 by Nicolas Hug.

Fix Fixed the output of the average path length computed in ensemble.IsolationForest when the input is either 0, 1 or 2. #13251 by Albert Thomas and joshuakennethjones.

Fix Fixed a bug in ensemble.GradientBoostingClassifier where the gradients would be incorrectly computed in multiclass classification problems. #12715 by Nicolas Hug.

Fix Fixed a bug in ensemble.GradientBoostingClassifier where validation sets for early stopping were not sampled with stratification. #13164 by Nicolas Hug.

Fix Fixed a bug in ensemble.GradientBoostingClassifier where the default initial prediction of a multiclass classifier would predict the classes priors instead of the log of the priors. #12983 by Nicolas Hug.

Fix Fixed a bug in ensemble.RandomForestClassifier where the predict method would error for multiclass multioutput forests models if any targets were strings. #12834 by Elizabeth Sander.

Fix Fixed a bug in ensemble.gradient_boosting.LossFunction and ensemble.gradient_boosting.LeastSquaresError where the default value of learning_rate in update_terminal_regions is not consistent with the documentation and the caller functions. Note however that directly using these loss functions is deprecated. #6463 by movelikeriver.

Fix ensemble.partial_dependence (and consequently the new version sklearn.inspection.partial_dependence) now takes sample weights into account for the partial dependence computation when the gradient boosting model has been trained with sample weights. #13193 by Samuel O. Ronsin.

API Change ensemble.partial_dependence and ensemble.plot_partial_dependence are now deprecated in favor of inspection.partial_dependence and inspection.plot_partial_dependence. #12599 by Trevor Stephens and Nicolas Hug.

Fix ensemble.VotingClassifier and ensemble.VotingRegressor were failing during fit when one of the estimators was set to None and sample_weight was not None. #13779 by Guillaume Lemaitre.

API Change ensemble.VotingClassifier and ensemble.VotingRegressor accept 'drop' to disable an estimator in addition to None to be consistent with other estimators (i.e., pipeline.FeatureUnion and compose.ColumnTransformer). #13780 by Guillaume Lemaitre.
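A minimal usage sketch of the new ensemble.VotingRegressor, which averages the predictions of its constituent regressors (the toy data is illustrative only):

>>> from sklearn.ensemble import VotingRegressor
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.tree import DecisionTreeRegressor
>>> X = [[0], [1], [2], [3]]
>>> y = [0.0, 1.0, 2.0, 3.0]
>>> vr = VotingRegressor([('lr', LinearRegression()),
...                       ('dt', DecisionTreeRegressor())])
>>> preds = vr.fit(X, y).predict(X)  # mean of the two models' predictions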
sklearn.externals¶

API Change Deprecated externals.six since we have dropped support for Python 2.7. #12916 by Hanmin Qin.
sklearn.feature_extraction¶

Fix If input='file' or input='filename', and a callable is given as the analyzer, sklearn.feature_extraction.text.HashingVectorizer, sklearn.feature_extraction.text.TfidfVectorizer, and sklearn.feature_extraction.text.CountVectorizer now read the data from the file(s) and then pass it to the given analyzer, instead of passing the file name(s) or the file object(s) to the analyzer. #13641 by Adrin Jalali.
sklearn.impute¶

Major Feature Added impute.IterativeImputer, which is a strategy for imputing missing values by modeling each feature with missing values as a function of other features in a round-robin fashion. #8478 and #12177 by Sergey Feldman and Ben Lawson.

The API of IterativeImputer is experimental and subject to change without any deprecation cycle. To use it, you need to explicitly import enable_iterative_imputer:

>>> from sklearn.experimental import enable_iterative_imputer  # noqa
>>> # now you can import normally from sklearn.impute
>>> from sklearn.impute import IterativeImputer

Feature The impute.SimpleImputer and impute.IterativeImputer have a new parameter 'add_indicator', which simply stacks an impute.MissingIndicator transform into the output of the imputer’s transform. That allows a predictive estimator to account for missingness (see the sketch at the end of this section). #12583, #13601 by Danylo Baibak.

Fix In impute.MissingIndicator avoid implicit densification by raising an exception if input is sparse and the missing_values property is set to 0. #13240 by Bartosz Telenczuk.

Fix Fixed two bugs in impute.MissingIndicator. First, when X is sparse, all the non-zero non missing values used to become explicit False in the transformed data. Second, when features='missing-only', all features used to be kept if there were no missing values at all. #13562 by Jérémie du Boisberranger.
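A minimal sketch of add_indicator on SimpleImputer: the transform output gains one boolean indicator column per feature that had missing values (the toy matrix is illustrative only):

>>> import numpy as np
>>> from sklearn.impute import SimpleImputer
>>> X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 6.0]])
>>> imp = SimpleImputer(strategy='mean', add_indicator=True)
>>> Xt = imp.fit_transform(X)
>>> Xt.shape  # 2 imputed columns + 2 missingness indicator columns
(3, 4)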
sklearn.inspection¶

(new subpackage)

Feature Partial dependence plots (inspection.plot_partial_dependence) are now supported for any regressor or classifier (provided that they have a predict_proba method). #12599 by Trevor Stephens and Nicolas Hug.
sklearn.isotonic¶

Feature Allow different dtypes (such as float32) in isotonic.IsotonicRegression. #8769 by Vlad Niculae.
sklearn.linear_model¶

Enhancement linear_model.Ridge now preserves float32 and float64 dtypes. #8769 and #11000 by Guillaume Lemaitre, and Joan Massich.

Feature linear_model.LogisticRegression and linear_model.LogisticRegressionCV now support Elastic-Net penalty, with the ‘saga’ solver. #11646 by Nicolas Hug.

Feature Added linear_model.lars_path_gram, which is linear_model.lars_path in the sufficient stats mode, allowing users to compute linear_model.lars_path without providing X and y. #11699 by Kuai Yu.

Efficiency linear_model.make_dataset now preserves float32 and float64 dtypes, reducing memory consumption in stochastic gradient, SAG and SAGA solvers. #8769 and #11000 by Nelle Varoquaux, Arthur Imbert, Guillaume Lemaitre, and Joan Massich.

Enhancement linear_model.LogisticRegression now supports an unregularized objective when penalty='none' is passed. This is equivalent to setting C=np.inf with l2 regularization. Not supported by the liblinear solver (see the sketch at the end of this section). #12860 by Nicolas Hug.

Enhancement The sparse_cg solver in linear_model.Ridge now supports fitting the intercept (i.e. fit_intercept=True) when inputs are sparse. #13336 by Bartosz Telenczuk.

Enhancement The coordinate descent solver used in Lasso, ElasticNet, etc. now issues a ConvergenceWarning when it completes without meeting the desired tolerance. #11754 and #13397 by Brent Fagan and Adrin Jalali.

Fix Fixed a bug in linear_model.LogisticRegression and linear_model.LogisticRegressionCV with ‘saga’ solver, where the weights would not be correctly updated in some cases. #11646 by Tom Dupre la Tour.

Fix Fixed the posterior mean, posterior covariance and returned regularization parameters in linear_model.BayesianRidge. The posterior mean and the posterior covariance were not the ones computed with the last update of the regularization parameters and the returned regularization parameters were not the final ones. Also fixed the formula of the log marginal likelihood used to compute the score when compute_score=True. #12174 by Albert Thomas.

Fix Fixed a bug in linear_model.LassoLarsIC, where user input copy_X=False at instance creation would be overridden by the default parameter value copy_X=True in fit. #12972 by Lucio Fernandez-Arjona.

Fix Fixed a bug in linear_model.LinearRegression that was not returning the same coefficients and intercepts with fit_intercept=True in the sparse and dense case. #13279 by Alexandre Gramfort.

Fix Fixed a bug in linear_model.HuberRegressor that was broken when X was of dtype bool. #13328 by Alexandre Gramfort.

Fix Fixed a performance issue of saga and sag solvers when called in a joblib.Parallel setting with n_jobs > 1 and backend="threading", causing them to perform worse than in the sequential case. #13389 by Pierre Glaser.

Fix Fixed a bug in linear_model.stochastic_gradient.BaseSGDClassifier that was not deterministic when trained in a multi-class setting on several threads. #13422 by Clément Doumouro.

Fix Fixed a bug in linear_model.ridge_regression, linear_model.Ridge and linear_model.RidgeClassifier that caused an unhandled exception for arguments return_intercept=True and solver=auto (default) or any other solver different from sag. #13363 by Bartosz Telenczuk.

Fix linear_model.ridge_regression will now raise an exception if return_intercept=True and the solver is different from sag. Previously, only a warning was issued. #13363 by Bartosz Telenczuk.

Fix linear_model.ridge_regression will choose the sparse_cg solver for sparse inputs when solver=auto and sample_weight is provided (previously the cholesky solver was selected). #13363 by Bartosz Telenczuk.

API Change The use of linear_model.lars_path with X=None while passing Gram is deprecated in version 0.21 and will be removed in version 0.23. Use linear_model.lars_path_gram instead. #11699 by Kuai Yu.

API Change linear_model.logistic_regression_path is deprecated in version 0.21 and will be removed in version 0.23. #12821 by Nicolas Hug.

Fix linear_model.RidgeCV with leave-one-out cross-validation now correctly fits an intercept when fit_intercept=True and the design matrix is sparse. #13350 by Jérôme Dockès.
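A minimal sketch of the new unregularized fit, using the 0.21-era spelling penalty='none' (later releases replaced this spelling with penalty=None):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> # lbfgs supports penalty='none'; liblinear does not
>>> clf = LogisticRegression(penalty='none', solver='lbfgs',
...                          multi_class='auto', max_iter=1000).fit(X, y)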
sklearn.manifold¶

Efficiency Make manifold.trustworthiness use an inverted index instead of an np.where lookup to find the rank of neighbors in the input space. This improves efficiency in particular when computed with lots of neighbors and/or small datasets. #9907 by William de Vazelhes.
sklearn.metrics¶

Feature Added the metrics.max_error metric and a corresponding 'max_error' scorer for single output regression. #12232 by Krishna Sangeeth.

Feature Add metrics.multilabel_confusion_matrix, which calculates a confusion matrix with true positive, false positive, false negative and true negative counts for each class. This facilitates the calculation of set-wise metrics such as recall, specificity, fall out and miss rate (see the sketch at the end of this section). #11179 by Shangwu Yao and Joel Nothman.

Feature metrics.jaccard_score has been added to calculate the Jaccard coefficient as an evaluation metric for binary, multilabel and multiclass tasks, with an interface analogous to metrics.f1_score. #13151 by Gaurav Dhingra and Joel Nothman.

Feature Added metrics.pairwise.haversine_distances which can be accessed with metric='pairwise' through metrics.pairwise_distances and estimators. (Haversine distance was previously available for nearest neighbors calculation.) #12568 by Wei Xue, Emmanuel Arias and Joel Nothman.

Efficiency Faster metrics.pairwise_distances with n_jobs > 1 by using a thread-based backend, instead of process-based backends. #8216 by Pierre Glaser and Romuald Menuet.

Efficiency The pairwise manhattan distances with sparse input now use the BLAS shipped with scipy instead of the bundled BLAS. #12732 by Jérémie du Boisberranger.

Enhancement Use label accuracy instead of micro-average on metrics.classification_report to avoid confusion. micro-average is only shown for multi-label or multi-class with a subset of classes because it is otherwise identical to accuracy. #12334 by Emmanuel Arias, Joel Nothman and Andreas Müller.

Enhancement Added a beta parameter to metrics.homogeneity_completeness_v_measure and metrics.v_measure_score to configure the tradeoff between homogeneity and completeness. #13607 by Stephane Couvreur and Ivan Sanchez.

Fix The metric metrics.r2_score is degenerate with a single sample and now it returns NaN and raises exceptions.UndefinedMetricWarning. #12855 by Pawel Sendyk.

Fix Fixed a bug where metrics.brier_score_loss would sometimes return an incorrect result when there’s only one class in y_true. #13628 by Hanmin Qin.

Fix Fixed a bug in metrics.label_ranking_average_precision_score where sample_weight wasn’t taken into account for samples with degenerate labels. #13447 by Dan Ellis.

API Change The parameter labels in metrics.hamming_loss is deprecated in version 0.21 and will be removed in version 0.23. #10580 by Reshama Shaikh and Sandra Mitrovic.

Fix The function metrics.pairwise.euclidean_distances, and therefore several estimators with metric='euclidean', suffered from numerical precision issues with float32 features. Precision has been increased at the cost of a small drop of performance. #13554 by @Celelibi and Jérémie du Boisberranger.

API Change metrics.jaccard_similarity_score is deprecated in favour of the more consistent metrics.jaccard_score. The former behavior for binary and multiclass targets is broken. #13151 by Joel Nothman.
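A minimal sketch of metrics.multilabel_confusion_matrix on a multiclass problem: one 2x2 matrix of (tn, fp / fn, tp) counts per class (the toy labels are illustrative only):

>>> from sklearn.metrics import multilabel_confusion_matrix
>>> y_true = [0, 1, 2, 2]
>>> y_pred = [0, 2, 2, 1]
>>> mcm = multilabel_confusion_matrix(y_true, y_pred)
>>> mcm.shape  # one confusion matrix per class
(3, 2, 2)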
sklearn.mixture¶

Fix Fixed a bug in mixture.BaseMixture and therefore on estimators based on it, i.e. mixture.GaussianMixture and mixture.BayesianGaussianMixture, where fit_predict and fit.predict were not equivalent. #13142 by Jérémie du Boisberranger.
sklearn.model_selection¶

Feature Classes GridSearchCV and RandomizedSearchCV now allow for refit=callable to add flexibility in identifying the best estimator (see the sketch at the end of this section). See Balance model complexity and cross-validated score. #11354 by Wenhao Zhang, Joel Nothman and Adrin Jalali.

Enhancement Classes GridSearchCV, RandomizedSearchCV, and methods cross_val_score, cross_val_predict, cross_validate, now print train scores when return_train_scores is True and verbose > 2. For learning_curve, and validation_curve only the latter is required. #12613 and #12669 by Marc Torrellas.

Enhancement Some CV splitter classes and model_selection.train_test_split now raise ValueError when the resulting training set is empty. #12861 by Nicolas Hug.

Fix Fixed a bug where model_selection.StratifiedKFold shuffles each class’s samples with the same random_state, making shuffle=True ineffective. #13124 by Hanmin Qin.

Fix Added ability for model_selection.cross_val_predict to handle multi-label (and multioutput-multiclass) targets with predict_proba-type methods. #8773 by Stephen Hoover.

Fix Fixed an issue in cross_val_predict where method="predict_proba" always returned 0.0 when one of the classes was excluded in a cross-validation fold. #13366 by Guillaume Fournier.
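A minimal sketch of refit=callable: the callable receives the cv_results_ dict and returns the integer index of the candidate to refit (the selection rule here is a trivial placeholder):

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import SVC
>>> def select_best_index(cv_results):
...     # placeholder rule: simply pick the highest mean test score
...     return int(np.argmax(cv_results['mean_test_score']))
>>> X, y = load_iris(return_X_y=True)
>>> gs = GridSearchCV(SVC(gamma='scale'), {'C': [0.1, 1, 10]}, cv=3,
...                   refit=select_best_index).fit(X, y)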
sklearn.multiclass¶

Fix Fixed an issue in multiclass.OneVsOneClassifier.decision_function where the decision_function value of a given sample was different depending on whether the decision_function was evaluated on the sample alone or on a batch containing this same sample due to the scaling used in decision_function. #10440 by Jonathan Ohayon.
sklearn.multioutput¶

Fix Fixed a bug in multioutput.MultiOutputClassifier where the predict_proba method incorrectly checked for the predict_proba attribute in the estimator object. #12222 by Rebekah Kim.
sklearn.neighbors¶

Major Feature Added neighbors.NeighborhoodComponentsAnalysis for metric learning, which implements the Neighborhood Components Analysis algorithm. #10058 by William de Vazelhes and John Chiotellis.

API Change Methods in neighbors.NearestNeighbors: kneighbors, radius_neighbors, kneighbors_graph, radius_neighbors_graph now raise NotFittedError, rather than AttributeError, when called before fit. #12279 by Krishna Sangeeth.
sklearn.neural_network¶

Fix Fixed a bug in neural_network.MLPClassifier and neural_network.MLPRegressor where the option shuffle=False was being ignored. #12582 by Sam Waterbury.

Fix Fixed a bug in neural_network.MLPClassifier where validation sets for early stopping were not sampled with stratification. In the multilabel case however, splits are still not stratified. #13164 by Nicolas Hug.
sklearn.pipeline¶

Feature pipeline.Pipeline can now use indexing notation (e.g. my_pipeline[0:-1]) to extract a subsequence of steps as another Pipeline instance. A Pipeline can also be indexed directly to extract a particular step (e.g. my_pipeline['svc']), rather than accessing named_steps (see the sketch at the end of this section). #2568 by Joel Nothman.

Feature Added optional parameter verbose in pipeline.Pipeline, compose.ColumnTransformer and pipeline.FeatureUnion and corresponding make_ helpers for showing progress and timing of each step. #11364 by Baze Petrushev, Karan Desai, Joel Nothman, and Thomas Fan.

Enhancement pipeline.Pipeline now supports using 'passthrough' as a transformer, with the same effect as None. #11144 by Thomas Fan.

Enhancement pipeline.Pipeline implements __len__ and therefore len(pipeline) returns the number of steps in the pipeline. #13439 by Lakshya KD.
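A minimal sketch of the new indexing behaviour (the step names are illustrative):

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> pipe = Pipeline([('scale', StandardScaler()), ('svc', SVC(gamma='scale'))])
>>> sub = pipe[0:-1]   # a new Pipeline containing every step but the last
>>> clf = pipe['svc']  # a single step, without going through named_steps
>>> len(pipe)
2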
sklearn.preprocessing¶

Feature preprocessing.OneHotEncoder now supports dropping one feature per category with a new drop parameter (see the sketch at the end of this section). #12908 by Drew Johnston.

Efficiency preprocessing.OneHotEncoder and preprocessing.OrdinalEncoder now handle pandas DataFrames more efficiently. #13253 by @maikia.

Efficiency Make preprocessing.MultiLabelBinarizer cache class mappings instead of calculating them every time on the fly. #12116 by Ekaterina Krivich and Joel Nothman.

Efficiency preprocessing.PolynomialFeatures now supports compressed sparse row (CSR) matrices as input for degrees 2 and 3. This is typically much faster than the dense case as it scales with matrix density and expansion degree (on the order of density^degree), and is much, much faster than the compressed sparse column (CSC) case. #12197 by Andrew Nystrom.

Efficiency Speed improvement in preprocessing.PolynomialFeatures, in the dense case. Also added a new parameter order which controls output order for further speed performances. #12251 by Tom Dupre la Tour.

Fix Fixed the calculation overflow when using a float16 dtype with preprocessing.StandardScaler. #13007 by Raffaello Baluyot.

Fix Fixed a bug in preprocessing.QuantileTransformer and preprocessing.quantile_transform to force n_quantiles to be at most equal to n_samples. Values of n_quantiles larger than n_samples were either useless or resulted in a wrong approximation of the cumulative distribution function estimator. #13333 by Albert Thomas.

API Change The default value of copy in preprocessing.quantile_transform will change from False to True in 0.23 in order to make it more consistent with the default copy values of other functions in sklearn.preprocessing and prevent unexpected side effects by modifying the value of X inplace. #13459 by Hunter McGushion.
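A minimal sketch of the new drop parameter, here dropping the first category of each feature (the feature name 'letter' is illustrative; get_feature_names is the 0.21-era spelling):

>>> import numpy as np
>>> from sklearn.preprocessing import OneHotEncoder
>>> X = np.array([['a'], ['b'], ['a']])
>>> enc = OneHotEncoder(drop='first').fit(X)
>>> enc.get_feature_names(['letter'])  # only 'b' remains after the drop
array(['letter_b'], dtype=object)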
sklearn.svm¶

Fix Fixed an issue in svm.SVC.decision_function when decision_function_shape='ovr'. The decision_function value of a given sample was different depending on whether the decision_function was evaluated on the sample alone or on a batch containing this same sample due to the scaling used in decision_function. #10440 by Jonathan Ohayon.
sklearn.tree¶

Feature Decision Trees can now be plotted with matplotlib using tree.plot_tree without relying on the dot library, removing a hard-to-install dependency. #8508 by Andreas Müller.

Feature Decision Trees can now be exported in a human readable textual format using tree.export_text (see the sketch at the end of this section). #6261 by Giuseppe Vettigli.

Feature get_n_leaves() and get_depth() have been added to tree.BaseDecisionTree and consequently all estimators based on it, including tree.DecisionTreeClassifier, tree.DecisionTreeRegressor, tree.ExtraTreeClassifier, and tree.ExtraTreeRegressor. #12300 by Adrin Jalali.

Fix Trees and forests did not previously predict multi-output classification targets with string labels, despite accepting them in fit. #11458 by Mitar Milutinovic.

Fix Fixed an issue with tree.BaseDecisionTree and consequently all estimators based on it, including tree.DecisionTreeClassifier, tree.DecisionTreeRegressor, tree.ExtraTreeClassifier, and tree.ExtraTreeRegressor, where they used to exceed the given max_depth by 1 while expanding the tree if max_leaf_nodes and max_depth were both specified by the user. Please note that this also affects all ensemble methods using decision trees. #12344 by Adrin Jalali.
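A minimal sketch of the new textual export (the dataset choice is illustrative):

>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier, export_text
>>> iris = load_iris()
>>> tree = DecisionTreeClassifier(max_depth=2, random_state=0)
>>> tree.fit(iris.data, iris.target)
>>> report = export_text(tree, feature_names=iris.feature_names)
>>> print(report)  # an indented, human readable rule listing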
sklearn.utils¶

Feature utils.resample now accepts a stratify parameter for sampling according to class distributions (see the sketch at the end of this section). #13549 by Nicolas Hug.

API Change Deprecated the warn_on_dtype parameter from utils.check_array and utils.check_X_y. Added explicit warning for dtype conversion in check_pairwise_arrays if the metric being passed is a pairwise boolean metric. #13382 by Prathmesh Savale.
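A minimal sketch of stratified resampling (the toy data is illustrative only):

>>> from sklearn.utils import resample
>>> X = [[1.], [2.], [3.], [4.], [5.], [6.]]
>>> y = [0, 0, 0, 0, 1, 1]
>>> # the 2:1 class ratio of y is preserved in the resampled subset
>>> X_r, y_r = resample(X, y, n_samples=3, random_state=0, stratify=y)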
Multiple modules¶

Major Feature The __repr__() method of all estimators (used when calling print(estimator)) has been entirely re-written, building on Python’s pretty printing standard library. All parameters are printed by default, but this can be altered with the print_changed_only option in sklearn.set_config (see the sketch at the end of this section). #11705 by Nicolas Hug.

Major Feature Add estimator tags: these are annotations of estimators that allow programmatic inspection of their capabilities, such as sparse matrix support, supported output types and supported methods. Estimator tags also determine the tests that are run on an estimator when check_estimator is called. Read more in the User Guide. #8022 by Andreas Müller.

Efficiency Memory copies are avoided when casting arrays to a different dtype in multiple estimators. #11973 by Roman Yurchak.

Fix Fixed a bug in the implementation of the our_rand_r helper function that was not behaving consistently across platforms. #13422 by Madhura Parikh and Clément Doumouro.
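A minimal sketch of the print_changed_only option (output shown as of this release):

>>> from sklearn import set_config
>>> from sklearn.linear_model import LogisticRegression
>>> set_config(print_changed_only=True)
>>> print(LogisticRegression(C=10.0))  # only non-default parameters appear
LogisticRegression(C=10.0)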
Miscellaneous¶
Enhancement Joblib is no longer vendored in scikit-learn, and becomes a dependency. The minimal supported version is joblib 0.11; however, using version >= 0.13 is strongly recommended. #13531 by Roman Yurchak.
Changes to estimator checks¶
These changes mostly affect library developers.
Add check_fit_idempotent to check_estimator, which checks that when fit is called twice with the same data, the output of predict, predict_proba, transform, and decision_function does not change. #12328 by Nicolas Hug.

Many checks can now be disabled or configured with Estimator Tags. #8022 by Andreas Müller.
Code and Documentation Contributors¶
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.20, including:
adanhawth, Aditya Vyas, Adrin Jalali, Agamemnon Krasoulis, Albert Thomas, Alberto Torres, Alexandre Gramfort, amourav, Andrea Navarrete, Andreas Mueller, Andrew Nystrom, assiaben, Aurélien Bellet, Bartosz Michałowski, Bartosz Telenczuk, bauks, BenjaStudio, bertrandhaut, Bharat Raghunathan, brentfagan, Bryan Woods, Cat Chenal, Cheuk Ting Ho, Chris Choe, Christos Aridas, Clément Doumouro, Cole Smith, Connossor, Corey Levinson, Dan Ellis, Dan Stine, Danylo Baibak, daten-kieker, Denis Kataev, Didi Bar-Zev, Dillon Gardner, Dmitry Mottl, Dmitry Vukolov, Dougal J. Sutherland, Dowon, drewmjohnston, Dror Atariah, Edward J Brown, Ekaterina Krivich, Elizabeth Sander, Emmanuel Arias, Eric Chang, Eric Larson, Erich Schubert, esvhd, Falak, Feda Curic, Federico Caselli, Frank Hoang, Fibinse Xavier, Finn O’Shea, Gabriel Marzinotto, Gabriel Vacaliuc, Gabriele Calvo, Gael Varoquaux, GauravAhlawat, Giuseppe Vettigli, Greg Gandenberger, Guillaume Fournier, Guillaume Lemaitre, Gustavo De Mari Pereira, Hanmin Qin, haroldfox, hhu-luqi, Hunter McGushion, Ian Sanders, JackLangerman, Jacopo Notarstefano, jakirkham, James Bourbeau, Jan Koch, Jan S, janvanrijn, Jarrod Millman, jdethurens, jeremiedbb, JF, joaak, Joan Massich, Joel Nothman, Jonathan Ohayon, Joris Van den Bossche, josephsalmon, Jérémie Méhault, Katrin Leinweber, ken, kms15, Koen, Kossori Aruku, Krishna Sangeeth, Kuai Yu, Kulbear, Kushal Chauhan, Kyle Jackson, Lakshya KD, Leandro Hermida, Lee Yi Jie Joel, Lily Xiong, Lisa Sarah Thomas, Loic Esteve, louib, luk-f-a, maikia, mail-liam, Manimaran, Manuel López-Ibáñez, Marc Torrellas, Marco Gaido, Marco Gorelli, MarcoGorelli, marineLM, Mark Hannel, Martin Gubri, Masstran, mathurinm, Matthew Roeschke, Max Copeland, melsyt, mferrari3, Mickaël Schoentgen, Ming Li, Mitar, Mohammad Aftab, Mohammed AbdelAal, Mohammed Ibraheem, Muhammad Hassaan Rafique, mwestt, Naoya Iijima, Nicholas Smith, Nicolas Goix, Nicolas Hug, Nikolay Shebanov, Oleksandr Pavlyk, Oliver Rausch, Olivier Grisel, Orestis, Osman, Owen Flanagan, Paul Paczuski, Pavel Soriano, pavlos kallis, Pawel Sendyk, peay, Peter, Peter Cock, Peter Hausamann, Peter Marko, Pierre Glaser, pierretallotte, Pim de Haan, Piotr Szymański, Prabakaran Kumaresshan, Pradeep Reddy Raamana, Prathmesh Savale, Pulkit Maloo, Quentin Batista, Radostin Stoyanov, Raf Baluyot, Rajdeep Dua, Ramil Nugmanov, Raúl García Calvo, Rebekah Kim, Reshama Shaikh, Rohan Lekhwani, Rohan Singh, Rohan Varma, Rohit Kapoor, Roman Feldbauer, Roman Yurchak, Romuald M, Roopam Sharma, Ryan, Rüdiger Busche, Sam Waterbury, Samuel O. Ronsin, SandroCasagrande, Scott Cole, Scott Lowe, Sebastian Raschka, Shangwu Yao, Shivam Kotwalia, Shiyu Duan, smarie, Sriharsha Hatwar, Stephen Hoover, Stephen Tierney, Stéphane Couvreur, surgan12, SylvainLan, TakingItCasual, Tashay Green, thibsej, Thomas Fan, Thomas J Fan, Thomas Moreau, Tom Dupré la Tour, Tommy, Tulio Casagrande, Umar Farouk Umar, Utkarsh Upadhyay, Vinayak Mehta, Vishaal Kapoor, Vivek Kumar, Vlad Niculae, vqean3, Wenhao Zhang, William de Vazelhes, xhan, Xing Han Lu, xinyuliu12, Yaroslav Halchenko, Zach Griffith, Zach Miller, Zayd Hammoudeh, Zhuyi Xue, Zijie (ZJ) Poh, ^__^