Version 0.22.2.post1¶
March 3 2020
The 0.22.2.post1 release includes a packaging fix for the source distribution but the content of the packages is otherwise identical to the content of the wheels with the 0.22.2 version (without the .post1 suffix). Both contain the following changes.
Fix Fixed a bug in metrics.plot_roc_curve where the name of the estimator was passed in the metrics.RocCurveDisplay instead of the parameter name. It resulted in a different plot when calling metrics.RocCurveDisplay.plot for the subsequent times. #16500 by Guillaume Lemaitre.
Fix Fixed a bug in metrics.plot_precision_recall_curve where the name of the estimator was passed in the metrics.PrecisionRecallDisplay instead of the parameter name. It resulted in a different plot when calling metrics.PrecisionRecallDisplay.plot for the subsequent times. #16505 by Guillaume Lemaitre.
Version 0.22.1¶
January 2 2020
This is a bug-fix release to primarily resolve some packaging issues in version 0.22.0. It also includes minor documentation improvements and some bug fixes.
Fix inspection.permutation_importance will return the same importances when a random_state is given for both n_jobs=1 and n_jobs>1, with both shared-memory (thread-based) and isolated-memory (process-based) backends. It also avoids casting the data to object dtype and avoids a read-only error on large dataframes with n_jobs>1, as reported in #15810. Follow-up of #15898 by Shivam Gargsya. #15933 by Guillaume Lemaitre and Olivier Grisel.
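A minimal sketch of the reproducibility guarantee described above (the model and data are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

# With a fixed random_state, the importances are identical whether the
# permutations run serially or in parallel.
r1 = permutation_importance(clf, X, y, n_repeats=5, random_state=0, n_jobs=1)
r2 = permutation_importance(clf, X, y, n_repeats=5, random_state=0, n_jobs=2)
```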
Fix utils.check_is_fitted accepts back an explicit attributes argument to check for specific attributes as explicit markers of a fitted estimator. When no explicit attributes are provided, only the attributes that end with an underscore and do not start with a double underscore are used as "fitted" markers. The all_or_any argument is also no longer deprecated. This change is made to restore some backward compatibility with the behavior of this utility in version 0.21. #15947 by Thomas Fan.
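A short sketch of both behaviors (the estimator and attribute names are illustrative):

```python
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)

# Default behavior: attributes ending in "_" mark the estimator as fitted.
check_is_fitted(clf)

# The restored `attributes` argument checks for explicit markers instead.
check_is_fitted(clf, attributes=["coef_", "intercept_"])

# An unfitted estimator raises NotFittedError.
try:
    check_is_fitted(LogisticRegression())
    raised = False
except NotFittedError:
    raised = True
```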
Version 0.22.0¶
December 3 2019
For a short description of the main highlights of the release, please refer to Release Highlights for scikit-learn 0.22.
Legend for changelogs¶
Major Feature : something big that you couldn’t do before.
Feature : something that you couldn’t do before.
Efficiency : an existing feature now may not require as much computation or memory.
Enhancement : a miscellaneous minor improvement.
Fix : something that previously didn’t work as documented – or according to reasonable expectations – should now work.
API Change : you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
Clear definition of the public API¶
Scikit-learn has a public API and a private API.
We do our best not to break the public API, and to only introduce backward-compatible changes that do not require any user action. However, in cases where that’s not possible, any change to the public API is subject to a deprecation cycle of two minor versions. The private API isn’t publicly documented and isn’t subject to any deprecation cycle, so users should not rely on its stability.
A function or object is public if it is documented in the API Reference and if it can be
imported with an import path without leading underscores. For example
sklearn.pipeline.make_pipeline is public, while
sklearn.pipeline._name_estimators is private.
sklearn.ensemble._gb.BaseEnsemble is private too because the whole
module is private.
Up to 0.22, some tools were de-facto public (no leading underscore), while they should have been private in the first place. In version 0.22, these tools have been made properly private, and the public API space has been cleaned. In addition, importing from most sub-modules is now deprecated: you should for example use from sklearn.cluster import Birch instead of from sklearn.cluster.birch import Birch (in practice, birch.py has been moved to _birch.py).
All the tools in the public API should be documented in the API Reference. If you find a public tool (without leading underscore) that isn’t in the API reference, that means it should either be private or documented. Please let us know by opening an issue!
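A minimal illustration of the public import path, reusing the Birch example from above (the toy data is illustrative):

```python
import numpy as np
# Public, documented path; the sub-module path sklearn.cluster.birch
# is deprecated as of 0.22.
from sklearn.cluster import Birch

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = Birch(n_clusters=2).fit_predict(X)
```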
FutureWarning from now on¶
When deprecating a feature, previous versions of scikit-learn used to raise a DeprecationWarning. Since DeprecationWarnings aren’t shown by default by Python, scikit-learn needed to resort to a custom warning filter to always show the warnings. That filter would sometimes interfere with users’ custom warning filters.
Starting from version 0.22, scikit-learn will show FutureWarnings for deprecations, as recommended by the Python documentation. FutureWarnings are always shown by default by Python, so the custom filter has been removed and scikit-learn no longer interferes with user filters. #15080 by Nicolas Hug.
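A standard-library illustration of why this matters: FutureWarning passes Python's default warning filters, whereas DeprecationWarning is hidden outside `__main__` (the warning call here stands in for a deprecated scikit-learn API):

```python
import warnings

def deprecated_call():
    # Stand-in for a deprecated scikit-learn API.
    warnings.warn("this behavior will change", FutureWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")  # Python's default disposition
    deprecated_call()
```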
Changed models¶
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
decomposition.SparsePCA where normalize_components has no effect due to deprecation.
impute.IterativeImputer when X has features with no missing values. Feature
linear_model.Ridge when X is sparse. Fix
model_selection.StratifiedKFold and any use of cv=int with a classifier. Fix
cross_decomposition.CCA when using scipy >= 1.3. Fix
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Enhancement cluster.MeanShift now accepts a max_iter with a default value of 300 instead of always using the default 300. It also now exposes an n_iter_ indicating the maximum number of iterations performed on each seed. #15120 by Adrin Jalali.
compose.ColumnTransformer now requires the number of features to be consistent between fit and transform. A FutureWarning is raised now, and this will raise an error in 0.24. If the number of features isn’t consistent and negative indexing is used, an error is raised. #14544 by Adrin Jalali.
decomposition.KernelPCA now properly checks the eigenvalues found by the solver for numerical or conditioning issues. This ensures consistency of results across solvers (different choices for eigen_solver), including approximate solvers such as 'lobpcg' (see #12068). #12145 by Sylvain Marié.
Fix Fixed a bug where cross_decomposition.PLSRegression was raising an error when fitted with a target matrix Y in which the first column was constant. #13609 by Camila Williamson.
decomposition.MiniBatchDictionaryLearning now takes a transform_max_iter parameter and passes it to either decomposition.dict_learning or decomposition.sparse_encode. #12650 by Adrin Jalali.
decomposition.IncrementalPCA now accepts sparse matrices as input, converting them to dense in batches, thereby avoiding the need to store the entire dense matrix at once. #13960 by Scott Gigante.
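A small sketch of the batched sparse input described above (shapes and density are illustrative):

```python
from scipy import sparse
from sklearn.decomposition import IncrementalPCA

# Sparse rows are densified one batch of `batch_size` rows at a time,
# so the full dense matrix is never materialized.
X = sparse.random(100, 20, density=0.1, random_state=0, format="csr")
Xt = IncrementalPCA(n_components=5, batch_size=25).fit_transform(X)
```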
API Change The default value of the strategy parameter of dummy.DummyClassifier will change from 'stratified' in version 0.22 to 'prior' in 0.24. A FutureWarning is raised when the default value is used. #15382 by Thomas Fan.
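Passing the strategy explicitly side-steps the changing default (toy data for illustration):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((6, 1))
y = np.array([0, 0, 0, 0, 1, 1])

# 'prior' always predicts the class with the highest training frequency.
clf = DummyClassifier(strategy="prior").fit(X, y)
pred = clf.predict(X)
```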
Major Feature Added ensemble.StackingClassifier and ensemble.StackingRegressor to stack predictors using a final classifier or regressor. #11047 by Guillaume Lemaitre and Caio Oliveira and #15138 by Jon Cusick.
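A minimal StackingRegressor sketch (estimators and data are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
est = StackingRegressor(
    estimators=[("ridge", Ridge()),
                ("rf", RandomForestRegressor(n_estimators=10, random_state=0))],
    final_estimator=Ridge(),  # combines the cross-validated predictions
)
est.fit(X, y)
```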
Feature ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor now natively support dense data with missing values both for training and predicting. They also support infinite values. #13911 and #14406 by Nicolas Hug, Adrin Jalali and Olivier Grisel.
Enhancement For ensemble.HistGradientBoostingClassifier, the training loss or score is now monitored on a class-wise stratified subsample to preserve the class balance of the original training set. #14194 by Johann Faouzi.
Note that pickles from 0.21 will not work in 0.22.
Enhancement Addition of a max_samples argument allows limiting the size of the bootstrap samples to be less than the size of the dataset. Added to ensemble.RandomForestClassifier, ensemble.RandomForestRegressor, ensemble.ExtraTreesClassifier, and ensemble.ExtraTreesRegressor. #14682 by Matt Hancock and #5963 by Pablo Duboue.
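A short sketch of max_samples (the dataset and forest size are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, random_state=0)
# Each tree trains on a bootstrap sample of at most 50 rows; max_samples
# may also be a float, interpreted as a fraction of the dataset.
clf = RandomForestClassifier(n_estimators=20, max_samples=50,
                             random_state=0).fit(X, y)
```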
ensemble.VotingRegressor now correctly maps named_estimators_ to dropped estimators. Previously, the mapping was incorrect whenever one of the estimators was dropped. #15375 by Thomas Fan.
Fix utils.estimator_checks.check_estimator is now run by default on ensemble.VotingRegressor. This resolves issues regarding shape consistency during predict, which was failing when the underlying estimators did not output consistent array dimensions. Note that it should be replaced by refactoring the common tests in the future. #14305 by Guillaume Lemaitre.
Fix Stacking and Voting estimators now ensure that their underlying estimators are either all classifiers or all regressors. ensemble.VotingRegressor now raises consistent error messages. #15084 by Guillaume Lemaitre.
API Change presort is now deprecated in ensemble.GradientBoostingRegressor, and the parameter has no effect. Users are recommended to use ensemble.HistGradientBoostingRegressor instead. #14907 by Adrin Jalali.
Enhancement A warning will now be raised if a parameter choice means that another parameter will be unused on calling the fit() method for
feature_extraction.text.TfidfVectorizer. #14602 by Gaurav Chawla.
Fix Fixed a bug that caused feature_extraction.DictVectorizer to raise an OverflowError during the transform operation when producing a scipy.sparse matrix on large input data. #15463 by Norvan Sahiner.
Enhancement Updated the following feature_selection estimators to allow NaN/Inf values in transform and fit: feature_selection.RFE, feature_selection.RFECV, feature_selection.SelectFromModel, and feature_selection.VarianceThreshold. Note that if the underlying estimator of the feature selector does not allow NaN/Inf then it will still error, but the feature selectors themselves no longer enforce this restriction unnecessarily. #11635 by Alec Peters.
Fix Fixed a bug where feature_selection.VarianceThreshold with threshold=0 did not remove constant features due to numerical instability, by using range rather than variance in this case. #13704 by Roddy MacSween.
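A tiny sketch of the fixed behavior (the matrix is illustrative):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[1.0, 0.1],
              [1.0, 0.2],
              [1.0, 0.3]])
# The constant first column is removed reliably with threshold=0.
Xt = VarianceThreshold(threshold=0).fit_transform(X)
```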
Feature Gaussian process models on structured data: gaussian_process.GaussianProcessRegressor and gaussian_process.GaussianProcessClassifier can now accept a list of generic objects (e.g. strings, trees, graphs, etc.) as the X argument to their training/prediction methods. A user-defined kernel should be provided for computing the kernel matrix among the generic objects, and should inherit from gaussian_process.kernels.GenericKernelMixin to notify the GPR/GPC model that it handles non-vectorial samples. #15557 by Yu-Hang Tang.
gaussian_process.GaussianProcessRegressor.log_marginal_likelihood now accepts a clone_kernel=True keyword argument. When set to False, the kernel attribute is modified in place, which may result in a performance improvement. #14378 by Masashi Shibata.
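A small sketch of the clone_kernel argument (the data and kernel choice are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

X = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
gpr = GaussianProcessRegressor(random_state=0).fit(X, y)

# clone_kernel=False avoids copying the kernel on every evaluation; the
# kernel attribute may be modified in place, in exchange for speed.
lml = gpr.log_marginal_likelihood(gpr.kernel_.theta, clone_kernel=False)
```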
API Change From version 0.24 gaussian_process.kernels.Kernel.get_params will raise an AttributeError rather than return None for parameters that are in the estimator’s constructor but not stored as attributes on the instance. #14464 by Joel Nothman.
Feature impute.IterativeImputer has a new skip_compute flag that is False by default, which, when True, will skip computation on features that have no missing values during the fit phase. #13773 by Sergey Feldman.
inspection.partial_dependence and inspection.plot_partial_dependence now support the fast ‘recursion’ method for ensemble.HistGradientBoostingRegressor. #13769 by Nicolas Hug.
inspection.partial_dependence accepts pandas DataFrame and compose.ColumnTransformer. In addition, inspection.plot_partial_dependence will use the column names by default when a dataframe is passed. #14028 and #15429 by Guillaume Lemaitre.
linear_model.BayesianRidge now accepts hyperparameters alpha_init and lambda_init which can be used to set the initial values of the maximization procedure in fit. #13618 by Yoshihiro Uchida.
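A short sketch of the initialization hyperparameters (the data and initial values are illustrative):

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.randn(50)

# Initial values for the evidence maximization performed in fit.
reg = BayesianRidge(alpha_init=1.0, lambda_init=1e-3).fit(X, y)
```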
linear_model.Ridge now correctly fits an intercept when fit_intercept=True, because the default solver in this configuration has changed to sparse_cg, which can fit an intercept with sparse data. #13995 by Jérôme Dockès.
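A minimal sketch of an intercept fitted on sparse data (the data is illustrative, with a target centered around 3):

```python
import numpy as np
from scipy import sparse
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = sparse.random(100, 5, density=0.5, random_state=0, format="csr")
y = 3.0 + 0.1 * rng.randn(100)

# With sparse X, the default solver can now fit a non-zero intercept.
reg = Ridge(fit_intercept=True).fit(X, y)
```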
Feature Exposed the n_jobs parameter in manifold.TSNE for multi-core calculation of the neighbors graph. This parameter has no impact when method="exact". #15082 by Roman Yurchak.
Fix Fixed a bug where manifold.spectral_embedding (and therefore cluster.SpectralClustering) computed wrong eigenvalues with eigen_solver='amg' when n_samples < 5 * n_components. #14647 by Andreas Müller.
Fix Fixed a bug in manifold.spectral_embedding where eigen_solver="amg" would sometimes result in a LinAlgError. #13393 by Andrew Knyazev and #13707 by Scott White.
Feature Added a new parameter zero_division to multiple classification metrics: precision_score, recall_score, f1_score, fbeta_score, precision_recall_fscore_support, and classification_report. This allows setting the returned value for ill-defined metrics. #14900 by Marc Torrellas Socastro.
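A tiny sketch of the zero_division behavior (the labels are illustrative):

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 0, 0]
y_pred = [0, 0, 0, 0]

# No positive predictions: precision is ill-defined, and zero_division
# selects the value returned instead of emitting a warning.
p0 = precision_score(y_true, y_pred, zero_division=0)
p1 = precision_score(y_true, y_pred, zero_division=1)
```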
Feature Added multiclass support to metrics.roc_auc_score with corresponding scorers 'roc_auc_ovr', 'roc_auc_ovo', 'roc_auc_ovr_weighted', and 'roc_auc_ovo_weighted'. #12789 and #15274 by Kathy Chen, Mohamed Maskani, and Thomas Fan.
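A brief sketch of the multiclass AUC (iris and logistic regression are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)

# Multiclass AUC with one-vs-one and one-vs-rest averaging:
auc_ovo = roc_auc_score(y, proba, multi_class="ovo")
auc_ovr = roc_auc_score(y, proba, multi_class="ovr")
```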
Feature Added metrics.mean_tweedie_deviance measuring the Tweedie deviance for a given power parameter. Also added mean Poisson deviance metrics.mean_poisson_deviance and mean Gamma deviance metrics.mean_gamma_deviance that are special cases of the Tweedie deviance for power=1 and power=2 respectively. #13938 by Christian Lorentzen and Roman Yurchak.
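A small sketch of the special cases (the values are illustrative):

```python
from sklearn.metrics import (mean_gamma_deviance, mean_poisson_deviance,
                             mean_tweedie_deviance)

y_true = [2.0, 1.0, 3.0, 2.0]
y_pred = [1.5, 1.0, 2.5, 2.0]

# power=1 reduces to the Poisson deviance, power=2 to the Gamma deviance.
d_poisson = mean_tweedie_deviance(y_true, y_pred, power=1)
d_gamma = mean_tweedie_deviance(y_true, y_pred, power=2)
```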
model_selection.learning_curve now accepts parameter return_times which can be used to retrieve computation times in order to plot model scalability (see learning_curve example). #13938 by Hadrien Reboul.
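A short sketch of return_times (the estimator and cv setup are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=200, random_state=0)
sizes, train_scores, test_scores, fit_times, score_times = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=3, return_times=True)
# fit_times and score_times have shape (n_sizes, n_cv_folds).
```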
model_selection.RandomizedSearchCV now only contains unfitted estimators. This potentially saves a lot of memory since the state of the estimators isn’t stored. #15096 by Andreas Müller.
Major Feature Added neighbors.KNeighborsTransformer and neighbors.RadiusNeighborsTransformer, which transform an input dataset into a sparse neighbors graph. They give finer control on nearest neighbors computations and enable easy pipeline caching for multiple uses. #10482 by Tom Dupre la Tour.
neighbors.LocalOutlierFactor now accepts a precomputed sparse neighbors graph as input. #10482 by Tom Dupre la Tour and Kumar Ashutosh.
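The two entries above combine naturally; a minimal sketch chaining a neighbors transformer into LocalOutlierFactor (data and neighbor counts are illustrative; the transformer should compute at least as many neighbors as the downstream estimator needs):

```python
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsTransformer, LocalOutlierFactor
from sklearn.pipeline import make_pipeline

X, _ = make_blobs(n_samples=50, random_state=0)
# The transformer emits a sparse distance graph that the estimator
# consumes directly via metric="precomputed".
pipe = make_pipeline(
    KNeighborsTransformer(n_neighbors=10, mode="distance"),
    LocalOutlierFactor(n_neighbors=5, metric="precomputed"),
)
labels = pipe.fit_predict(X)  # 1 for inliers, -1 for outliers
```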
neighbors.RadiusNeighborsClassifier now supports predicting probabilities by using predict_proba and supports more outlier_label options: ‘most_frequent’, or different outlier_labels for multi-outputs. #9597 by Wenbo Zhao.
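A small sketch of both additions (the 1-D data and radius are illustrative):

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier

X = np.array([[0.0], [0.1], [0.2], [1.0]])
y = np.array([0, 0, 0, 1])
clf = RadiusNeighborsClassifier(radius=0.3,
                                outlier_label="most_frequent").fit(X, y)

proba = clf.predict_proba(np.array([[0.05], [5.0]]))
# [5.0] has no neighbors within the radius, so it receives the most
# frequent training label instead of raising an error.
pred = clf.predict(np.array([[5.0]]))
```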
Enhancement Avoid unnecessary data copy when fitting preprocessors such as preprocessing.QuantileTransformer, which results in a slight performance improvement. #13987 by Roman Yurchak.
model_selection.RandomizedSearchCV now supports the _pairwise property, which prevents an error during cross-validation for estimators with pairwise inputs (such as neighbors.KNeighborsClassifier when metric is set to ‘precomputed’). #13925 by Isaac S. Robson and #15524 by Xun Tang.
svm.SVC and svm.NuSVC now accept a break_ties parameter. This parameter results in predict breaking ties according to the confidence values of decision_function, if decision_function_shape='ovr' and the number of target classes > 2. #12557 by Adrin Jalali.
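A minimal sketch of break_ties (iris and the default RBF kernel are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# With more than two classes, one-vs-rest voting can tie; break_ties
# resolves ties via decision_function confidences, at some extra cost.
clf = SVC(decision_function_shape="ovr", break_ties=True,
          random_state=0).fit(X, y)
pred = clf.predict(X)
```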
Fix svm.OneClassSVM, when it received negative or zero values for the parameter sample_weight in method fit(), generated an invalid model. This behavior occurred only in some border scenarios. Now in these cases, fit() will fail with an Exception. #14286 by Alex Shacked.
Feature Adds minimal cost complexity pruning, controlled by ccp_alpha, to ensemble.GradientBoostingRegressor. #12887 by Thomas Fan.
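Cost complexity pruning is easiest to see on a single tree, where the same ccp_alpha parameter applies (iris and the alpha value are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Larger ccp_alpha prunes more aggressively, producing a smaller tree.
full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.05).fit(X, y)
```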
check_estimator can now generate checks by setting generate_only=True. Previously, running check_estimator would stop when the first check failed. With generate_only=True, all checks can run independently and report the ones that are failing. Read more in Rolling your own estimator. #14381 by Thomas Fan.
Feature A new random variable, utils.fixes.loguniform, implements a log-uniform random variable (e.g., for use in RandomizedSearchCV). For example, the outcomes 1, 10 and 100 are all equally likely for loguniform(1, 100). See #11232 by Scott Sievert and Nathaniel Saul, and SciPy PR 10815 <https://github.com/scipy/scipy/pull/10815>.
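A quick sketch of the log-uniform property, using scipy.stats.loguniform, which provides the distribution that utils.fixes.loguniform wrapped for older SciPy versions (assumes SciPy >= 1.4 is available):

```python
from scipy.stats import loguniform

rv = loguniform(1, 100)
samples = rv.rvs(size=10_000, random_state=0)
# Log-uniform mass: the decades [1, 10) and [10, 100] are equally likely.
frac_low = float((samples < 10).mean())
```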
utils.safe_indexing (now deprecated) accepts an axis parameter to index array-likes across rows and columns. The column indexing can be done on NumPy arrays, SciPy sparse matrices, and Pandas DataFrames. An additional refactoring was done. #14035 and #14475 by Guillaume Lemaitre.
API Change The following utils have been deprecated and are now private: all_estimators, which is now in sklearn.utils.
API Change Scikit-learn now converts any input data structure implementing a duck array to a numpy array (using __array__) to ensure consistent behavior instead of relying on __array_function__ (see NEP 18). #14702 by Andreas Müller.
Changes to estimator checks¶
These changes mostly affect library developers.
The requires_positive_X estimator tag (for models that require X to be non-negative) is now used by utils.estimator_checks.check_estimator to make sure a proper error message is raised if X contains some negative entries. #14680 by Alex Gramfort.
Added check_transformer_data_not_an_array to checks where it was missing.
Code and Documentation Contributors¶
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.20, including:
Aaron Alphonsus, Abbie Popa, Abdur-Rahmaan Janhangeer, abenbihi, Abhinav Sagar, Abhishek Jana, Abraham K. Lagat, Adam J. Stewart, Aditya Vyas, Adrin Jalali, Agamemnon Krasoulis, Alec Peters, Alessandro Surace, Alexandre de Siqueira, Alexandre Gramfort, alexgoryainov, Alex Henrie, Alex Itkes, alexshacked, Allen Akinkunle, Anaël Beaugnon, Anders Kaseorg, Andrea Maldonado, Andrea Navarrete, Andreas Mueller, Andreas Schuderer, Andrew Nystrom, Angela Ambroz, Anisha Keshavan, Ankit Jha, Antonio Gutierrez, Anuja Kelkar, Archana Alva, arnaudstiegler, arpanchowdhry, ashimb9, Ayomide Bamidele, Baran Buluttekin, barrycg, Bharat Raghunathan, Bill Mill, Biswadip Mandal, blackd0t, Brian G. Barkley, Brian Wignall, Bryan Yang, c56pony, camilaagw, cartman_nabana, catajara, Cat Chenal, Cathy, cgsavard, Charles Vesteghem, Chiara Marmo, Chris Gregory, Christian Lorentzen, Christos Aridas, Dakota Grusak, Daniel Grady, Daniel Perry, Danna Naser, DatenBergwerk, David Dormagen, deeplook, Dillon Niederhut, Dong-hee Na, Dougal J. Sutherland, DrGFreeman, Dylan Cashman, edvardlindelof, Eric Larson, Eric Ndirangu, Eunseop Jeong, Fanny, federicopisanu, Felix Divo, flaviomorelli, FranciDona, Franco M. 
Luque, Frank Hoang, Frederic Haase, g0g0gadget, Gabriel Altay, Gabriel do Vale Rios, Gael Varoquaux, ganevgv, gdex1, getgaurav2, Gideon Sonoiya, Gordon Chen, gpapadok, Greg Mogavero, Grzegorz Szpak, Guillaume Lemaitre, Guillem García Subies, H4dr1en, hadshirt, Hailey Nguyen, Hanmin Qin, Hannah Bruce Macdonald, Harsh Mahajan, Harsh Soni, Honglu Zhang, Hossein Pourbozorg, Ian Sanders, Ingrid Spielman, J-A16, jaehong park, Jaime Ferrando Huertas, James Hill, James Myatt, Jay, jeremiedbb, Jérémie du Boisberranger, jeromedockes, Jesper Dramsch, Joan Massich, Joanna Zhang, Joel Nothman, Johann Faouzi, Jonathan Rahn, Jon Cusick, Jose Ortiz, Kanika Sabharwal, Katarina Slama, kellycarmody, Kennedy Kang’ethe, Kensuke Arai, Kesshi Jordan, Kevad, Kevin Loftis, Kevin Winata, Kevin Yu-Sheng Li, Kirill Dolmatov, Kirthi Shankar Sivamani, krishna katyal, Lakshmi Krishnan, Lakshya KD, LalliAcqua, lbfin, Leland McInnes, Léonard Binet, Loic Esteve, loopyme, lostcoaster, Louis Huynh, lrjball, Luca Ionescu, Lutz Roeder, MaggieChege, Maithreyi Venkatesh, Maltimore, Maocx, Marc Torrellas, Marie Douriez, Markus, Markus Frey, Martina G. 
Vilas, Martin Oywa, Martin Thoma, Masashi SHIBATA, Maxwell Aladago, mbillingr, m-clare, Meghann Agarwal, m.fab, Micah Smith, miguelbarao, Miguel Cabrera, Mina Naghshhnejad, Ming Li, motmoti, mschaffenroth, mthorrell, Natasha Borders, nezar-a, Nicolas Hug, Nidhin Pattaniyil, Nikita Titov, Nishan Singh Mann, Nitya Mandyam, norvan, notmatthancock, novaya, nxorable, Oleg Stikhin, Oleksandr Pavlyk, Olivier Grisel, Omar Saleem, Owen Flanagan, panpiort8, Paolo, Paolo Toccaceli, Paresh Mathur, Paula, Peng Yu, Peter Marko, pierretallotte, poorna-kumar, pspachtholz, qdeffense, Rajat Garg, Raphaël Bournhonesque, Ray, Ray Bell, Rebekah Kim, Reza Gharibi, Richard Payne, Richard W, rlms, Robert Juergens, Rok Mihevc, Roman Feldbauer, Roman Yurchak, R Sanjabi, RuchitaGarde, Ruth Waithera, Sackey, Sam Dixon, Samesh Lakhotia, Samuel Taylor, Sarra Habchi, Scott Gigante, Scott Sievert, Scott White, Sebastian Pölsterl, Sergey Feldman, SeWook Oh, she-dares, Shreya V, Shubham Mehta, Shuzhe Xiao, SimonCW, smarie, smujjiga, Sönke Behrends, Soumirai, Sourav Singh, stefan-matcovici, steinfurt, Stéphane Couvreur, Stephan Tulkens, Stephen Cowley, Stephen Tierney, SylvainLan, th0rwas, theoptips, theotheo, Thierno Ibrahima DIOP, Thomas Edwards, Thomas J Fan, Thomas Moreau, Thomas Schmitt, Tilen Kusterle, Tim Bicker, Timsaur, Tim Staley, Tirth Patel, Tola A, Tom Augspurger, Tom Dupré la Tour, topisan, Trevor Stephens, ttang131, Urvang Patel, Vathsala Achar, veerlosar, Venkatachalam N, Victor Luzgin, Vincent Jeanselme, Vincent Lostanlen, Vladimir Korolev, vnherdeiro, Wenbo Zhao, Wendy Hu, willdarnell, William de Vazelhes, wolframalpha, xavier dupré, xcjason, x-martian, xsat, xun-tang, Yinglr, yokasre, Yu-Hang “Maxin” Tang, Yulia Zamriy, Zhao Feng