Version 0.24.2
April 2021
Changelog
sklearn.compose
Fix compose.ColumnTransformer.get_feature_names does not call get_feature_names on transformers with an empty column selection. #19579 by Thomas Fan.
sklearn.cross_decomposition
Fix Fixed a regression in cross_decomposition.CCA. #19646 by Thomas Fan.
Fix cross_decomposition.PLSRegression raises a warning for constant y residuals instead of a StopIteration error. #19922 by Thomas Fan.
sklearn.decomposition
Fix Fixed a bug in decomposition.KernelPCA's inverse_transform. #19732 by Kei Ishikawa.
sklearn.ensemble
Fix Fixed a bug in ensemble.HistGradientBoostingRegressor fit with the sample_weight parameter and the least_absolute_deviation loss function. #19407 by Vadim Ushtanit.
sklearn.feature_extraction
Fix Fixed a bug to support multiple strings for a category when sparse=False in feature_extraction.DictVectorizer. #19982 by Guillaume Lemaitre.
sklearn.gaussian_process
Fix Avoid explicitly forming the inverse covariance matrix in gaussian_process.GaussianProcessRegressor when set to output the standard deviation. With certain covariance matrices this inverse is unstable to compute explicitly, and calling the Cholesky solver mitigates this issue. #19939 by Ian Halvic.
Fix Avoid division by zero when scaling a constant target in gaussian_process.GaussianProcessRegressor. It was due to a standard deviation equal to 0. Such a case is now detected and the standard deviation is set to 1, avoiding a division by zero and thus the presence of NaN values in the normalized target. #19703 by @sobkevich, Boris Villazón-Terrazas and Alexandr Fonari.
sklearn.linear_model
Fix Fixed a bug in linear_model.LogisticRegression: the sample_weight object is no longer modified. #19182 by Yosuke KOBAYASHI.
sklearn.metrics
Fix metrics.top_k_accuracy_score now supports multiclass problems where only two classes appear in y_true and all the classes are specified in labels. #19721 by Joris Clement.
sklearn.model_selection
Fix model_selection.RandomizedSearchCV and model_selection.GridSearchCV now correctly show the score for single metrics and verbose > 2. #19659 by Thomas Fan.
Fix Some values in the cv_results_ attribute of model_selection.HalvingRandomSearchCV and model_selection.HalvingGridSearchCV were not properly converted to numpy arrays. #19211 by Nicolas Hug.
Fix The fit method of the successive halving parameter search (model_selection.HalvingGridSearchCV and model_selection.HalvingRandomSearchCV) now correctly handles the groups parameter. #19847 by Xiaoyu Chai.
sklearn.multioutput
Fix multioutput.MultiOutputRegressor now works with estimators that dynamically define predict during fitting, such as ensemble.StackingRegressor. #19308 by Thomas Fan.
sklearn.preprocessing
Fix Validate the constructor parameter handle_unknown in preprocessing.OrdinalEncoder to only allow the 'error' and 'use_encoded_value' strategies. #19234 by Guillaume Lemaitre.
Fix Fix encoder categories having dtype='S' in preprocessing.OneHotEncoder and preprocessing.OrdinalEncoder. #19727 by Andrew Delong.
Fix preprocessing.OrdinalEncoder.transform correctly handles unknown values for string dtypes. #19888 by Thomas Fan.
Fix preprocessing.OneHotEncoder.fit no longer alters the drop parameter. #19924 by Thomas Fan.
sklearn.semi_supervised
Fix Avoid NaN during label propagation in LabelPropagation. #19271 by Zhaowei Wang.
sklearn.tree
Fix Fix a bug in fit of tree.BaseDecisionTree that caused segmentation faults under certain conditions. fit now deep copies the Criterion object to prevent shared concurrent accesses. #19580 by Samuel Brice, Alex Adamson and Wil Yegelwel.
sklearn.utils
Fix The CSS provided by utils.estimator_html_repr is now better contained by giving CSS ids to the HTML representation. #19417 by Thomas Fan.
Version 0.24.1
January 2021
Packaging
The 0.24.0 scikit-learn wheels were not working with macOS <10.15 because of libomp: the version of libomp used to build the wheels was too recent for older macOS versions. This issue has been fixed in the 0.24.1 scikit-learn wheels. Scikit-learn wheels published on PyPI.org now officially support macOS 10.13 and later.
Changelog
sklearn.metrics
Fix Fix a numerical stability bug that could happen in metrics.adjusted_mutual_info_score and metrics.mutual_info_score with NumPy 1.20+. #19179 by Thomas Fan.
sklearn.semi_supervised
Fix semi_supervised.SelfTrainingClassifier now accepts meta-estimators (e.g. ensemble.StackingClassifier). The validation of this estimator is done on the fitted estimator, once the existence of the method predict_proba is known. #19126 by Guillaume Lemaitre.
Version 0.24.0
December 2020
For a short description of the main highlights of the release, please refer to Release Highlights for scikit-learn 0.24.
Legend for changelogs
Major Feature : something big that you couldn’t do before.
Feature : something that you couldn’t do before.
Efficiency : an existing feature now may not require as much computation or memory.
Enhancement : a miscellaneous minor improvement.
Fix : something that previously didn’t work as documented – or according to reasonable expectations – should now work.
API Change : you will need to change your code to have the same effect in the future; or a feature will be removed in the future.
Changed models
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
Fix decomposition.KernelPCA behaviour is now more consistent between 32-bit and 64-bit data when the kernel has small positive eigenvalues.
Fix decomposition.TruncatedSVD becomes deterministic by exposing a random_state parameter.
Fix linear_model.Perceptron when penalty='elasticnet'.
Fix Change in the random sampling procedures for the center initialization of cluster.KMeans.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Changelog
sklearn.base
Fix base.BaseEstimator.get_params now raises an AttributeError if a parameter cannot be retrieved as an instance attribute. Previously it would return None. #17448 by Juan Carlos Alfaro Jiménez.
sklearn.calibration
Efficiency calibration.CalibratedClassifierCV.fit now supports parallelization via joblib.Parallel using the argument n_jobs. #17107 by Julien Jerphanion.
Enhancement Allow calibration.CalibratedClassifierCV use with a prefit pipeline.Pipeline where the data X is not array-like, sparse matrix or dataframe at the start. #17546 by Lucy Liu.
Enhancement Add the ensemble parameter to calibration.CalibratedClassifierCV, which enables calibration via an ensemble of calibrators (current method) or just one calibrator using all the data (similar to the built-in feature of sklearn.svm estimators with the probability=True parameter); see the example below. #17856 by Lucy Liu and Andrea Esuli.
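The sketch below is not part of the changelog; it is a minimal illustration of the new ensemble parameter, assuming the 0.24 API (base_estimator as first argument). The dataset and the LinearSVC base estimator are arbitrary choices:

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=200, random_state=0)
    # ensemble=False fits a single calibrator on all cross-validated predictions
    # instead of one calibrated classifier per CV fold (ensemble=True, the default).
    clf = CalibratedClassifierCV(LinearSVC(random_state=0), cv=3, ensemble=False)
    clf.fit(X, y)
    proba = clf.predict_proba(X[:5])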
sklearn.cluster
Enhancement cluster.AgglomerativeClustering has a new parameter compute_distances. When set to True, distances between clusters are computed and stored in the distances_ attribute even when the parameter distance_threshold is not used. This new parameter is useful to produce dendrogram visualizations, but introduces a computational and memory overhead. #17984 by Michael Riedmann, Emilie Delattre, and Francesco Casalegno.
Enhancement cluster.SpectralClustering and cluster.spectral_clustering have a new keyword argument verbose. When set to True, additional messages will be displayed which can aid with debugging. #18052 by Sean O. Stalley.
Enhancement Added cluster.kmeans_plusplus as a public function. Initialization by KMeans++ can now be called separately to generate initial cluster centroids (see the example below). #17937 by @g-walsh.
API Change cluster.MiniBatchKMeans attributes counts_ and init_size_ are deprecated and will be removed in 1.1 (renaming of 0.26). #17864 by Jérémie du Boisberranger.
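A minimal sketch (illustrative only, synthetic data) of the new kmeans_plusplus function and the compute_distances parameter:

    from sklearn.cluster import AgglomerativeClustering, kmeans_plusplus
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=100, centers=4, random_state=0)

    # KMeans++ seeding can now be called on its own.
    centers, indices = kmeans_plusplus(X, n_clusters=4, random_state=0)

    # distances_ is populated even though distance_threshold is not used.
    agg = AgglomerativeClustering(n_clusters=4, compute_distances=True).fit(X)
    agg.distances_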
sklearn.compose
Fix compose.ColumnTransformer will skip transformers when the column selector is a list of booleans that are all False. #17616 by Thomas Fan.
Fix compose.ColumnTransformer now displays the remainder in the diagram display. #18167 by Thomas Fan.
Fix compose.ColumnTransformer enforces strict count and order of column names between fit and transform by raising an error instead of a warning, following the deprecation cycle. #18256 by Madhura Jayratne.
sklearn.covariance
API Change Deprecates cv_alphas_ in favor of cv_results_['alphas'] and grid_scores_ in favor of split scores in cv_results_ in covariance.GraphicalLassoCV. cv_alphas_ and grid_scores_ will be removed in version 1.1 (renaming of 0.26). #16392 by Thomas Fan.
sklearn.cross_decomposition
Fix Fixed a bug in cross_decomposition.PLSSVD which would sometimes return components in the reversed order of importance. #17095 by Nicolas Hug.
Fix Fixed a bug in cross_decomposition.PLSSVD, cross_decomposition.CCA, and cross_decomposition.PLSCanonical, which would lead to incorrect predictions for est.transform(Y) when the training data is single-target. #17095 by Nicolas Hug.
Fix Increases the stability of cross_decomposition.CCA. #18746 by Thomas Fan.
API Change The bounds of the n_components parameter are now restricted to [1, min(n_samples, n_features, n_targets)] for cross_decomposition.PLSSVD, cross_decomposition.CCA, and cross_decomposition.PLSCanonical, and to [1, n_features] for cross_decomposition.PLSRegression. An error will be raised in 1.1 (renaming of 0.26). #17095 by Nicolas Hug.
API Change For cross_decomposition.PLSSVD, cross_decomposition.CCA, and cross_decomposition.PLSCanonical, the x_scores_ and y_scores_ attributes were deprecated and will be removed in 1.1 (renaming of 0.26). They can be retrieved by calling transform on the training data. The norm_y_weights attribute will also be removed. #17095 by Nicolas Hug.
API Change For cross_decomposition.PLSRegression, cross_decomposition.PLSCanonical, cross_decomposition.CCA, and cross_decomposition.PLSSVD, the x_mean_, y_mean_, x_std_, and y_std_ attributes were deprecated and will be removed in 1.1 (renaming of 0.26). #18768 by Maren Westermann.
Fix decomposition.TruncatedSVD becomes deterministic by using the random_state. It controls the weights' initialization of the underlying ARPACK solver. #18302 by Gaurav Desai and Ivan Panico.
sklearn.datasets
Feature datasets.fetch_openml now validates the md5 checksum of arff files downloaded or cached to ensure data integrity. #14800 by Shashank Singh and Joel Nothman.
Enhancement datasets.fetch_openml now allows the argument as_frame to be 'auto', which tries to convert returned data to a pandas DataFrame unless the data is sparse. #17396 by Jiaxiang.
Enhancement datasets.fetch_covtype now supports the optional argument as_frame; when it is set to True, the returned Bunch object's data and frame members are pandas DataFrames, and the target member is a pandas Series. #17491 by Alex Liang.
Enhancement datasets.fetch_kddcup99 now supports the optional argument as_frame; when it is set to True, the returned Bunch object's data and frame members are pandas DataFrames, and the target member is a pandas Series. #18280 by Alex Liang and Guillaume Lemaitre.
Enhancement datasets.fetch_20newsgroups_vectorized now supports loading as a pandas DataFrame by setting as_frame=True. #17499 by Brigitta Sipőcz and Guillaume Lemaitre.
API Change The default value of as_frame in datasets.fetch_openml is changed from False to 'auto'. #17610 by Jiaxiang.
sklearn.decomposition
API Change For decomposition.NMF, the init value, when init=None and n_components <= min(n_samples, n_features), will be changed from 'nndsvd' to 'nndsvda' in 1.1 (renaming of 0.26). #18525 by Chiara Marmo.
Enhancement decomposition.FactorAnalysis now supports the optional argument rotation, which can take the value None, 'varimax' or 'quartimax' (see the example below). #11064 by Jona Sassenhagen.
Enhancement decomposition.NMF now supports the optional parameter regularization, which can take the values None, 'components', 'transformation' or 'both', in accordance with decomposition.NMF.non_negative_factorization. #17414 by Bharat Raghunathan.
Fix decomposition.KernelPCA behaviour is now more consistent between 32-bit and 64-bit data input when the kernel has small positive eigenvalues. Small positive eigenvalues were not correctly discarded for 32-bit data. #18149 by Sylvain Marié.
Fix Fix decomposition.SparseCoder such that it follows the scikit-learn API and supports cloning. The attribute components_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). This attribute was redundant with the dictionary attribute and constructor parameter. #17679 by Xavier Dupré.
Fix decomposition.TruncatedSVD.fit_transform consistently returns the same as decomposition.TruncatedSVD.fit followed by decomposition.TruncatedSVD.transform. #18528 by Albert Villanova del Moral and Ruifeng Zheng.
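An illustrative sketch (not from the changelog) of the new rotation argument of FactorAnalysis; the iris data is an arbitrary choice:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import FactorAnalysis

    X, _ = load_iris(return_X_y=True)
    # Apply a varimax rotation to the factor loading matrix.
    fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
    X_transformed = fa.fit_transform(X)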
sklearn.discriminant_analysis
Enhancement discriminant_analysis.LinearDiscriminantAnalysis can now use a custom covariance estimate by setting the covariance_estimator parameter (see the example below). #14446 by Hugo Richard.
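A minimal sketch (illustrative only) of passing a custom covariance estimator; covariance_estimator is only supported with the 'lsqr' and 'eigen' solvers, and the OAS shrinkage estimator is an arbitrary choice:

    from sklearn.covariance import OAS
    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)
    lda = LinearDiscriminantAnalysis(solver="lsqr", covariance_estimator=OAS())
    lda.fit(X, y)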
sklearn.ensemble
Major Feature ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier now have native support for categorical features with the categorical_features parameter (see the example below). #18394 by Nicolas Hug and Thomas Fan.
Feature ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier now support the method staged_predict, which allows monitoring of each stage. #16985 by Hao Chun Chang.
Efficiency Break cyclic references in the tree nodes used internally in ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier to allow for the timely garbage collection of large intermediate data structures and to improve memory usage in fit. #18334 by Olivier Grisel, Nicolas Hug, Thomas Fan and Andreas Müller.
Efficiency Histogram initialization is now done in parallel in ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier which results in speed improvements for problems that build a lot of nodes on multicore machines. #18341 by Olivier Grisel, Nicolas Hug, Thomas Fan, and Egor Smirnov.
Fix Fixed a bug in ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier which can now accept data with uint8 dtype in predict. #18410 by Nicolas Hug.
API Change The parameter n_classes_ is now deprecated in ensemble.GradientBoostingRegressor and returns 1. #17702 by Simona Maggio.
API Change Mean absolute error ('mae') is now deprecated for the parameter criterion in ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier. #18326 by Madhura Jayaratne.
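The sketch below is not part of the changelog; it illustrates categorical_features and staged_predict on synthetic data, assuming the 0.24 API where these estimators are still experimental (hence the enabling import) and categorical values are encoded as small non-negative integers:

    import numpy as np
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa: still needed in 0.24
    from sklearn.ensemble import HistGradientBoostingClassifier

    rng = np.random.RandomState(0)
    X = np.hstack([rng.randint(0, 10, size=(200, 1)),  # categorical column, encoded 0..9
                   rng.randn(200, 3)])                 # numerical columns
    y = (X[:, 0] > 4).astype(int)

    clf = HistGradientBoostingClassifier(categorical_features=[0], max_iter=20)
    clf.fit(X, y)
    for stage_pred in clf.staged_predict(X[:3]):
        pass  # one prediction array per boosting iteration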
sklearn.exceptions
API Change exceptions.ChangedBehaviorWarning and exceptions.NonBLASDotWarning are deprecated and will be removed in 1.1 (renaming of 0.26). #17804 by Adrin Jalali.
sklearn.feature_extraction
Enhancement feature_extraction.DictVectorizer accepts multiple values for one categorical feature. #17367 by Peng Yu and Chiara Marmo.
Fix feature_extraction.text.CountVectorizer raises an error if a custom token pattern which captures more than one group is provided. #15427 by Gangesh Gudmalwar and Erin R Hoffman.
sklearn.feature_selection
Feature Added feature_selection.SequentialFeatureSelector which implements forward and backward sequential feature selection (see the example below). #6545 by Sebastian Raschka and #17159 by Nicolas Hug.
Feature A new parameter importance_getter was added to feature_selection.RFE, feature_selection.RFECV and feature_selection.SelectFromModel, allowing the user to specify an attribute name/path or a callable for extracting feature importance from the estimator. #15361 by Venkatachalam N.
Efficiency Reduce memory footprint in feature_selection.mutual_info_classif and feature_selection.mutual_info_regression by calling neighbors.KDTree for counting nearest neighbors. #17878 by Noel Rogers.
Enhancement feature_selection.RFE supports the option for n_features_to_select to be given as a float representing the percentage of features to select. #17090 by Lisa Schwetlick and Marija Vlajic Wheeler.
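A minimal sketch (illustrative only; estimator and dataset are arbitrary) of the new SequentialFeatureSelector:

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    # direction can be "forward" or "backward".
    sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
                                    n_features_to_select=2, direction="forward")
    sfs.fit(X, y)
    X_selected = sfs.transform(X)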
sklearn.gaussian_process
Enhancement A new method gaussian_process.kernel._check_bounds_params is called after fitting a Gaussian Process and raises a ConvergenceWarning if the bounds of the hyperparameters are too tight. #12638 by Sylvain Lannuzel.
sklearn.impute
Feature impute.SimpleImputer now supports a list of strings when strategy='most_frequent' or strategy='constant'. #17526 by Ayako YAGI and Juan Carlos Alfaro Jiménez.
Feature Added method impute.SimpleImputer.inverse_transform to revert imputed data to the original when instantiated with add_indicator=True (see the example below). #17612 by Srimukh Sripada.
Fix Replace the default values of the min_value and max_value parameters in impute.IterativeImputer with -np.inf and np.inf, respectively, instead of None. The behaviour of the class does not change since None was already defaulting to these values. #16493 by Darshan N.
Fix impute.IterativeImputer will not attempt to set the estimator's random_state attribute, allowing it to be used with more external classes. #15636 by David Cortes.
Efficiency impute.SimpleImputer is now faster with object dtype arrays when strategy='most_frequent'. #18987 by David Katz.
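An illustrative sketch (not from the changelog) of SimpleImputer.inverse_transform; it relies on add_indicator=True so that the missing-value positions can be recovered:

    import numpy as np
    from sklearn.impute import SimpleImputer

    X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])
    imp = SimpleImputer(strategy="mean", add_indicator=True)
    X_imp = imp.fit_transform(X)           # imputed values plus indicator columns
    X_back = imp.inverse_transform(X_imp)  # NaNs restored in their original positions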
sklearn.inspection
Feature inspection.partial_dependence and inspection.plot_partial_dependence now support calculating and plotting Individual Conditional Expectation (ICE) curves controlled by the kind parameter (see the example below). #16619 by Madhura Jayratne.
Feature Add sample_weight parameter to inspection.permutation_importance. #16906 by Roei Kahny.
API Change Positional arguments are deprecated in inspection.PartialDependenceDisplay.plot and will error in 1.1 (renaming of 0.26). #18293 by Thomas Fan.
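The sketch below is illustrative only (the regressor and dataset are arbitrary, and plotting requires matplotlib); it shows the kind parameter for ICE curves:

    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import partial_dependence, plot_partial_dependence

    X, y = load_diabetes(return_X_y=True)
    est = RandomForestRegressor(n_estimators=10, random_state=0).fit(X, y)

    # kind="both" returns the averaged partial dependence and the per-sample ICE lines.
    results = partial_dependence(est, X, features=[0], kind="both")
    # kind="individual" plots one ICE curve per sample.
    display = plot_partial_dependence(est, X, features=[0, 1], kind="individual")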
sklearn.isotonic
Feature Expose fitted attributes X_thresholds_ and y_thresholds_ that hold the de-duplicated interpolation thresholds of an isotonic.IsotonicRegression instance for model inspection purposes. #16289 by Masashi Kishimoto and Olivier Grisel.
Enhancement isotonic.IsotonicRegression now accepts 2d arrays with 1 feature as input. #17379 by Jiaxiang.
Fix Add tolerance when determining duplicate X values to prevent inf values from being predicted by isotonic.IsotonicRegression. #18639 by Lucy Liu.
sklearn.kernel_approximation
Feature Added class kernel_approximation.PolynomialCountSketch which implements the Tensor Sketch algorithm for polynomial kernel feature map approximation (see the example below). #13003 by Daniel López Sánchez.
Efficiency kernel_approximation.Nystroem now supports parallelization via joblib.Parallel using the argument n_jobs. #18545 by Laurenz Reitsam.
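A minimal sketch (not part of the changelog) of PolynomialCountSketch used as a feature map in front of a linear classifier; the dataset and hyperparameters are arbitrary:

    from sklearn.datasets import load_digits
    from sklearn.kernel_approximation import PolynomialCountSketch
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)
    model = make_pipeline(
        PolynomialCountSketch(degree=2, n_components=300, random_state=0),
        SGDClassifier(random_state=0),
    )
    model.fit(X, y)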
sklearn.linear_model
Feature linear_model.LinearRegression now forces coefficients to be all positive when positive is set to True (see the example below). #17578 by Joseph Knox, Nelle Varoquaux and Chiara Marmo.
Enhancement linear_model.RidgeCV now supports finding an optimal regularization value alpha for each target separately by setting alpha_per_target=True. This is only supported when using the default efficient leave-one-out cross-validation scheme cv=None. #6624 by Marijn van Vliet.
Fix Fixes a bug in linear_model.TheilSenRegressor where predict and score would fail when fit_intercept=False and there was one feature during fitting. #18121 by Thomas Fan.
Fix Fixes a bug in linear_model.ARDRegression where predict was raising an error when normalize=True and return_std=True because X_offset_ and X_scale_ were undefined. #18607 by fhaselbeck.
Fix Added the missing l1_ratio parameter in linear_model.Perceptron, to be used when penalty='elasticnet'. This changes the default from 0 to 0.15. #18622 by Haesun Park.
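An illustrative sketch (synthetic data, arbitrary alphas) of the new positive and alpha_per_target options:

    import numpy as np
    from sklearn.linear_model import LinearRegression, RidgeCV

    X = np.random.RandomState(0).rand(50, 3)
    y = X @ np.array([1.0, 2.0, 3.0])

    # Non-negative least squares: all coefficients are constrained to be >= 0.
    reg = LinearRegression(positive=True).fit(X, y)

    # One regularization value per target, using the default leave-one-out scheme (cv=None).
    Y = np.column_stack([y, 2 * y])
    ridge = RidgeCV(alphas=[0.1, 1.0, 10.0], alpha_per_target=True).fit(X, Y)
    ridge.alpha_  # array with one alpha per target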
sklearn.manifold
Efficiency Fixed #10493. Improve Locally Linear Embedding (LLE) that raised a MemoryError exception when used with large inputs. #17997 by Bertrand Maisonneuve.
Enhancement Add square_distances parameter to manifold.TSNE, which provides backward compatibility during the deprecation of the legacy squaring behavior. Distances will be squared by default in 1.1 (renaming of 0.26), and this parameter will be removed in 1.3. #17662 by Joshua Newton.
Fix manifold.MDS now correctly sets its _pairwise attribute. #18278 by Thomas Fan.
sklearn.metrics
Feature Added metrics.cluster.pair_confusion_matrix implementing the confusion matrix arising from pairs of elements from two clusterings. #17412 by Uwe F Mayer.
Feature New metric metrics.top_k_accuracy_score. It is a generalization of metrics.accuracy_score: a prediction is considered correct as long as the true label is associated with one of the k highest predicted scores, and metrics.accuracy_score is the special case of k = 1 (see the example below). #16625 by Geoffrey Bolmier.
Feature Added metrics.det_curve to compute the Detection Error Tradeoff (DET) curve classification metric. #10591 by Jeremy Karnowski and Daniel Mohns.
Feature Added metrics.plot_det_curve and metrics.DetCurveDisplay to ease the plotting of DET curves. #18176 by Guillaume Lemaitre.
Feature Added the metrics.mean_absolute_percentage_error metric and the associated scorer for regression problems. #10708 fixed with the PR #15007 by Ashutosh Hathidara. The scorer and some practical test cases were taken from PR #10711 by Mohamed Ali Jamaoui.
Feature Added metrics.rand_score implementing the (unadjusted) Rand index. #17412 by Uwe F Mayer.
Feature metrics.plot_confusion_matrix now supports making the colorbar optional in the matplotlib plot by setting colorbar=False. #17192 by Avi Gupta.
Enhancement Add sample_weight parameter to metrics.median_absolute_error. #17225 by Lucy Liu.
Enhancement Add pos_label parameter in metrics.plot_precision_recall_curve in order to specify the positive class to be used when computing the precision and recall statistics. #17569 by Guillaume Lemaitre.
Enhancement Add pos_label parameter in metrics.plot_roc_curve in order to specify the positive class to be used when computing the ROC AUC statistics. #17651 by Clara Matos.
Fix Fixed a bug in metrics.classification_report which was raising AttributeError when called with output_dict=True for 0-length values. #17777 by Shubhanshu Mishra.
Fix Fixed a bug in metrics.jaccard_score which recommended the zero_division parameter when called with no true or predicted samples. #17826 by Richard Decal and Joseph Willard.
Fix Fixed a bug in metrics.hinge_loss where an error occurred when y_true is missing some labels that are provided explicitly in the labels parameter. #17935 by Cary Goltermann.
Fix Fix scorers that accept a pos_label parameter and compute their metrics from values returned by decision_function or predict_proba. Previously, they would return erroneous values when pos_label was not corresponding to classifier.classes_[1]. This is especially important when training classifiers directly with string labeled target classes. #18114 by Guillaume Lemaitre.
Fix Fixed a bug in metrics.plot_confusion_matrix where an error occurs when y_true contains labels that were not previously seen by the classifier while the labels and display_labels parameters are set to None. #18405 by Thomas J. Fan and Yakov Pchelintsev.
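A minimal sketch (the values are made up for illustration) of two of the new metrics:

    import numpy as np
    from sklearn.metrics import mean_absolute_percentage_error, top_k_accuracy_score

    y_true = np.array([0, 1, 2, 2])
    y_score = np.array([[0.5, 0.2, 0.2],
                        [0.3, 0.4, 0.2],
                        [0.2, 0.4, 0.3],
                        [0.7, 0.2, 0.1]])
    # A prediction counts as correct if the true class is among the 2 highest scores.
    top_k_accuracy_score(y_true, y_score, k=2)

    mean_absolute_percentage_error([3.0, 2.0, 7.0], [2.5, 2.0, 8.0])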
sklearn.model_selection
Major Feature Added (experimental) parameter search estimators model_selection.HalvingRandomSearchCV and model_selection.HalvingGridSearchCV which implement Successive Halving, and can be used as a drop-in replacement for model_selection.RandomizedSearchCV and model_selection.GridSearchCV (see the example below). #13900 by Nicolas Hug, Joel Nothman and Andreas Müller.
Feature model_selection.RandomizedSearchCV and model_selection.GridSearchCV now have the method score_samples. #17478 by Teon Brooks and Mohamed Maskani.
Enhancement model_selection.TimeSeriesSplit has two new keyword arguments test_size and gap. test_size allows the out-of-sample time series length to be fixed for all folds. gap removes a fixed number of samples between the train and test set on each fold. #13204 by Kyle Kosic.
Enhancement model_selection.permutation_test_score and model_selection.validation_curve now accept fit_params to pass additional estimator parameters. #18527 by Gaurav Dhingra, Julien Jerphanion and Amanda Dsouza.
Enhancement model_selection.cross_val_score, model_selection.cross_validate, model_selection.GridSearchCV, and model_selection.RandomizedSearchCV allow the estimator to fail scoring and replace the score with error_score. If error_score="raise", the error will be raised. #18343 by Guillaume Lemaitre and Devi Sandeep.
Enhancement model_selection.learning_curve now accepts fit_params to pass additional estimator parameters. #18595 by Amanda Dsouza.
Fix Fixed the len of model_selection.ParameterSampler when all distributions are lists and n_iter is more than the number of unique parameter combinations. #18222 by Nicolas Hug.
Fix A fix to raise a warning when one or more CV splits of model_selection.GridSearchCV and model_selection.RandomizedSearchCV result in non-finite scores. #18266 by Subrat Sahu, Nirvan and Arthur Book.
Enhancement model_selection.GridSearchCV, model_selection.RandomizedSearchCV and model_selection.cross_validate support scoring being a callable returning a dictionary of multiple metric name/value associations. #15126 by Thomas Fan.
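The sketch below is illustrative only (estimator, grid and sizes are arbitrary); it shows the experimental successive-halving search, which must be enabled explicitly, and the new TimeSeriesSplit arguments:

    from sklearn.datasets import make_classification
    from sklearn.experimental import enable_halving_search_cv  # noqa: exposes the halving estimators
    from sklearn.model_selection import HalvingGridSearchCV, TimeSeriesSplit
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, random_state=0)
    param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", "auto"]}
    search = HalvingGridSearchCV(SVC(random_state=0), param_grid,
                                 factor=3, random_state=0).fit(X, y)
    search.best_params_

    # Fixed-size test folds with a gap between train and test samples.
    tscv = TimeSeriesSplit(n_splits=3, test_size=50, gap=10)
    splits = list(tscv.split(X))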
sklearn.multiclass
Enhancement multiclass.OneVsOneClassifier now accepts inputs with missing values. Hence, estimators which can handle missing values (for example a pipeline with an imputation step) can be used as an estimator for multiclass wrappers. #17987 by Venkatachalam N.
Fix A fix to allow multiclass.OutputCodeClassifier to accept sparse input data in its fit and predict methods. The check for validity of the input is now delegated to the base estimator. #17233 by Zolisa Bleki.
sklearn.multioutput
Enhancement multioutput.MultiOutputClassifier and multioutput.MultiOutputRegressor now accept inputs with missing values. Hence, estimators which can handle missing values (for example a pipeline with an imputation step, or HistGradientBoosting estimators) can be used as an estimator for multioutput wrappers. #17987 by Venkatachalam N.
Fix A fix to accept tuples for the order parameter in multioutput.ClassifierChain. #18124 by Gus Brocchini and Amanda Dsouza.
sklearn.naive_bayes
Enhancement Adds a parameter min_categories to naive_bayes.CategoricalNB that allows a minimum number of categories per feature to be specified. This allows categories unseen during training to be accounted for. #16326 by George Armstrong.
API Change The attributes coef_ and intercept_ are now deprecated in naive_bayes.MultinomialNB, naive_bayes.ComplementNB, naive_bayes.BernoulliNB and naive_bayes.CategoricalNB, and will be removed in v1.1 (renaming of 0.26). #17427 by Juan Carlos Alfaro Jiménez.
sklearn.neighbors
Efficiency Speed up the seuclidean, wminkowski, mahalanobis and haversine metrics in neighbors.DistanceMetric by avoiding unexpected GIL acquisition in Cython when setting n_jobs>1 in neighbors.KNeighborsClassifier, neighbors.KNeighborsRegressor, neighbors.RadiusNeighborsClassifier, neighbors.RadiusNeighborsRegressor and metrics.pairwise_distances, and by validating data out of loops. #17038 by Wenbo Zhao.
Efficiency neighbors.NeighborsBase benefits from an improved algorithm = 'auto' heuristic. In addition to the previous set of rules, when the number of features exceeds 15, brute is now selected, assuming the data intrinsic dimensionality is too high for tree-based methods. #17148 by Geoffrey Bolmier.
Fix neighbors.BinaryTree will raise a ValueError when fitting on a data array having points with different dimensions. #18691 by Chiara Marmo.
Fix neighbors.NearestCentroid with a numerical shrink_threshold will raise a ValueError when fitting on data with all constant features. #18370 by Trevor Waite.
Fix In the methods radius_neighbors and radius_neighbors_graph of neighbors.NearestNeighbors, neighbors.RadiusNeighborsClassifier, neighbors.RadiusNeighborsRegressor, and neighbors.RadiusNeighborsTransformer, using sort_results=True now correctly sorts the results even when fitting with the "brute" algorithm. #18612 by Tom Dupre la Tour.
sklearn.neural_network
Efficiency Neural net training and prediction are now a little faster. #17603, #17604, #17606, #17608, #17609, #17633, #17661, #17932 by Alex Henrie.
Enhancement Avoid converting float32 input to float64 in neural_network.BernoulliRBM. #16352 by Arthur Imbert.
Enhancement Support 32-bit computations in neural_network.MLPClassifier and neural_network.MLPRegressor. #17759 by Srimukh Sripada.
Fix Fix method neural_network.MLPClassifier.fit not iterating to max_iter if warm started. #18269 by Norbert Preining and Guillaume Lemaitre.
sklearn.pipeline
Enhancement References to transformers passed through transformer_weights to pipeline.FeatureUnion that aren't present in transformer_list will raise a ValueError. #17876 by Cary Goltermann.
Fix A slice of a pipeline.Pipeline now inherits the parameters of the original pipeline (memory and verbose). #18429 by Albert Villanova del Moral and Paweł Biernat.
sklearn.preprocessing
Feature preprocessing.OneHotEncoder now supports missing values by treating them as a category. #17317 by Thomas Fan.
Feature Add a new handle_unknown parameter with a use_encoded_value option, along with a new unknown_value parameter, to preprocessing.OrdinalEncoder to allow unknown categories during transform and set the encoded value of the unknown categories (see the example below). #17406 by Felix Wick and #18406 by Nicolas Hug.
Feature Add clip parameter to preprocessing.MinMaxScaler, which clips the transformed values of test data to feature_range. #17833 by Yashika Sharma.
Feature Add sample_weight parameter to preprocessing.StandardScaler. Allows setting individual weights for each sample. #18510, #18447, #16066 and #18682 by Maria Telenczuk, Albert Villanova, @panpiort8 and Alex Gramfort.
Enhancement Verbose output of model_selection.GridSearchCV has been improved for readability. #16935 by Raghav Rajagopalan and Chiara Marmo.
Enhancement Add unit_variance to preprocessing.RobustScaler, which scales output data such that normally distributed features have a variance of 1. #17193 by Lucy Liu and Mabel Villalba.
Enhancement Add dtype parameter to preprocessing.KBinsDiscretizer. #16335 by Arthur Imbert.
Fix Raise an error on sklearn.preprocessing.OneHotEncoder.inverse_transform when handle_unknown='error' and drop=None for samples encoded as all zeros. #14982 by Kevin Winata.
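An illustrative sketch (toy data, not from the changelog) of the new OrdinalEncoder, OneHotEncoder and MinMaxScaler options:

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, OrdinalEncoder

    # Unknown categories seen at transform time get a dedicated encoded value.
    enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
    enc.fit([["cat"], ["dog"]])
    enc.transform([["dog"], ["snake"]])   # -> [[1.], [-1.]]

    # Missing values are treated as one additional category.
    X = np.array([["a"], ["b"], [np.nan]], dtype=object)
    ohe = OneHotEncoder(sparse=False).fit(X)

    # Out-of-range test values can be clipped to the training feature_range.
    scaler = MinMaxScaler(clip=True).fit([[0.0], [10.0]])
    scaler.transform([[15.0]])            # -> [[1.]]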
sklearn.semi_supervised
Major Feature Added semi_supervised.SelfTrainingClassifier, a meta-classifier that allows any supervised classifier to function as a semi-supervised classifier able to learn from unlabeled data (see the example below). #11682 by Oliver Rausch and Patrice Becker.
Fix Fix incorrect encoding when using unicode string dtypes in preprocessing.OneHotEncoder and preprocessing.OrdinalEncoder. #15763 by Thomas Fan.
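A minimal sketch (illustrative only; the wrapped SVC and the fraction of unlabeled samples are arbitrary) of SelfTrainingClassifier, where unlabeled samples are marked with -1:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    rng = np.random.RandomState(0)
    y_partial = y.copy()
    y_partial[rng.rand(len(y)) < 0.5] = -1   # -1 marks unlabeled samples

    # The wrapped classifier must provide predict_proba.
    self_training = SelfTrainingClassifier(SVC(probability=True, gamma="auto"))
    self_training.fit(X, y_partial)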
sklearn.svm
Enhancement Invoke the SciPy BLAS API for the SVM kernel function in fit, predict and related methods of svm.SVC, svm.NuSVC, svm.SVR, svm.NuSVR and svm.OneClassSVM. #16530 by Shuhua Fan.
sklearn.tree
Feature tree.DecisionTreeRegressor now supports the new splitting criterion 'poisson', useful for modeling count data (see the example below). #17386 by Christian Lorentzen.
Enhancement tree.plot_tree now uses colors from the matplotlib configuration settings. #17187 by Andreas Müller.
API Change The parameter X_idx_sorted is now deprecated in tree.DecisionTreeClassifier.fit and tree.DecisionTreeRegressor.fit, and has no effect. #17614 by Juan Carlos Alfaro Jiménez.
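An illustrative sketch (synthetic count targets) of the new 'poisson' splitting criterion, which requires non-negative targets with a positive sum:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.RandomState(0)
    X = rng.rand(100, 3)
    y = rng.poisson(lam=2.0, size=100)   # non-negative counts

    reg = DecisionTreeRegressor(criterion="poisson", random_state=0).fit(X, y)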
sklearn.utils
Enhancement Add check_methods_sample_order_invariance to check_estimator, which checks that estimator methods are invariant if applied to the same dataset with a different sample order. #17598 by Jason Ngo.
Enhancement Add support for weights in utils.sparse_func.incr_mean_variance_axis. By Maria Telenczuk and Alex Gramfort.
Fix Raise a ValueError with a clear error message in utils.check_array for sparse DataFrames with mixed types. #17992 by Thomas J. Fan and Alex Shacked.
Fix Allow serialized tree based models to be unpickled on a machine with different endianness. #17644 by Qi Zhang.
Fix Check that we raise a proper error when axis=1 and the dimensions do not match in utils.sparse_func.incr_mean_variance_axis. By Alex Gramfort.
Miscellaneous
Code and Documentation Contributors
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.23, including:
Abo7atm, Adam Spannbauer, Adrin Jalali, adrinjalali, Agamemnon Krasoulis, Akshay Deodhar, Albert Villanova del Moral, Alessandro Gentile, Alex Henrie, Alex Itkes, Alex Liang, Alexander Lenail, alexandracraciun, Alexandre Gramfort, alexshacked, Allan D Butler, Amanda Dsouza, amy12xx, Anand Tiwari, Anderson Nelson, Andreas Mueller, Ankit Choraria, Archana Subramaniyan, Arthur Imbert, Ashutosh Hathidara, Ashutosh Kushwaha, Atsushi Nukariya, Aura Munoz, AutoViz and Auto_ViML, Avi Gupta, Avinash Anakal, Ayako YAGI, barankarakus, barberogaston, beatrizsmg, Ben Mainye, Benjamin Bossan, Benjamin Pedigo, Bharat Raghunathan, Bhavika Devnani, Biprateep Dey, bmaisonn, Bo Chang, Boris Villazón-Terrazas, brigi, Brigitta Sipőcz, Bruno Charron, Byron Smith, Cary Goltermann, Cat Chenal, CeeThinwa, chaitanyamogal, Charles Patel, Chiara Marmo, Christian Kastner, Christian Lorentzen, Christoph Deil, Christos Aridas, Clara Matos, clmbst, Coelhudo, crispinlogan, Cristina Mulas, Daniel López, Daniel Mohns, darioka, Darshan N, david-cortes, Declan O’Neill, Deeksha Madan, Elizabeth DuPre, Eric Fiegel, Eric Larson, Erich Schubert, Erin Khoo, Erin R Hoffman, eschibli, Felix Wick, fhaselbeck, Forrest Koch, Francesco Casalegno, Frans Larsson, Gael Varoquaux, Gaurav Desai, Gaurav Sheni, genvalen, Geoffrey Bolmier, George Armstrong, George Kiragu, Gesa Stupperich, Ghislain Antony Vaillant, Gim Seng, Gordon Walsh, Gregory R. Lee, Guillaume Chevalier, Guillaume Lemaitre, Haesun Park, Hannah Bohle, Hao Chun Chang, Harry Scholes, Harsh Soni, Henry, Hirofumi Suzuki, Hitesh Somani, Hoda1394, Hugo Le Moine, hugorichard, indecisiveuser, Isuru Fernando, Ivan Wiryadi, j0rd1smit, Jaehyun Ahn, Jake Tae, James Hoctor, Jan Vesely, Jeevan Anand Anne, JeroenPeterBos, JHayes, Jiaxiang, Jie Zheng, Jigna Panchal, jim0421, Jin Li, Joaquin Vanschoren, Joel Nothman, Jona Sassenhagen, Jonathan, Jorge Gorbe Moya, Joseph Lucas, Joshua Newton, Juan Carlos Alfaro Jiménez, Julien Jerphanion, Justin Huber, Jérémie du Boisberranger, Kartik Chugh, Katarina Slama, kaylani2, Kendrick Cetina, Kenny Huynh, Kevin Markham, Kevin Winata, Kiril Isakov, kishimoto, Koki Nishihara, Krum Arnaudov, Kyle Kosic, Lauren Oldja, Laurenz Reitsam, Lisa Schwetlick, Louis Douge, Louis Guitton, Lucy Liu, Madhura Jayaratne, maikia, Manimaran, Manuel López-Ibáñez, Maren Westermann, Maria Telenczuk, Mariam-ke, Marijn van Vliet, Markus Löning, Martin Scheubrein, Martina G. Vilas, Martina Megasari, Mateusz Górski, mathschy, mathurinm, Matthias Bussonnier, Max Del Giudice, Michael, Milan Straka, Muoki Caleb, N. Haiat, Nadia Tahiri, Ph. D, Naoki Hamada, Neil Botelho, Nicolas Hug, Nils Werner, noelano, Norbert Preining, oj_lappi, Oleh Kozynets, Olivier Grisel, Pankaj Jindal, Pardeep Singh, Parthiv Chigurupati, Patrice Becker, Pete Green, pgithubs, Poorna Kumar, Prabakaran Kumaresshan, Probinette4, pspachtholz, pwalchessen, Qi Zhang, rachel fischoff, Rachit Toshniwal, Rafey Iqbal Rahman, Rahul Jakhar, Ram Rachum, RamyaNP, rauwuckl, Ravi Kiran Boggavarapu, Ray Bell, Reshama Shaikh, Richard Decal, Rishi Advani, Rithvik Rao, Rob Romijnders, roei, Romain Tavenard, Roman Yurchak, Ruby Werman, Ryotaro Tsukada, sadak, Saket Khandelwal, Sam, Sam Ezebunandu, Sam Kimbinyi, Sarah Brown, Saurabh Jain, Sean O. 
Stalley, Sergio, Shail Shah, Shane Keller, Shao Yang Hong, Shashank Singh, Shooter23, Shubhanshu Mishra, simonamaggio, Soledad Galli, Srimukh Sripada, Stephan Steinfurt, subrat93, Sunitha Selvan, Swier, Sylvain Marié, SylvainLan, t-kusanagi2, Teon L Brooks, Terence Honles, Thijs van den Berg, Thomas J Fan, Thomas J. Fan, Thomas S Benjamin, Thomas9292, Thorben Jensen, tijanajovanovic, Timo Kaufmann, tnwei, Tom Dupré la Tour, Trevor Waite, ufmayer, Umberto Lupo, Venkatachalam N, Vikas Pandey, Vinicius Rios Fuck, Violeta, watchtheblur, Wenbo Zhao, willpeppo, xavier dupré, Xethan, Xue Qianming, xun-tang, yagi-3, Yakov Pchelintsev, Yashika Sharma, Yi-Yan Ge, Yue Wu, Yutaro Ikeda, Zaccharie Ramzi, zoj613, Zhao Feng.