Release history¶
Version 0.19.1¶
October, 2017
This is a bug-fix release with some minor documentation improvements and enhancements to features released in 0.19.0.
Note there may be minor differences in TSNE output in this release (due to #9623), in the case where multiple samples have equal distance to some sample.
Changelog¶
API changes¶
- Reverted the addition of `metrics.ndcg_score` and `metrics.dcg_score`, which had been merged into version 0.19.0 by error. The implementations were broken and undocumented.
- `return_train_score`, which was added to `model_selection.GridSearchCV`, `model_selection.RandomizedSearchCV` and `model_selection.cross_validate` in version 0.19.0, will change its default value from True to False in version 0.21. We found that calculating training scores can significantly increase cross-validation runtime in some cases. Users should explicitly set `return_train_score` to False if prediction or scoring functions are slow, resulting in a deleterious effect on CV runtime, or to True if they wish to use the calculated scores (see the sketch after this list). #9677 by Kumar Ashutosh and Joel Nothman.
- `correlation_models` and `regression_models` from the legacy gaussian processes implementation have been belatedly deprecated. #9717 by Kumar Ashutosh.
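As an illustration, a minimal sketch (not part of the original changelog; dataset and parameter grid are illustrative) of opting in to the future default explicitly:

```python
# Minimal sketch: set return_train_score explicitly to silence the
# upcoming 0.21 default change.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {'C': [0.1, 1.0]}, return_train_score=False)
search.fit(X, y)  # cv_results_ now omits the train-score columns
```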
Bug fixes¶
- Avoid integer overflows in `metrics.matthews_corrcoef`. #9693 by Sam Steingold.
- Fix ValueError in `preprocessing.LabelEncoder` when using `inverse_transform` on unseen labels. #9816 by Charlie Newey.
- Fixed a bug in the objective function for `manifold.TSNE` (both exact and with the Barnes-Hut approximation) when `n_components >= 3`. #9711 by @goncalo-rodrigues.
- Fix regression in `model_selection.cross_val_predict` where it raised an error with `method='predict_proba'` for some probabilistic classifiers. #9641 by James Bourbeau.
- Fixed a bug where `datasets.make_classification` modified its input `weights`. #9865 by Sachin Kelkar.
- `model_selection.StratifiedShuffleSplit` now works with multioutput multiclass or multilabel data with more than 1000 columns. #9922 by Charlie Brummitt.
- Fixed a bug with nested and conditional parameter setting, e.g. setting a pipeline step and its parameter at the same time. #9945 by Andreas Müller and Joel Nothman.
Regressions in 0.19.0 fixed in 0.19.1:
- Fixed a bug where parallelised prediction in random forests was not thread-safe and could (rarely) result in arbitrary errors. #9830 by Joel Nothman.
- Fix regression in `model_selection.cross_val_predict` where it no longer accepted `X` as a list. #9600 by Rasul Kerimov.
- Fixed handling of `model_selection.cross_val_predict` for binary classification with `method='decision_function'`. #9593 by Reiichiro Nakano and core devs.
- Fix regression in `pipeline.Pipeline` where it no longer accepted `steps` as a tuple. #9604 by Joris Van den Bossche.
- Fix bug where `n_iter` was not properly deprecated, leaving `n_iter` unavailable for interim use in `linear_model.SGDClassifier`, `linear_model.SGDRegressor`, `linear_model.PassiveAggressiveClassifier`, `linear_model.PassiveAggressiveRegressor` and `linear_model.Perceptron`. #9558 by Andreas Müller.
- Dataset fetchers now make sure temporary files are closed before removing them, which previously caused errors on Windows. #9847 by Joan Massich.
- Fixed a regression in `manifold.TSNE` where it no longer supported metrics other than 'euclidean' and 'precomputed'. #9623 by Oli Blum.
Enhancements¶
- Our test suite and `utils.estimator_checks.check_estimators` can now be run without Nose installed. #9697 by Joan Massich.
- To improve usability of version 0.19's `pipeline.Pipeline` caching, `memory` now allows `joblib.Memory` instances. This makes use of the new `utils.validation.check_memory` helper. #9584 by Kumar Ashutosh.
- Some fixes to examples: #9750, #9788, #9815.
- Made a FutureWarning in SGD-based estimators less verbose. #9802 by Vrishank Bhardwaj.
Code and Documentation Contributors¶
With thanks to:
Joel Nothman, Loic Esteve, Andreas Mueller, Kumar Ashutosh, Vrishank Bhardwaj, Hanmin Qin, Rasul Kerimov, James Bourbeau, Nagarjuna Kumar, Nathaniel Saul, Olivier Grisel, Roman Yurchak, Reiichiro Nakano, Sachin Kelkar, Sam Steingold, Yaroslav Halchenko, diegodlh, felix, goncalo-rodrigues, jkleint, oliblum90, pasbi, Anthony Gitter, Ben Lawson, Charlie Brummitt, Didi Bar-Zev, Gael Varoquaux, Joan Massich, Joris Van den Bossche, nielsenmarkus11
Version 0.19¶
August 12, 2017
Highlights¶
We are excited to release a number of great new features including `neighbors.LocalOutlierFactor` for anomaly detection, `preprocessing.QuantileTransformer` for robust feature transformation, and the `multioutput.ClassifierChain` meta-estimator to simply account for dependencies between classes in multilabel problems. We have some new algorithms in existing estimators, such as multiplicative update in `decomposition.NMF` and multinomial `linear_model.LogisticRegression` with L1 loss (use `solver='saga'`).
Cross validation is now able to return the results from multiple metric evaluations. The new `model_selection.cross_validate` can return many scores on the test data as well as training set performance and timings, and we have extended the `scoring` and `refit` parameters for grid/randomized search to handle multiple metrics.
You can also learn faster. For instance, the new option to cache transformations in `pipeline.Pipeline` makes grid search over pipelines including slow transformations much more efficient. And you can predict faster: if you're sure you know what you're doing, you can turn off validating that the input is finite using `config_context`.
We've made some important fixes too. We've fixed a longstanding implementation error in `metrics.average_precision_score`, so please be cautious with prior results reported from that function. A number of errors in the `manifold.TSNE` implementation have been fixed, particularly in the default Barnes-Hut approximation. `semi_supervised.LabelSpreading` and `semi_supervised.LabelPropagation` have had substantial fixes. LabelPropagation was previously broken. LabelSpreading should now correctly respect its `alpha` parameter.
Changed models¶
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.
- `cluster.KMeans` with sparse X and initial centroids given (bug fix)
- `cross_decomposition.PLSRegression` with `scale=True` (bug fix)
- `ensemble.GradientBoostingClassifier` and `ensemble.GradientBoostingRegressor` where `min_impurity_split` is used (bug fix)
- gradient boosting with `loss='quantile'` (bug fix)
- `ensemble.IsolationForest` (bug fix)
- `feature_selection.SelectFdr` (bug fix)
- `linear_model.RANSACRegressor` (bug fix)
- `linear_model.LassoLars` (bug fix)
- `linear_model.LassoLarsIC` (bug fix)
- `manifold.TSNE` (bug fix)
- `neighbors.NearestCentroid` (bug fix)
- `semi_supervised.LabelSpreading` (bug fix)
- `semi_supervised.LabelPropagation` (bug fix)
- tree based models where `min_weight_fraction_leaf` is used (enhancement)
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Changelog¶
New features¶
Classifiers and regressors
- Added `multioutput.ClassifierChain` for multi-label classification. By Adam Kleczewski.
- Added solver `'saga'` that implements the improved version of Stochastic Average Gradient, in `linear_model.LogisticRegression` and `linear_model.Ridge`. It allows the use of L1 penalty with multinomial logistic loss, and behaves marginally better than 'sag' during the first epochs of ridge and logistic regression (see the sketch after this list). #8446 by Arthur Mensch.
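A minimal sketch of the new solver in action (the dataset is illustrative; scaling the features helps 'saga' converge):

```python
# Minimal sketch: multinomial logistic regression with an L1 penalty,
# enabled by the new 'saga' solver.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MaxAbsScaler

X, y = load_digits(return_X_y=True)
X = MaxAbsScaler().fit_transform(X)  # saga converges faster on scaled data
clf = LogisticRegression(solver='saga', penalty='l1',
                         multi_class='multinomial', max_iter=200)
clf.fit(X, y)
```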
Other estimators
- Added the `neighbors.LocalOutlierFactor` class for anomaly detection based on nearest neighbors (a usage sketch follows this list). #5279 by Nicolas Goix and Alexandre Gramfort.
- Added the `preprocessing.QuantileTransformer` class and `preprocessing.quantile_transform` function for feature normalization based on quantiles. #8363 by Denis Engemann, Guillaume Lemaitre, Olivier Grisel, Raghav RV, Thierry Guillemot, and Gael Varoquaux.
- The new solver `'mu'` implements a Multiplicative Update in `decomposition.NMF`, allowing the optimization of all beta-divergences, including the Frobenius norm, the generalized Kullback-Leibler divergence and the Itakura-Saito divergence. #5295 by Tom Dupre la Tour.
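A minimal usage sketch for the new anomaly detector (the toy data is illustrative):

```python
# Minimal sketch: flag outliers with the new LocalOutlierFactor estimator.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X = np.r_[rng.normal(size=(100, 2)),          # inliers
          rng.uniform(-8, 8, size=(5, 2))]    # a few scattered outliers
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)  # -1 marks the samples judged to be outliers
```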
Model selection and evaluation
- `model_selection.GridSearchCV` and `model_selection.RandomizedSearchCV` now support simultaneous evaluation of multiple metrics. Refer to the Specifying multiple metrics for evaluation section of the user guide for more information. #7388 by Raghav RV.
- Added the `model_selection.cross_validate` function, which allows evaluation of multiple metrics. This function returns a dict with more useful information from cross-validation such as the train scores, fit times and score times (see the example after this list). Refer to The cross_validate function and multiple metric evaluation section of the user guide for more information. #7388 by Raghav RV.
- Added `metrics.mean_squared_log_error`, which computes the mean squared error of the logarithmic transformation of targets, particularly useful for targets with an exponential trend. #7655 by Karan Desai.
- Added `metrics.dcg_score` and `metrics.ndcg_score`, which compute Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG). #7739 by David Gasquez.
- Added `model_selection.RepeatedKFold` and `model_selection.RepeatedStratifiedKFold`. #8120 by Neeraj Gangwar.
- Added a scorer based on `metrics.explained_variance_score`. #9259 by Hanmin Qin.
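A minimal sketch of multiple-metric evaluation with the new function (metric names and estimator are illustrative):

```python
# Minimal sketch: evaluate two metrics in a single cross-validation run.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
scores = cross_validate(SVC(), X, y, scoring=['accuracy', 'f1_macro'])
# keys include 'test_accuracy', 'test_f1_macro', 'fit_time' and 'score_time'
print(sorted(scores.keys()))
```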
Miscellaneous
- Validation that input data contains no NaN or inf can now be suppressed using `config_context`, at your own risk (a sketch follows this list). This will save on runtime, and may be particularly useful for prediction time. #7548 by Joel Nothman.
- Added a test to ensure parameter listings in docstrings match the function/class signature. #9206 by Alexandre Gramfort and Raghav RV.
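A minimal sketch of suppressing finiteness validation (the model and data are illustrative):

```python
# Minimal sketch: skip NaN/inf input validation during prediction.
import numpy as np
import sklearn
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X, y = rng.rand(1000, 10), rng.rand(1000)
model = LinearRegression().fit(X, y)

with sklearn.config_context(assume_finite=True):
    pred = model.predict(X)  # finiteness checks are skipped, at your own risk
```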
Enhancements¶
Trees and ensembles
- The `min_weight_fraction_leaf` constraint in tree construction is now more efficient, taking a fast path to declare a node a leaf if its weight is less than 2 * the minimum. Note that the constructed tree will be different from previous versions where `min_weight_fraction_leaf` is used. #7441 by Nelson Liu.
- `ensemble.GradientBoostingClassifier` and `ensemble.GradientBoostingRegressor` now support sparse input for prediction. #6101 by Ibraim Ganiev.
- `ensemble.VotingClassifier` now allows changing estimators by using `ensemble.VotingClassifier.set_params`. An estimator can also be removed by setting it to `None` (see the sketch after this list). #7674 by Yichuan Liu.
- `tree.export_graphviz` now shows a configurable number of decimal places. #8698 by Guillaume Lemaitre.
- Added `flatten_transform` parameter to `ensemble.VotingClassifier` to change the output shape of the transform method to 2 dimensional. #7794 by Ibraim Ganiev and Herilalaina Rakotoarison.
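A minimal sketch of dropping an ensemble member via set_params (estimator names are illustrative):

```python
# Minimal sketch: remove a member of a VotingClassifier by setting it to None.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
vote = VotingClassifier(estimators=[('lr', LogisticRegression()),
                                    ('dt', DecisionTreeClassifier())])
vote.set_params(lr=None)  # drop the logistic regression member
vote.fit(X, y)            # only the remaining estimators are fitted
```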
Linear, kernelized and related models
- `linear_model.SGDClassifier`, `linear_model.SGDRegressor`, `linear_model.PassiveAggressiveClassifier`, `linear_model.PassiveAggressiveRegressor` and `linear_model.Perceptron` now expose `max_iter` and `tol` parameters, to handle convergence more precisely. The `n_iter` parameter is deprecated, and the fitted estimator exposes an `n_iter_` attribute, with the actual number of iterations before convergence (see the sketch after this list). #5036 by Tom Dupre la Tour.
- Added `average` parameter to perform weight averaging in `linear_model.PassiveAggressiveClassifier`. #4939 by Andrea Esuli.
- `linear_model.RANSACRegressor` no longer throws an error when calling `fit` if no inliers are found in its first iteration. Furthermore, causes of skipped iterations are tracked in newly added attributes, `n_skips_*`. #7914 by Michael Horrell.
- In `gaussian_process.GaussianProcessRegressor`, method `predict` is a lot faster with `return_std=True`. #8591 by Hadrien Bertrand.
- Added `return_std` to the `predict` method of `linear_model.ARDRegression` and `linear_model.BayesianRidge`. #7838 by Sergey Feldman.
- Memory usage enhancements: prevent cast from float32 to float64 in `linear_model.MultiTaskElasticNet`; `linear_model.LogisticRegression` when using the newton-cg solver; and `linear_model.Ridge` when using the svd, sparse_cg, cholesky or lsqr solvers. #8835, #8061 by Joan Massich and Nicolas Cordier and Thierry Guillemot.
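A minimal sketch of the new convergence controls and of return_std (the data is illustrative):

```python
# Minimal sketch: max_iter/tol control convergence; n_iter_ reports the
# number of epochs actually run. return_std yields predictive uncertainty.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier, BayesianRidge

X, y = load_iris(return_X_y=True)
clf = SGDClassifier(max_iter=1000, tol=1e-3).fit(X, y)
print(clf.n_iter_)

rng = np.random.RandomState(0)
Xr, yr = rng.rand(50, 3), rng.rand(50)
mean, std = BayesianRidge().fit(Xr, yr).predict(Xr, return_std=True)
```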
Other predictors
- Custom metrics for the `neighbors` binary trees now have fewer constraints: they must take two 1d-arrays and return a float. #6288 by Jake Vanderplas.
- `algorithm='auto'` in `neighbors` estimators now chooses the most appropriate algorithm for all input types and metrics. #9145 by Herilalaina Rakotoarison and Reddy Chinthala.
Decomposition, manifold learning and clustering
- `cluster.MiniBatchKMeans` and `cluster.KMeans` now use significantly less memory when assigning data points to their nearest cluster center. #7721 by Jon Crall.
- `decomposition.PCA`, `decomposition.IncrementalPCA` and `decomposition.TruncatedSVD` now expose the singular values from the underlying SVD. They are stored in the attribute `singular_values_`, like in `decomposition.IncrementalPCA`. #7685 by Tommy Löfstedt.
- Fixed the implementation of `noise_variance_` in `decomposition.PCA`. #9108 by Hanmin Qin.
- `decomposition.NMF` is now faster when `beta_loss=0`. #9277 by @hongkahjun.
- Memory improvements for method `barnes_hut` in `manifold.TSNE`. #7089 by Thomas Moreau and Olivier Grisel.
- Optimization schedule improvements for Barnes-Hut `manifold.TSNE`, so the results are closer to those of the reference implementation lvdmaaten/bhtsne. By Thomas Moreau and Olivier Grisel.
- Memory usage enhancements: prevent cast from float32 to float64 in `decomposition.PCA` and `decomposition.randomized_svd_low_rank`. #9067 by Raghav RV.
Preprocessing and feature selection
- Added `norm_order` parameter to `feature_selection.SelectFromModel` to enable selection of the norm order when `coef_` is more than 1D. #6181 by Antoine Wendlinger.
- Added ability to use sparse matrices in `feature_selection.f_regression` with `center=True`. #8065 by Daniel LeJeune.
- Small performance improvement to n-gram creation in `feature_extraction.text` by binding methods for loops and special-casing unigrams. #7567 by Jaye Doepke.
- Relaxed an assumption on the data for `kernel_approximation.SkewedChi2Sampler`. Since the Skewed-Chi2 kernel is defined on the open interval (-skewedness; +∞)^d, the transform function should not check whether `X < 0` but whether `X < -self.skewedness`. #7573 by Romain Brault.
- Made default kernel parameters kernel-dependent in `kernel_approximation.Nystroem`. #5229 by Saurabh Bansod and Andreas Müller.
Model evaluation and meta-estimators
- `pipeline.Pipeline` is now able to cache transformers within a pipeline by using the `memory` constructor parameter (see the sketch after this list). #7990 by Guillaume Lemaitre.
- `pipeline.Pipeline` steps can now be accessed as attributes of its `named_steps` attribute. #8586 by Herilalaina Rakotoarison.
- Added `sample_weight` parameter to `pipeline.Pipeline.score`. #7723 by Mikhail Korobov.
- Added ability to set the `n_jobs` parameter in `pipeline.make_union`. A `TypeError` will be raised for any other kwargs. #8028 by Alexander Booth.
- `model_selection.GridSearchCV`, `model_selection.RandomizedSearchCV` and `model_selection.cross_val_score` now allow estimators with callable kernels which were previously prohibited. #8005 by Andreas Müller.
- `model_selection.cross_val_predict` now returns output of the correct shape for all values of the argument `method`. #7863 by Aman Dalmia.
- Added `shuffle` and `random_state` parameters to shuffle training data before taking prefixes of it based on training sizes in `model_selection.learning_curve`. #7506 by Narine Kokhlikyan.
- `model_selection.StratifiedShuffleSplit` now works with multioutput multiclass (or multilabel) data. #9044 by Vlad Niculae.
- Speed improvements to `model_selection.StratifiedShuffleSplit`. #5991 by Arthur Mensch and Joel Nothman.
- Added `shuffle` parameter to `model_selection.train_test_split`. #8845 by themrmax.
- `multioutput.MultiOutputRegressor` and `multioutput.MultiOutputClassifier` now support online learning using `partial_fit`. #8053 by Peng Yu.
- Added `max_train_size` parameter to `model_selection.TimeSeriesSplit`. #8282 by Aman Dalmia.
- More clustering metrics are now available through `metrics.get_scorer` and `scoring` parameters. #8117 by Raghav RV.
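A minimal sketch of transformer caching in a grid-searched pipeline (step names and grid are illustrative):

```python
# Minimal sketch: cache fitted transformers so PCA is not refit for every
# value of clf__C during the grid search.
from tempfile import mkdtemp
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
pipe = Pipeline([('reduce', PCA(n_components=32)), ('clf', SVC())],
                memory=mkdtemp())
grid = GridSearchCV(pipe, {'clf__C': [0.1, 1, 10]}).fit(X, y)
print(pipe.named_steps.reduce)  # steps are also attributes of named_steps
```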
Metrics
- `metrics.matthews_corrcoef` now supports multiclass classification. #8094 by Jon Crall.
- Added `sample_weight` parameter to `metrics.cohen_kappa_score`. #8335 by Victor Poughon.
Miscellaneous
- `utils.check_estimator` now attempts to ensure that methods transform, predict, etc. do not set attributes on the estimator. #7533 by Ekaterina Krivich.
- Added type checking to the `accept_sparse` parameter in `utils.validation` methods. This parameter now accepts only boolean, string, or list/tuple of strings. `accept_sparse=None` is deprecated and should be replaced by `accept_sparse=False`. #7880 by Josh Karnofsky.
- Made it possible to load a chunk of an svmlight formatted file by passing a range of bytes to `datasets.load_svmlight_file`. #935 by Olivier Grisel.
- `dummy.DummyClassifier` and `dummy.DummyRegressor` now accept non-finite features. #8931 by @Attractadore.
Bug fixes¶
Trees and ensembles
- Fixed a memory leak in trees when using trees with `criterion='mae'`. #8002 by Raghav RV.
- Fixed a bug where `ensemble.IsolationForest` used an incorrect formula for the average path length. #8549 by Peter Wang.
- Fixed a bug where `ensemble.AdaBoostClassifier` throws `ZeroDivisionError` while fitting data with single class labels. #7501 by Dominik Krzeminski.
- Fixed a bug in `ensemble.GradientBoostingClassifier` and `ensemble.GradientBoostingRegressor` where a float being compared to `0.0` using `==` caused a divide by zero error. #7970 by He Chen.
- Fix a bug where `ensemble.GradientBoostingClassifier` and `ensemble.GradientBoostingRegressor` ignored the `min_impurity_split` parameter. #8006 by Sebastian Pölsterl.
- Fixed `oob_score` in `ensemble.BaggingClassifier`. #8936 by Michael Lewis.
- Fixed excessive memory usage in prediction for random forests estimators. #8672 by Mike Benfield.
- Fixed a bug where `sample_weight` as a list broke random forests in Python 2. #8068 by @xor.
- Fixed a bug where `ensemble.IsolationForest` fails when `max_features` is less than 1. #5732 by Ishank Gulati.
- Fix a bug where gradient boosting with `loss='quantile'` computed negative errors for negative values of `ytrue - ypred`, leading to wrong values when calling `__call__`. #8087 by Alexis Mignon.
- Fix a bug where `ensemble.VotingClassifier` raises an error when a numpy array is passed in for weights. #7983 by Vincent Pham.
- Fixed a bug where `tree.export_graphviz` raised an error when the length of `feature_names` does not match `n_features` in the decision tree. #8512 by Li Li.
Linear, kernelized and related models
- Fixed a bug where `linear_model.RANSACRegressor.fit` may run until `max_iter` if it finds a large inlier group early. #8251 by @aivision2020.
- Fixed a bug where `naive_bayes.MultinomialNB` and `naive_bayes.BernoulliNB` failed when `alpha=0`. #5814 by Yichuan Liu and Herilalaina Rakotoarison.
- Fixed a bug where `linear_model.LassoLars` does not give the same result as the LassoLars implementation available in R (lars library). #7849 by Jair Montoya Martinez.
- Fixed a bug in `linear_model.RandomizedLasso`, `linear_model.Lars`, `linear_model.LassoLars`, `linear_model.LarsCV` and `linear_model.LassoLarsCV`, where the parameter `precompute` was not used consistently across classes, and some values proposed in the docstring could raise errors. #5359 by Tom Dupre la Tour.
- Fix inconsistent results between `linear_model.RidgeCV` and `linear_model.Ridge` when using `normalize=True`. #9302 by Alexandre Gramfort.
- Fix a bug where `linear_model.LassoLars.fit` sometimes left `coef_` as a list, rather than an ndarray. #8160 by CJ Carey.
- Fix `linear_model.BayesianRidge.fit` to return ridge parameters `alpha_` and `lambda_` consistent with the calculated coefficients `coef_` and `intercept_`. #8224 by Peter Gedeck.
- Fixed a bug in `svm.OneClassSVM` where it returned floats instead of integer classes. #8676 by Vathsala Achar.
- Fix AIC/BIC criterion computation in `linear_model.LassoLarsIC`. #9022 by Alexandre Gramfort and Mehmet Basbug.
- Fixed a memory leak in our LibLinear implementation. #9024 by Sergei Lebedev.
- Fix bug where stratified CV splitters did not work with `linear_model.LassoCV`. #8973 by Paulo Haddad.
- Fixed a bug in `gaussian_process.GaussianProcessRegressor` where the standard deviation and covariance predicted without fit would fail with an unmeaningful error by default. #6573 by Quazi Marufur Rahman and Manoj Kumar.
Other predictors
- Fix `semi_supervised.BaseLabelPropagation` to correctly implement `LabelPropagation` and `LabelSpreading` as done in the referenced papers. #9239 by Andre Ambrosio Boechat, Utkarsh Upadhyay, and Joel Nothman.
Decomposition, manifold learning and clustering
- Fixed the implementation of `manifold.TSNE`:
  - The `early_exaggeration` parameter had no effect and is now used for the first 250 optimization iterations.
  - Fixed the `AssertionError: Tree consistency failed` exception reported in #8992.
  - Improved the learning schedule to match the one from the reference implementation lvdmaaten/bhtsne.
  By Thomas Moreau and Olivier Grisel.
- Fix a bug in `decomposition.LatentDirichletAllocation` where the `perplexity` method was returning incorrect results because the `transform` method returns normalized document topic distributions as of version 0.18. #7954 by Gary Foreman.
- Fix output shape and bugs with n_jobs > 1 in `decomposition.SparseCoder` transform and `decomposition.sparse_encode` for one-dimensional data and one component. This also impacts the output shape of `decomposition.DictionaryLearning`. #8086 by Andreas Müller.
- Fixed the implementation of `explained_variance_` in `decomposition.PCA`, `decomposition.RandomizedPCA` and `decomposition.IncrementalPCA`. #9105 by Hanmin Qin.
- Fixed the implementation of `noise_variance_` in `decomposition.PCA`. #9108 by Hanmin Qin.
- Fixed a bug where `cluster.DBSCAN` gives an incorrect result when the input is a precomputed sparse matrix with initial rows all zero. #8306 by Akshay Gupta.
- Fix a bug regarding fitting `cluster.KMeans` with a sparse array X and initial centroids, where X's means were unnecessarily being subtracted from the centroids. #7872 by Josh Karnofsky.
- Fixes to the input validation in `covariance.EllipticEnvelope`. #8086 by Andreas Müller.
- Fixed a bug in `covariance.MinCovDet` where inputting data that produced a singular covariance matrix would cause the helper method `_c_step` to throw an exception. #3367 by Jeremy Steward.
- Fixed a bug in `manifold.TSNE` affecting convergence of the gradient descent. #8768 by David DeTomaso.
- Fixed a bug in `manifold.TSNE` where it stored the incorrect `kl_divergence_`. #6507 by Sebastian Saeger.
- Fixed improper scaling in `cross_decomposition.PLSRegression` with `scale=True`. #7819 by jayzed82.
- The `fit` method of `cluster.bicluster.SpectralCoclustering` and `cluster.bicluster.SpectralBiclustering` conforms with the API by accepting `y` and returning the object. #6126, #7814 by Laurent Direr and Maniteja Nandana.
- Fix bug where `mixture` `sample` methods did not return as many samples as requested. #7702 by Levi John Wolf.
- Fixed the shrinkage implementation in `neighbors.NearestCentroid`. #9219 by Hanmin Qin.
Preprocessing and feature selection
- For sparse matrices, `preprocessing.normalize` with `return_norm=True` will now raise a `NotImplementedError` with 'l1' or 'l2' norm, and with norm 'max' the norms returned will be the same as for dense matrices. #7771 by Ang Lu.
- Fix a bug where `feature_selection.SelectFdr` did not exactly implement the Benjamini-Hochberg procedure. It formerly may have selected fewer features than it should. #7490 by Peng Meng.
- Fixed a bug where `linear_model.RandomizedLasso` and `linear_model.RandomizedLogisticRegression` broke for sparse input. #8259 by Aman Dalmia.
- Fix a bug where `feature_extraction.FeatureHasher` mandatorily applied a sparse random projection to the hashed features, preventing the use of `feature_extraction.text.HashingVectorizer` in a pipeline with `feature_extraction.text.TfidfTransformer`. #7565 by Roman Yurchak.
- Fix a bug where `feature_selection.mutual_info_regression` did not correctly use `n_neighbors`. #8181 by Guillaume Lemaitre.
Model evaluation and meta-estimators
- Fixed a bug where `model_selection.BaseSearchCV.inverse_transform` returns `self.best_estimator_.transform()` instead of `self.best_estimator_.inverse_transform()`. #8344 by Akshay Gupta and Rasmus Eriksson.
- Added `classes_` attribute to `model_selection.GridSearchCV`, `model_selection.RandomizedSearchCV`, `grid_search.GridSearchCV`, and `grid_search.RandomizedSearchCV` that matches the `classes_` attribute of `best_estimator_`. #7661 and #8295 by Alyssa Batula, Dylan Werner-Meier, and Stephen Hoover.
- Fixed a bug where `model_selection.validation_curve` reused the same estimator for each parameter value. #7365 by Aleksandr Sandrovskii.
- `model_selection.permutation_test_score` now works with Pandas types. #5697 by Stijn Tonk.
- Several fixes to input validation in `multiclass.OutputCodeClassifier`. #8086 by Andreas Müller.
- `multiclass.OneVsOneClassifier`'s `partial_fit` now ensures all classes are provided up-front. #6250 by Asish Panda.
- Fix `multioutput.MultiOutputClassifier.predict_proba` to return a list of 2d arrays, rather than a 3d array. In the case where different target columns had different numbers of classes, a `ValueError` would be raised on trying to stack matrices with different dimensions. #8093 by Peter Bull.
- Cross validation now works with Pandas datatypes that have a read-only index. #9507 by Loic Esteve.
Metrics
- `metrics.average_precision_score` no longer linearly interpolates between operating points, and instead weighs precisions by the change in recall since the last operating point, as per the Wikipedia entry. (#7356) By Nick Dingwall and Gael Varoquaux.
- Fix a bug in `metrics.classification._check_targets` which would return `'binary'` if `y_true` and `y_pred` were both `'binary'` but the union of `y_true` and `y_pred` was `'multiclass'`. #8377 by Loic Esteve.
- Fixed an integer overflow bug in `metrics.confusion_matrix` and hence `metrics.cohen_kappa_score`. #8354, #7929 by Joel Nothman and Jon Crall.
- Fixed passing of the `gamma` parameter to the `chi2` kernel in `metrics.pairwise.pairwise_kernels`. #5211 by Nick Rhinehart, Saurabh Bansod and Andreas Müller.
Miscellaneous
- Fixed a bug where `datasets.make_classification` failed when generating more than 30 features. #8159 by Herilalaina Rakotoarison.
- Fixed a bug where `datasets.make_moons` gives an incorrect result when `n_samples` is odd. #8198 by Josh Levy.
- Some `fetch_` functions in `datasets` were ignoring the `download_if_missing` keyword. #7944 by Ralf Gommers.
- Fix estimators to accept a `sample_weight` parameter of type `pandas.Series` in their `fit` function. #7825 by Kathleen Chen.
- Fix a bug in cases where `numpy.cumsum` may be numerically unstable, raising an exception if instability is identified. #7376 and #7331 by Joel Nothman and @yangarbiter.
- Fix a bug where `base.BaseEstimator.__getstate__` obstructed pickling customizations of child-classes, when used in a multiple inheritance context. #8316 by Holger Peters.
- Updated Sphinx-Gallery from 0.1.4 to 0.1.7 for resolving links in documentation build with Sphinx>1.5. #8010, #7986 by Oscar Najera.
- Added `data_home` parameter to `sklearn.datasets.fetch_kddcup99`. #9289 by Loic Esteve.
- Fix dataset loaders using the Python 3 version of makedirs to also work in Python 2. #9284 by Sebastin Santy.
- Several minor issues were fixed with thanks to the alerts of [lgtm.com](http://lgtm.com). #9278 by Jean Helie, among others.
API changes summary¶
Trees and ensembles
- Gradient boosting base models are no longer estimators. By Andreas Müller.
- All tree based estimators now accept a `min_impurity_decrease` parameter in lieu of `min_impurity_split`, which is now deprecated. `min_impurity_decrease` helps stop splitting nodes in which the weighted impurity decrease from splitting is no longer at least `min_impurity_decrease` (see the sketch below). #8449 by Raghav RV.
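A minimal sketch of the replacement parameter (the threshold value is illustrative):

```python
# Minimal sketch: min_impurity_decrease replaces the deprecated
# min_impurity_split; a split must decrease weighted impurity by this much.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(min_impurity_decrease=0.01).fit(X, y)
```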
Linear, kernelized and related models
- The `n_iter` parameter is deprecated in `linear_model.SGDClassifier`, `linear_model.SGDRegressor`, `linear_model.PassiveAggressiveClassifier`, `linear_model.PassiveAggressiveRegressor` and `linear_model.Perceptron`. By Tom Dupre la Tour.
Other predictors
- `neighbors.LSHForest` has been deprecated and will be removed in 0.21 due to poor performance. #9078 by Laurent Direr.
- `neighbors.NearestCentroid` no longer purports to support `metric='precomputed'`, which now raises an error. #8515 by Sergul Aydore.
- The `alpha` parameter of `semi_supervised.LabelPropagation` now has no effect and is deprecated, to be removed in 0.21. #9239 by Andre Ambrosio Boechat, Utkarsh Upadhyay, and Joel Nothman.
Decomposition, manifold learning and clustering
- Deprecate the `doc_topic_distr` argument of the `perplexity` method in `decomposition.LatentDirichletAllocation` because the user no longer has access to the unnormalized document topic distribution needed for the perplexity calculation. #7954 by Gary Foreman.
- The `n_topics` parameter of `decomposition.LatentDirichletAllocation` has been renamed to `n_components` and will be removed in version 0.21. #8922 by @Attractadore.
- `decomposition.SparsePCA.transform`'s `ridge_alpha` parameter is deprecated in preference for the class parameter. #8137 by Naoya Kanai.
- `cluster.DBSCAN` now has a `metric_params` parameter. #8139 by Naoya Kanai.
Preprocessing and feature selection
- `feature_selection.SelectFromModel` now has a `partial_fit` method only if the underlying estimator does. By Andreas Müller.
- `feature_selection.SelectFromModel` now validates the `threshold` parameter and sets the `threshold_` attribute during the call to `fit`, and no longer during the call to `transform`. By Andreas Müller.
- The `non_negative` parameter in `feature_extraction.FeatureHasher` has been deprecated, and replaced with a more principled alternative, `alternate_sign`. #7565 by Roman Yurchak.
- `linear_model.RandomizedLogisticRegression` and `linear_model.RandomizedLasso` have been deprecated and will be removed in version 0.21. #8995 by Ramana.S.
Model evaluation and meta-estimators
- Deprecate the `fit_params` constructor input to `model_selection.GridSearchCV` and `model_selection.RandomizedSearchCV` in favor of passing keyword parameters to the `fit` methods of those classes. Data-dependent parameters needed for model training should be passed as keyword arguments to `fit`, and conforming to this convention will allow the hyperparameter selection classes to be used with tools such as `model_selection.cross_val_predict` (see the sketch after this list). #2879 by Stephen Hoover.
- In version 0.21, the default behavior of splitters that use the `test_size` and `train_size` parameters will change, such that specifying `train_size` alone will cause `test_size` to be the remainder. #7459 by Nelson Liu.
- `multiclass.OneVsRestClassifier` now has `partial_fit`, `decision_function` and `predict_proba` methods only when the underlying estimator does. #7812 by Andreas Müller and Mikhail Korobov.
- `multiclass.OneVsRestClassifier` now has a `partial_fit` method only if the underlying estimator does. By Andreas Müller.
- The `decision_function` output shape for binary classification in `multiclass.OneVsRestClassifier` and `multiclass.OneVsOneClassifier` is now `(n_samples,)` to conform to scikit-learn conventions. #9100 by Andreas Müller.
- The `multioutput.MultiOutputClassifier.predict_proba` function used to return a 3d array (`n_samples`, `n_classes`, `n_outputs`). In the case where different target columns had different numbers of classes, a `ValueError` would be raised on trying to stack matrices with different dimensions. This function now returns a list of arrays where the length of the list is `n_outputs`, and each array is (`n_samples`, `n_classes`) for that particular output. #8093 by Peter Bull.
- Replaced the `named_steps` `dict` with a `utils.Bunch` in `pipeline.Pipeline` to enable tab completion in interactive environments. In the case of a conflict between a value in `named_steps` and a `dict` attribute, `dict` behavior will be prioritized. #8481 by Herilalaina Rakotoarison.
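A minimal sketch of the new convention for data-dependent fit parameters (the weights are illustrative):

```python
# Minimal sketch: pass fit parameters such as sample_weight to fit()
# instead of the deprecated fit_params constructor argument.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {'C': [1, 10]})
search.fit(X, y, sample_weight=np.ones(len(y)))
```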
Miscellaneous
- Deprecate the `y` parameter in `transform` and `inverse_transform`. The methods should not accept a `y` parameter, as they are used at prediction time. #8174 by Tahar Zanouda, Alexandre Gramfort and Raghav RV.
- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions for scikit-learn. The following backported functions in `utils` have been removed or deprecated accordingly. #8854 and #8874 by Naoya Kanai.
- The `store_covariances` and `covariances_` parameters of `discriminant_analysis.QuadraticDiscriminantAnalysis` have been renamed to `store_covariance` and `covariance_` to be consistent with the corresponding parameter names of `discriminant_analysis.LinearDiscriminantAnalysis`. They will be removed in version 0.21. #7998 by Jiacheng.
- Removed in 0.19: `utils.fixes.argpartition`, `utils.fixes.array_equal`, `utils.fixes.astype`, `utils.fixes.bincount`, `utils.fixes.expit`, `utils.fixes.frombuffer_empty`, `utils.fixes.in1d`, `utils.fixes.norm`, `utils.fixes.rankdata`, `utils.fixes.safe_copy`.
- Deprecated in 0.19, to be removed in 0.21: `utils.arpack.eigs`, `utils.arpack.eigsh`, `utils.arpack.svds`, `utils.extmath.fast_dot`, `utils.extmath.logsumexp`, `utils.extmath.norm`, `utils.extmath.pinvh`, `utils.graph.graph_laplacian`, `utils.random.choice`, `utils.sparsetools.connected_components`, `utils.stats.rankdata`.
- Estimators with both methods `decision_function` and `predict_proba` are now required to have a monotonic relation between them. The method `check_decision_proba_consistency` has been added in `utils.estimator_checks` to check their consistency. #7578 by Shubham Bhardwaj.
- All checks in `utils.estimator_checks`, in particular `utils.estimator_checks.check_estimator`, now accept estimator instances. Most other checks do not accept estimator classes any more. #9019 by Andreas Müller.
- Ensure that estimators' attributes ending with `_` are not set in the constructor but only in the `fit` method. Most notably, ensemble estimators (deriving from `ensemble.BaseEnsemble`) now only have `self.estimators_` available after `fit`. #7464 by Lars Buitinck and Loic Esteve.
Code and Documentation Contributors¶
Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.18, including:
Joel Nothman, Loic Esteve, Andreas Mueller, Guillaume Lemaitre, Olivier Grisel, Hanmin Qin, Raghav RV, Alexandre Gramfort, themrmax, Aman Dalmia, Gael Varoquaux, Naoya Kanai, Tom Dupré la Tour, Rishikesh, Nelson Liu, Taehoon Lee, Nelle Varoquaux, Aashil, Mikhail Korobov, Sebastin Santy, Joan Massich, Roman Yurchak, RAKOTOARISON Herilalaina, Thierry Guillemot, Alexandre Abadie, Carol Willing, Balakumaran Manoharan, Josh Karnofsky, Vlad Niculae, Utkarsh Upadhyay, Dmitry Petrov, Minghui Liu, Srivatsan, Vincent Pham, Albert Thomas, Jake VanderPlas, Attractadore, JC Liu, alexandercbooth, chkoar, Óscar Nájera, Aarshay Jain, Kyle Gilliam, Ramana Subramanyam, CJ Carey, Clement Joudet, David Robles, He Chen, Joris Van den Bossche, Karan Desai, Katie Luangkote, Leland McInnes, Maniteja Nandana, Michele Lacchia, Sergei Lebedev, Shubham Bhardwaj, akshay0724, omtcyfz, rickiepark, waterponey, Vathsala Achar, jbDelafosse, Ralf Gommers, Ekaterina Krivich, Vivek Kumar, Ishank Gulati, Dave Elliott, ldirer, Reiichiro Nakano, Levi John Wolf, Mathieu Blondel, Sid Kapur, Dougal J. Sutherland, midinas, mikebenfield, Sourav Singh, Aseem Bansal, Ibraim Ganiev, Stephen Hoover, AishwaryaRK, Steven C. Howell, Gary Foreman, Neeraj Gangwar, Tahar, Jon Crall, dokato, Kathy Chen, ferria, Thomas Moreau, Charlie Brummitt, Nicolas Goix, Adam Kleczewski, Sam Shleifer, Nikita Singh, Basil Beirouti, Giorgio Patrini, Manoj Kumar, Rafael Possas, James Bourbeau, James A. Bednar, Janine Harper, Jaye, Jean Helie, Jeremy Steward, Artsiom, John Wei, Jonathan LIgo, Jonathan Rahn, seanpwilliams, Arthur Mensch, Josh Levy, Julian Kuhlmann, Julien Aubert, Jörn Hees, Kai, shivamgargsya, Kat Hempstalk, Kaushik Lakshmikanth, Kennedy, Kenneth Lyons, Kenneth Myers, Kevin Yap, Kirill Bobyrev, Konstantin Podshumok, Arthur Imbert, Lee Murray, toastedcornflakes, Lera, Li Li, Arthur Douillard, Mainak Jas, tobycheese, Manraj Singh, Manvendra Singh, Marc Meketon, MarcoFalke, Matthew Brett, Matthias Gilch, Mehul Ahuja, Melanie Goetz, Meng, Peng, Michael Dezube, Michal Baumgartner, vibrantabhi19, Artem Golubin, Milen Paskov, Antonin Carette, Morikko, MrMjauh, NALEPA Emmanuel, Namiya, Antoine Wendlinger, Narine Kokhlikyan, NarineK, Nate Guerin, Angus Williams, Ang Lu, Nicole Vavrova, Nitish Pandey, Okhlopkov Daniil Olegovich, Andy Craze, Om Prakash, Parminder Singh, Patrick Carlson, Patrick Pei, Paul Ganssle, Paulo Haddad, Paweł Lorek, Peng Yu, Pete Bachant, Peter Bull, Peter Csizsek, Peter Wang, Pieter Arthur de Jong, Ping-Yao, Chang, Preston Parry, Puneet Mathur, Quentin Hibon, Andrew Smith, Andrew Jackson, 1kastner, Rameshwar Bhaskaran, Rebecca Bilbro, Remi Rampin, Andrea Esuli, Rob Hall, Robert Bradshaw, Romain Brault, Aman Pratik, Ruifeng Zheng, Russell Smith, Sachin Agarwal, Sailesh Choyal, Samson Tan, Samuël Weber, Sarah Brown, Sebastian Pölsterl, Sebastian Raschka, Sebastian Saeger, Alyssa Batula, Abhyuday Pratap Singh, Sergey Feldman, Sergul Aydore, Sharan Yalburgi, willduan, Siddharth Gupta, Sri Krishna, Almer, Stijn Tonk, Allen Riddell, Theofilos Papapanagiotou, Alison, Alexis Mignon, Tommy Boucher, Tommy Löfstedt, Toshihiro Kamishima, Tyler Folkman, Tyler Lanigan, Alexander Junge, Varun Shenoy, Victor Poughon, Vilhelm von Ehrenheim, Aleksandr Sandrovskii, Alan Yee, Vlasios Vasileiou, Warut Vijitbenjaronk, Yang Zhang, Yaroslav Halchenko, Yichuan Liu, Yuichi Fujikawa, affanv14, aivision2020, xor, andreh7, brady salz, campustrampus, Agamemnon Krasoulis, ditenberg, elena-sharova, filipj8, fukatani, gedeck, guiniol, guoci, hakaa1, hongkahjun, 
i-am-xhy, jakirkham, jaroslaw-weber, jayzed82, jeroko, jmontoyam, jonathan.striebel, josephsalmon, jschendel, leereeves, martin-hahn, mathurinm, mehak-sachdeva, mlewis1729, mlliou112, mthorrell, ndingwall, nuffe, yangarbiter, plagree, pldtc325, Breno Freitas, Brett Olsen, Brian A. Alfano, Brian Burns, polmauri, Brandon Carter, Charlton Austin, Chayant T15h, Chinmaya Pancholi, Christian Danielsen, Chung Yen, Chyi-Kwei Yau, pravarmahajan, DOHMATOB Elvis, Daniel LeJeune, Daniel Hnyk, Darius Morawiec, David DeTomaso, David Gasquez, David Haberthür, David Heryanto, David Kirkby, David Nicholson, rashchedrin, Deborah Gertrude Digges, Denis Engemann, Devansh D, Dickson, Bob Baxley, Don86, E. Lynch-Klarup, Ed Rogers, Elizabeth Ferriss, Ellen-Co2, Fabian Egli, Fang-Chieh Chou, Bing Tian Dai, Greg Stupp, Grzegorz Szpak, Bertrand Thirion, Hadrien Bertrand, Harizo Rajaona, zxcvbnius, Henry Lin, Holger Peters, Icyblade Dai, Igor Andriushchenko, Ilya, Isaac Laughlin, Iván Vallés, Aurélien Bellet, JPFrancoia, Jacob Schreiber, Asish Mahapatra
Version 0.18.2¶
June 20, 2017
Last release with Python 2.6 support
Scikit-learn 0.18 is the last major release of scikit-learn to support Python 2.6. Later versions of scikit-learn will require Python 2.7 or above.
Changelog¶
Code Contributors¶
Aman Dalmia, Loic Esteve, Nate Guerin, Sergei Lebedev
Version 0.18.1¶
November 11, 2016
Changelog¶
Enhancements¶
- Improved `sample_without_replacement` speed by utilizing numpy.random.permutation for most cases. As a result, samples may differ in this release for a fixed random state. Affected estimators: `ensemble.BaggingClassifier`, `ensemble.BaggingRegressor`, `linear_model.RANSACRegressor`, `model_selection.RandomizedSearchCV` and `random_projection.SparseRandomProjection`. This also affects the `datasets.make_classification` method.
Bug fixes¶
- Fix issue where `min_grad_norm` and `n_iter_without_progress` parameters were not being utilised by `manifold.TSNE`. #6497 by Sebastian Säger.
- Fix bug for svm's decision values when `decision_function_shape` is `ovr` in `svm.SVC`. `svm.SVC`'s decision_function was incorrect from versions 0.17.0 through 0.18.0. #7724 by Bing Tian Dai.
- Attribute `explained_variance_ratio` of `discriminant_analysis.LinearDiscriminantAnalysis` calculated with the SVD and Eigen solvers are now of the same length. #7632 by JPFrancoia.
- Fixes issue in Univariate feature selection where score functions were not accepting multi-label targets. #7676 by Mohammed Affan.
- Fixed setting parameters when calling `fit` multiple times on `feature_selection.SelectFromModel`. #7756 by Andreas Müller.
- Fixes issue in `partial_fit` method of `multiclass.OneVsRestClassifier` when the number of classes used in `partial_fit` was less than the total number of classes in the data. #7786 by Srivatsan Ramesh.
- Fixes issue in `calibration.CalibratedClassifierCV` where the sum of probabilities of each class for a sample was not 1, and `CalibratedClassifierCV` now handles the case where the training set has fewer classes than the total data. #7799 by Srivatsan Ramesh.
- Fix a bug where `sklearn.feature_selection.SelectFdr` did not exactly implement the Benjamini-Hochberg procedure. It formerly may have selected fewer features than it should. #7490 by Peng Meng.
- `sklearn.manifold.LocallyLinearEmbedding` now correctly handles integer inputs. #6282 by Jake Vanderplas.
- The `min_weight_fraction_leaf` parameter of tree-based classifiers and regressors now assumes uniform sample weights by default if the `sample_weight` argument is not passed to the `fit` function. Previously, the parameter was silently ignored. #7301 by Nelson Liu.
- Numerical issue with `linear_model.RidgeCV` on centered data when n_features > n_samples. #6178 by Bertrand Thirion.
- Tree splitting criterion classes' cloning/pickling is now memory safe. #7680 by Ibraim Ganiev.
- Fixed a bug where `decomposition.NMF` sets its `n_iters_` attribute in transform(). #7553 by Ekaterina Krivich.
- `sklearn.linear_model.LogisticRegressionCV` now correctly handles string labels. #5874 by Raghav RV.
- Fixed a bug where `sklearn.model_selection.train_test_split` raised an error when `stratify` is a list of string labels. #7593 by Raghav RV.
- Fixed a bug where `sklearn.model_selection.GridSearchCV` and `sklearn.model_selection.RandomizedSearchCV` were not pickleable because of a pickling bug in `np.ma.MaskedArray`. #7594 by Raghav RV.
- All cross-validation utilities in `sklearn.model_selection` now permit one-time cross-validation splitters for the `cv` parameter. Also non-deterministic cross-validation splitters (where multiple calls to `split` produce dissimilar splits) can be used as the `cv` parameter. `sklearn.model_selection.GridSearchCV` will cross-validate each parameter setting on the split produced by the first `split` call to the cross-validation splitter. #7660 by Raghav RV.
- Fix bug where `preprocessing.MultiLabelBinarizer.fit_transform` returned an invalid CSR matrix. #7750 by CJ Carey.
- Fixed a bug where `metrics.pairwise.cosine_distances` could return a small negative distance. #7732 by Artsion.
API changes summary¶
Trees and forests
- The `min_weight_fraction_leaf` parameter of tree-based classifiers and regressors now assumes uniform sample weights by default if the `sample_weight` argument is not passed to the `fit` function. Previously, the parameter was silently ignored. #7301 by Nelson Liu.
- Tree splitting criterion classes' cloning/pickling is now memory safe. #7680 by Ibraim Ganiev.
Linear, kernelized and related models
- Length of `explained_variance_ratio` of `discriminant_analysis.LinearDiscriminantAnalysis` changed for both Eigen and SVD solvers. The attribute now has a length of min(n_components, n_classes - 1). #7632 by JPFrancoia.
- Numerical issue with `linear_model.RidgeCV` on centered data when `n_features > n_samples`. #6178 by Bertrand Thirion.
Version 0.18¶
September 28, 2016
Last release with Python 2.6 support
Scikit-learn 0.18 will be the last version of scikit-learn to support Python 2.6. Later versions of scikit-learn will require Python 2.7 or above.
Model Selection Enhancements and API Changes¶
The model_selection module
The new module `sklearn.model_selection`, which groups together the functionalities of formerly `sklearn.cross_validation`, `sklearn.grid_search` and `sklearn.learning_curve`, introduces new possibilities such as nested cross-validation and better manipulation of parameter searches with Pandas.
Many things will stay the same but there are some key differences. Read below to know more about the changes.
Data-independent CV splitters enabling nested cross-validation
The new cross-validation splitters, defined in `sklearn.model_selection`, are no longer initialized with any data-dependent parameters such as `y`. Instead they expose a `split` method that takes in the data and yields a generator for the different splits.
This change makes it possible to use the cross-validation splitters to perform nested cross-validation, facilitated by the `model_selection.GridSearchCV` and `model_selection.RandomizedSearchCV` utilities.
The enhanced cv_results_ attribute
The new `cv_results_` attribute (of `model_selection.GridSearchCV` and `model_selection.RandomizedSearchCV`), introduced in lieu of the `grid_scores_` attribute, is a dict of 1D arrays with elements in each array corresponding to the parameter settings (i.e. search candidates).
The `cv_results_` dict can be easily imported into `pandas` as a `DataFrame` for exploring the search results.
The `cv_results_` arrays include scores for each cross-validation split (with keys such as `'split0_test_score'`), as well as their mean (`'mean_test_score'`) and standard deviation (`'std_test_score'`).
The ranks for the search candidates (based on their mean cross-validation score) are available at `cv_results_['rank_test_score']`.
The parameter values for each parameter are stored separately as numpy masked object arrays. The value, for that search candidate, is masked if the corresponding parameter is not applicable. Additionally, a list of all the parameter dicts is stored at `cv_results_['params']`.
Parameters n_folds and n_iter renamed to n_splits
Some parameter names have changed: The `n_folds` parameter in the new `model_selection.KFold`, `model_selection.GroupKFold` (see below for the name change), and `model_selection.StratifiedKFold` is now renamed to `n_splits`. The `n_iter` parameter in `model_selection.ShuffleSplit`, the new class `model_selection.GroupShuffleSplit` and `model_selection.StratifiedShuffleSplit` is now renamed to `n_splits`.
Rename of splitter classes which accept group labels along with data
The cross-validation splitters `LabelKFold`, `LabelShuffleSplit`, `LeaveOneLabelOut` and `LeavePLabelOut` have been renamed to `model_selection.GroupKFold`, `model_selection.GroupShuffleSplit`, `model_selection.LeaveOneGroupOut` and `model_selection.LeavePGroupsOut` respectively.
Note the change from singular to plural form in `model_selection.LeavePGroupsOut`.
Fit parameter labels renamed to groups
The `labels` parameter in the `split` method of the newly renamed splitters `model_selection.GroupKFold`, `model_selection.LeaveOneGroupOut`, `model_selection.LeavePGroupsOut` and `model_selection.GroupShuffleSplit` is renamed to `groups`, following the new nomenclature of their class names.
Parameter n_labels renamed to n_groups
The parameter `n_labels` in the newly renamed `model_selection.LeavePGroupsOut` is changed to `n_groups`.
Training scores and Timing information
`cv_results_` also includes the training scores for each cross-validation split (with keys such as `'split0_train_score'`), as well as their mean (`'mean_train_score'`) and standard deviation (`'std_train_score'`). To avoid the cost of evaluating training scores, set `return_train_score=False`.
Additionally, the mean and standard deviation of the times taken to split, train and score the model across all the cross-validation splits are available at the keys `'mean_time'` and `'std_time'` respectively.
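A minimal sketch (illustrative estimator and grid) of exploring the new attribute with pandas:

```python
# Minimal sketch: cv_results_ is a dict of 1D arrays, one entry per candidate.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {'C': [0.1, 1, 10]}).fit(X, y)
df = pd.DataFrame(search.cv_results_)
print(df[['param_C', 'mean_test_score', 'rank_test_score']])
```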
Changelog¶
New features¶
Classifiers and Regressors
- The Gaussian Process module has been reimplemented and now offers classification and regression estimators through `gaussian_process.GaussianProcessClassifier` and `gaussian_process.GaussianProcessRegressor`. Among other things, the new implementation supports kernel engineering, gradient-based hyperparameter optimization and sampling of functions from the GP prior and GP posterior. Extensive documentation and examples are provided. By Jan Hendrik Metzen.
- Added new supervised learning algorithm: Multi-layer Perceptron. #3204 by Issam H. Laradji.
- Added `linear_model.HuberRegressor`, a linear model robust to outliers. #5291 by Manoj Kumar.
- Added the `multioutput.MultiOutputRegressor` meta-estimator. It converts single output regressors to multi-output regressors by fitting one regressor per output. By Tim Head.
Other estimators
- New `mixture.GaussianMixture` and `mixture.BayesianGaussianMixture` replace former mixture models, employing faster inference for sounder results. #7295 by Wei Xue and Thierry Guillemot.
- Class `decomposition.RandomizedPCA` is now factored into `decomposition.PCA` and is available by calling with parameter `svd_solver='randomized'`. The default number of `n_iter` for `'randomized'` has changed to 4. The old behavior of PCA is recovered by `svd_solver='full'`. An additional solver calls `arpack` and performs truncated (non-randomized) SVD. By default, the best solver is selected depending on the size of the input and the number of components requested. #5299 by Giorgio Patrini.
- Added two functions for mutual information estimation: `feature_selection.mutual_info_classif` and `feature_selection.mutual_info_regression`. These functions can be used in `feature_selection.SelectKBest` and `feature_selection.SelectPercentile` as score functions. By Andrea Bravi and Nikolay Mayorov.
- Added the `ensemble.IsolationForest` class for anomaly detection based on random forests. By Nicolas Goix.
- Added `algorithm="elkan"` to `cluster.KMeans` implementing Elkan's fast K-Means algorithm. By Andreas Müller.
Model selection and evaluation
- Added `metrics.cluster.fowlkes_mallows_score`, the Fowlkes-Mallows Index which measures the similarity of two clusterings of a set of points. By Arnaud Fouchet and Thierry Guillemot.
- Added `metrics.calinski_harabaz_score`, which computes the Calinski and Harabaz score to evaluate the resulting clustering of a set of points. By Arnaud Fouchet and Thierry Guillemot.
- Added new cross-validation splitter `model_selection.TimeSeriesSplit` to handle time series data (see the sketch after this list). #6586 by YenChen Lin.
- The cross-validation iterators are replaced by cross-validation splitters available from `sklearn.model_selection`, allowing for nested cross-validation. See Model Selection Enhancements and API Changes for more information. #4294 by Raghav RV.
Enhancements¶
Trees and ensembles
- Added a new splitting criterion for `tree.DecisionTreeRegressor`, the mean absolute error. This criterion can also be used in `ensemble.ExtraTreesRegressor`, `ensemble.RandomForestRegressor`, and the gradient boosting estimators. #6667 by Nelson Liu.
- Added weighted impurity-based early stopping criterion for decision tree growth. #6954 by Nelson Liu.
- The random forest, extra tree and decision tree estimators now have a method `decision_path` which returns the decision path of samples in the tree. By Arnaud Joly.
- A new example has been added unveiling the decision tree structure. By Arnaud Joly.
- Random forest, extra trees, decision trees and gradient boosting estimators accept the parameters `min_samples_split` and `min_samples_leaf` provided as a percentage of the training samples. By yelite and Arnaud Joly.
- Gradient boosting estimators accept the parameter `criterion` to specify the splitting criterion used in built decision trees. #6667 by Nelson Liu.
- The memory footprint is reduced (sometimes greatly) for `ensemble.bagging.BaseBagging` and classes that inherit from it, i.e., `ensemble.BaggingClassifier`, `ensemble.BaggingRegressor`, and `ensemble.IsolationForest`, by dynamically generating the attribute `estimators_samples_` only when it is needed. By David Staub.
- Added `n_jobs` and `sample_weight` parameters for `ensemble.VotingClassifier` to fit underlying estimators in parallel. #5805 by Ibraim Ganiev.
Linear, kernelized and related models
- In `linear_model.LogisticRegression`, the SAG solver is now available in the multinomial case. #5251 by Tom Dupre la Tour.
- `linear_model.RANSACRegressor`, `svm.LinearSVC` and `svm.LinearSVR` now support `sample_weight`. By Imaculate.
- Added parameter `loss` to `linear_model.RANSACRegressor` to measure the error on the samples for every trial. By Manoj Kumar.
- Prediction of out-of-sample events with Isotonic Regression (`isotonic.IsotonicRegression`) is now much faster (over 1000x in tests with synthetic data). By Jonathan Arfa.
- Isotonic regression (`isotonic.IsotonicRegression`) now uses a better algorithm to avoid O(n^2) behavior in pathological cases, and is also generally faster (#6691). By Antony Lee.
- `naive_bayes.GaussianNB` now accepts data-independent class-priors through the parameter `priors`. By Guillaume Lemaitre.
- `linear_model.ElasticNet` and `linear_model.Lasso` now work with `np.float32` input data without converting it into `np.float64`. This allows reduced memory consumption. #6913 by YenChen Lin.
- `semi_supervised.LabelPropagation` and `semi_supervised.LabelSpreading` now accept arbitrary kernel functions in addition to the strings `knn` and `rbf`. #5762 by Utkarsh Upadhyay.
Decomposition, manifold learning and clustering
- Added `inverse_transform` function to `decomposition.NMF` to compute the data matrix of original shape. By Anish Shah.
- `cluster.KMeans` and `cluster.MiniBatchKMeans` now work with `np.float32` and `np.float64` input data without converting it. This allows reduced memory consumption by using `np.float32`. #6846 by Sebastian Säger and YenChen Lin.
Preprocessing and feature selection
- `preprocessing.RobustScaler` now accepts a `quantile_range` parameter. #5929 by Konstantin Podshumok.
- `feature_extraction.FeatureHasher` now accepts string values. #6173 by Ryad Zenine and Devashish Deshpande.
- Keyword arguments can now be supplied to `func` in `preprocessing.FunctionTransformer` by means of the `kw_args` parameter (see the sketch after this list). By Brian McFee.
- `feature_selection.SelectKBest` and `feature_selection.SelectPercentile` now accept score functions that take X, y as input and return only the scores. By Nikolay Mayorov.
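A minimal sketch of forwarding keyword arguments through kw_args (scale_and_shift is a hypothetical helper, not a scikit-learn function):

```python
# Minimal sketch: kw_args forwards extra keyword arguments to func.
import numpy as np
from sklearn.preprocessing import FunctionTransformer

def scale_and_shift(X, factor=1.0, offset=0.0):  # hypothetical helper
    return X * factor + offset

tf = FunctionTransformer(scale_and_shift,
                         kw_args={'factor': 2.0, 'offset': 1.0})
print(tf.fit_transform(np.array([[1.0], [2.0]])))  # [[3.], [5.]]
```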
Model evaluation and meta-estimators
- `multiclass.OneVsOneClassifier` and `multiclass.OneVsRestClassifier` now support `partial_fit`. By Asish Panda and Philipp Dowling.
- Added support for substituting or disabling `pipeline.Pipeline` and `pipeline.FeatureUnion` components using the `set_params` interface that powers `sklearn.grid_search`. See Selecting dimensionality reduction with Pipeline and GridSearchCV. By Joel Nothman and Robert McGibbon.
- The new `cv_results_` attribute of `model_selection.GridSearchCV` (and `model_selection.RandomizedSearchCV`) can be easily imported into pandas as a `DataFrame`. Ref Model Selection Enhancements and API Changes for more information. #6697 by Raghav RV.
- Generalization of `model_selection.cross_val_predict`: one can pass method names such as `predict_proba` to be used in the cross validation framework instead of the default `predict` (see the sketch after this list). By Ori Ziv and Sears Merritt.
- The training scores and time taken for training followed by scoring for each search candidate are now available in the `cv_results_` dict. See Model Selection Enhancements and API Changes for more information. #7325 by Eugene Chen and Raghav RV.
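A minimal sketch of the generalized cross_val_predict (the estimator choice is illustrative):

```python
# Minimal sketch: collect out-of-fold probability estimates instead of labels.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
proba = cross_val_predict(LogisticRegression(), X, y, method='predict_proba')
print(proba.shape)  # (n_samples, n_classes)
```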
Metrics
- Added `labels` flag to `metrics.log_loss` to explicitly provide the labels when the number of classes in `y_true` and `y_pred` differ. #7239 by Hong Guangguo with help from Mads Jensen and Nelson Liu.
- Support sparse contingency matrices in cluster evaluation (`metrics.cluster.supervised`) to scale to a large number of clusters. #7419 by Gregory Stupp and Joel Nothman.
- Added `sample_weight` parameter to `metrics.matthews_corrcoef`. By Jatin Shah and Raghav RV.
- Sped up `metrics.silhouette_score` by using vectorized operations. By Manoj Kumar.
- Added `sample_weight` parameter to `metrics.confusion_matrix`. By Bernardo Stein.
Miscellaneous
- Added `n_jobs` parameter to `feature_selection.RFECV` to compute the score on the test folds in parallel. By Manoj Kumar.
- Codebase does not contain C/C++ cython generated files: they are generated during build. Distribution packages will still contain generated C/C++ files. By Arthur Mensch.
- Reduced the memory usage for 32-bit float input arrays of `utils.sparse_func.mean_variance_axis` and `utils.sparse_func.incr_mean_variance_axis` by supporting cython fused types. By YenChen Lin.
- `ignore_warnings` now accepts a category argument to ignore only the warnings of a specified type. By Thierry Guillemot.
- Added parameter `return_X_y` and return type `(data, target) : tuple` option to `load_iris` dataset #7049, `load_breast_cancer` dataset #7152, `load_digits` dataset, `load_diabetes` dataset, `load_linnerud` dataset, `load_boston` dataset #7154 by Manvendra Singh.
- Simplification of the `clone` function; deprecated support for estimators that modify parameters in `__init__`. #5540 by Andreas Müller.
- When unpickling a scikit-learn estimator in a different version than the one the estimator was trained with, a `UserWarning` is raised; see the documentation on model persistence for more details. (#7248) By Andreas Müller.
Bug fixes¶
Trees and ensembles
- Random forest, extra trees, decision trees and gradient boosting won't accept anymore `min_samples_split=1`, as at least 2 samples are required to split a decision tree node. By Arnaud Joly.
- `ensemble.VotingClassifier` now raises `NotFittedError` if `predict`, `transform` or `predict_proba` are called on the non-fitted estimator. By Sebastian Raschka.
- Fix bug where `ensemble.AdaBoostClassifier` and `ensemble.AdaBoostRegressor` would perform poorly if the `random_state` was fixed (#7411). By Joel Nothman.
- Fix bug in ensembles with randomization where the ensemble would not set `random_state` on base estimators in a pipeline or similar nesting (#7411). Note, results for `ensemble.BaggingClassifier`, `ensemble.BaggingRegressor`, `ensemble.AdaBoostClassifier` and `ensemble.AdaBoostRegressor` will now differ from previous versions. By Joel Nothman.
Linear, kernelized and related models
- Fixed incorrect gradient computation for loss='squared_epsilon_insensitive' in linear_model.SGDClassifier and linear_model.SGDRegressor (#6764). By Wenhua Yang.
- Fix bug in linear_model.LogisticRegressionCV where solver='liblinear' did not accept class_weights='balanced' (#6817). By Tom Dupre la Tour.
- Fix bug in neighbors.RadiusNeighborsClassifier where an error occurred when there were outliers being labelled and a weight function was specified (#6902). By LeonieBorne.
- Fix linear_model.ElasticNet sparse decision function to match output with dense in the multioutput case.
Decomposition, manifold learning and clustering
- decomposition.RandomizedPCA default number of iterated_power is 4 instead of 3. #5141 by Giorgio Patrini.
- utils.extmath.randomized_svd performs 4 power iterations by default, instead of 0. In practice this is enough for obtaining a good approximation of the true eigenvalues/vectors in the presence of noise. When n_components is small (< .1 * min(X.shape)), n_iter is set to 7, unless the user specifies a higher number. This improves precision with few components. #5299 by Giorgio Patrini.
- Whiten/non-whiten inconsistency between components of decomposition.PCA and decomposition.RandomizedPCA (now factored into PCA, see the New features) is fixed. components_ are stored with no whitening. #5299 by Giorgio Patrini.
- Fixed bug in manifold.spectral_embedding where the diagonal of the unnormalized Laplacian matrix was incorrectly set to 1. #4995 by Peter Fischer.
- Fixed incorrect initialization of utils.arpack.eigsh on all occurrences. Affects cluster.bicluster.SpectralBiclustering, decomposition.KernelPCA, manifold.LocallyLinearEmbedding, and manifold.SpectralEmbedding (#5012). By Peter Fischer.
- Attribute explained_variance_ratio_ calculated with the SVD solver of discriminant_analysis.LinearDiscriminantAnalysis now returns correct results. By JPFrancoia.
Preprocessing and feature selection
- preprocessing.data._transform_selected now always passes a copy of X to the transform function when copy=True (#7194). By Caio Oliveira.
Model evaluation and meta-estimators
- model_selection.StratifiedKFold now raises an error if all n_labels for individual classes are less than n_folds. #6182 by Devashish Deshpande.
- Fixed bug in model_selection.StratifiedShuffleSplit where train and test samples could overlap in some edge cases; see #6121 for more details. By Loic Esteve.
- Fix in sklearn.model_selection.StratifiedShuffleSplit to return splits of size train_size and test_size in all cases (#6472). By Andreas Müller.
- Cross-validation of OneVsOneClassifier and OneVsRestClassifier now works with precomputed kernels. #7350 by Russell Smith.
- Fix incomplete predict_proba method delegation from model_selection.GridSearchCV to linear_model.SGDClassifier (#7159). By Yichuan Liu.
Metrics
- Fix bug in metrics.silhouette_score in which clusters of size 1 were incorrectly scored. They should get a score of 0. By Joel Nothman.
- Fix bug in metrics.silhouette_samples so that it now works with arbitrary labels, not just those ranging from 0 to n_clusters - 1.
- Fix bug where expected and adjusted mutual information were incorrect if cluster contingency cells exceeded 2**16. By Joel Nothman.
- metrics.pairwise.pairwise_distances now converts arrays to boolean arrays when required in scipy.spatial.distance. #5460 by Tom Dupre la Tour.
- Fix sparse input support in metrics.silhouette_score as well as the example examples/text/document_clustering.py. By YenChen Lin.
- metrics.roc_curve and metrics.precision_recall_curve no longer round y_score values when creating ROC curves; this was causing problems for users with very small differences in scores (#7353).
Miscellaneous
- model_selection.tests._search._check_param_grid now works correctly with all types that extend/implement Sequence (except string), including range (Python 3.x) and xrange (Python 2.x). #7323 by Viacheslav Kovalevskyi.
- utils.extmath.randomized_range_finder is more numerically stable when many power iterations are requested, since it applies LU normalization by default. If n_iter < 2, numerical issues are unlikely, thus no normalization is applied. Other normalization options are available: 'none', 'LU' and 'QR'. #5141 by Giorgio Patrini.
- Fix a bug where some formats of scipy.sparse matrix, and estimators with them as parameters, could not be passed to base.clone. By Loic Esteve.
- datasets.load_svmlight_file is now able to read long int QID values. #7101 by Ibraim Ganiev.
API changes summary¶
Linear, kernelized and related models
- residual_metric has been deprecated in linear_model.RANSACRegressor. Use loss instead. By Manoj Kumar.
- Access to public attributes .X_ and .y_ has been deprecated in isotonic.IsotonicRegression. By Jonathan Arfa.
Decomposition, manifold learning and clustering
- The old mixture.DPGMM is deprecated in favor of the new mixture.BayesianGaussianMixture (with the parameter weight_concentration_prior_type='dirichlet_process'). The new class solves the computational problems of the old class and computes the Gaussian mixture with a Dirichlet process prior faster than before. #7295 by Wei Xue and Thierry Guillemot.
- The old mixture.VBGMM is deprecated in favor of the new mixture.BayesianGaussianMixture (with the parameter weight_concentration_prior_type='dirichlet_distribution'). The new class solves the computational problems of the old class and computes the Variational Bayesian Gaussian mixture faster than before. #6651 by Wei Xue and Thierry Guillemot.
- The old mixture.GMM is deprecated in favor of the new mixture.GaussianMixture; see the sketch after this list. The new class computes the Gaussian mixture faster than before and some of the computational problems have been solved. #6666 by Wei Xue and Thierry Guillemot.
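A minimal sketch of the replacement class (the synthetic two-blob data is illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# GaussianMixture replaces the deprecated mixture.GMM; n_components and
# covariance_type keep their old meaning.
gm = GaussianMixture(n_components=2, covariance_type="full").fit(X)
print(gm.means_)
print(gm.predict(X[:5]))
```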
Model evaluation and meta-estimators
- The sklearn.cross_validation, sklearn.grid_search and sklearn.learning_curve modules have been deprecated and their classes and functions have been reorganized into the sklearn.model_selection module. Ref Model Selection Enhancements and API Changes for more information. #4294 by Raghav RV.
- The grid_scores_ attribute of model_selection.GridSearchCV and model_selection.RandomizedSearchCV is deprecated in favor of the attribute cv_results_. Ref Model Selection Enhancements and API Changes for more information. #6697 by Raghav RV.
- The parameters n_iter or n_folds in old CV splitters are replaced by the new parameter n_splits, since it provides a consistent and unambiguous interface to represent the number of train-test splits. #7187 by YenChen Lin.
- The classes parameter was renamed to labels in metrics.hamming_loss. #7260 by Sebastián Vanrell.
- The splitter classes LabelKFold, LabelShuffleSplit, LeaveOneLabelOut and LeavePLabelsOut are renamed to model_selection.GroupKFold, model_selection.GroupShuffleSplit, model_selection.LeaveOneGroupOut and model_selection.LeavePGroupsOut respectively. Also the parameter labels in the split method of the newly renamed splitters model_selection.LeaveOneGroupOut and model_selection.LeavePGroupsOut is renamed to groups. Additionally in model_selection.LeavePGroupsOut, the parameter n_labels is renamed to n_groups. #6660 by Raghav RV.
- Error and loss names for scoring parameters are now prefixed by 'neg_', such as neg_mean_squared_error; see the sketch after this list. The unprefixed versions are deprecated and will be removed in version 0.20. #7261 by Tim Head.
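A minimal sketch of the 'neg_' scoring convention (the Ridge/diabetes pairing is illustrative):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
# Scorers follow a "greater is better" convention, so error metrics are
# negated and exposed under the 'neg_' prefix.
scores = cross_val_score(Ridge(), X, y, scoring="neg_mean_squared_error", cv=5)
print(-scores.mean())  # flip the sign to recover the mean squared error
```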
Code Contributors¶
Aditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander Minyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre Gramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar, Andreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew Murray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud Rachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo, Bernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon Carter, Brett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing, Cass, CeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin Ni, Dan Shiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David Staub, David Thaler, David Warshaw, Davide Lasagna, Deborah, definitelyuncertain, Didi Bar-Zev, djipey, dsquareindia, edwinENSAE, Elias Kuthe, Elvis DOHMATOB, Ethan White, Fabian Pedregosa, Fabio Ticconi, fisache, Florian Wilhelm, Francis, Francis O’Donovan, Gael Varoquaux, Ganiev Ibraim, ghg, Gilles Louppe, Giorgio Patrini, Giovanni Cherubin, Giovanni Lanzani, Glenn Qian, Gordon Mohr, govin-vatsan, Graham Clenaghan, Greg Reda, Greg Stupp, Guillaume Lemaitre, Gustav Mörtberg, halwai, Harizo Rajaona, Harry Mavroforakis, hashcode55, hdmetor, Henry Lin, Hobson Lane, Hugo Bowne-Anderson, Igor Andriushchenko, Imaculate, Inki Hwang, Isaac Sijaranamual, Ishank Gulati, Issam Laradji, Iver Jordal, jackmartin, Jacob Schreiber, Jake Vanderplas, James Fiedler, James Routley, Jan Zikes, Janna Brettingen, jarfa, Jason Laska, jblackburne, jeff levesque, Jeffrey Blackburne, Jeffrey04, Jeremy Hintz, jeremynixon, Jeroen, Jessica Yung, Jill-Jênn Vie, Jimmy Jia, Jiyuan Qian, Joel Nothman, johannah, John, John Boersma, John Kirkham, John Moeller, jonathan.striebel, joncrall, Jordi, Joseph Munoz, Joshua Cook, JPFrancoia, jrfiedler, JulianKahnert, juliathebrave, kaichogami, KamalakerDadi, Kenneth Lyons, Kevin Wang, kingjr, kjell, Konstantin Podshumok, Kornel Kielczewski, Krishna Kalyan, krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck, ldavid, LeiG, LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson, lizsz, Loic Esteve, Louis Tiao, Léonie Borne, Mads Jensen, Maniteja Nandana, Manoj Kumar, Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec, Martin Madsen, MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel, Mathieu Dubois, Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki ariga, Mikhail Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya-p, Naoya Kanai, Nate George, Nelle Varoquaux, Nelson Liu, Nick James, NickleDave, Nico, Nicolas Goix, Nikolay Mayorov, ningchi, nlathia, okbalefthanded, Okhlopkov, Olivier Grisel, Panos Louridas, Paul Strickland, Perrine Letellier, pestrickland, Peter Fischer, Pieter, Ping-Yao, Chang, practicalswift, Preston Parry, Qimu Zheng, Rachit Kansal, Raghav RV, Ralf Gommers, Ramana.S, Rammig, Randy Olson, Rob Alexander, Robert Lutz, Robin Schucker, Rohan Jain, Ruifeng Zheng, Ryan Yu, Rémy Léone, saihttam, Saiwing Yeung, Sam Shleifer, Samuel St-Jean, Sartaj Singh, Sasank Chilamkurthy, saurabh.bansod, Scott Andrews, Scott Lowe, seales, Sebastian Raschka, Sebastian Saeger, Sebastián Vanrell, Sergei Lebedev, shagun Sodhani, shanmuga cv, Shashank Shekhar, shawpan, shengxiduan, Shota, shuckle16, Skipper Seabold, sklearn-ci, SmedbergM, srvanrell, Sébastien Lerique, Taranjeet, themrmax, Thierry, Thierry Guillemot, Thomas, Thomas Hallock, Thomas Moreau, Tim Head, tKammy, toastedcornflakes, Tom, TomDLT, Toshihiro 
Kamishima, tracer0tong, Trent Hauck, trevorstephens, Tue Vo, Varun, Varun Jewalikar, Viacheslav, Vighnesh Birodkar, Vikram, Villu Ruusmann, Vinayak Mehta, walter, waterponey, Wenhua Yang, Wenjian Huang, Will Welch, wyseguy7, xyguo, yanlend, Yaroslav Halchenko, yelite, Yen, YenChenLin, Yichuan Liu, Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, Óscar Nájera
Version 0.17.1¶
February 18, 2016
Changelog¶
Bug fixes¶
- Upgrade vendored joblib to version 0.9.4, which fixes an important bug in joblib.Parallel that can silently yield wrong results when working on datasets larger than 1MB: https://github.com/joblib/joblib/blob/0.9.4/CHANGES.rst
- Fixed reading of Bunch pickles generated with scikit-learn version <= 0.16. This can affect users who have already downloaded a dataset with scikit-learn 0.16 and are loading it with scikit-learn 0.17. See #6196 for how this affected datasets.fetch_20newsgroups. By Loic Esteve.
- Fixed a bug that prevented using ROC AUC score to perform grid search on several CPUs/cores on large arrays. See #6147. By Olivier Grisel.
- Fixed a bug that prevented properly setting the presort parameter in ensemble.GradientBoostingRegressor. See #5857. By Andrew McCulloh.
- Fixed a joblib error when evaluating the perplexity of a decomposition.LatentDirichletAllocation model. See #6258. By Chyi-Kwei Yau.
Version 0.17¶
November 5, 2015
Changelog¶
New features¶
- All the Scaler classes but preprocessing.RobustScaler can be fitted online by calling partial_fit. By Giorgio Patrini.
- The new class ensemble.VotingClassifier implements a "majority rule" / "soft voting" ensemble classifier to combine estimators for classification; see the sketch after this list. By Sebastian Raschka.
- The new class preprocessing.RobustScaler provides an alternative to preprocessing.StandardScaler for feature-wise centering and range normalization that is robust to outliers. By Thomas Unterthiner.
- The new class preprocessing.MaxAbsScaler provides an alternative to preprocessing.MinMaxScaler for feature-wise range normalization when the data is already centered or sparse. By Thomas Unterthiner.
- The new class preprocessing.FunctionTransformer turns a Python function into a Pipeline-compatible transformer object. By Joe Jevnik.
- The new classes cross_validation.LabelKFold and cross_validation.LabelShuffleSplit generate train-test folds, respectively similar to cross_validation.KFold and cross_validation.ShuffleSplit, except that the folds are conditioned on a label array. By Brian McFee, Jean Kossaifi and Gilles Louppe.
- decomposition.LatentDirichletAllocation implements the Latent Dirichlet Allocation topic model with online variational inference. By Chyi-Kwei Yau, with code based on an implementation by Matt Hoffman. (#3659)
- The new solver sag implements a Stochastic Average Gradient descent and is available in both linear_model.LogisticRegression and linear_model.Ridge. This solver is very efficient for large datasets. By Danny Sullivan and Tom Dupre la Tour. (#4738)
- The new solver cd implements Coordinate Descent in decomposition.NMF. The previous solver based on Projected Gradient is still available by setting the new parameter solver to pg, but is deprecated and will be removed in 0.19, along with decomposition.ProjectedGradientNMF and the parameters sparseness, eta, beta and nls_max_iter. New parameters alpha and l1_ratio control L1 and L2 regularization, and shuffle adds a shuffling step in the cd solver. By Tom Dupre la Tour and Mathieu Blondel.
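A minimal sketch of the new VotingClassifier (the three base estimators are an arbitrary illustrative mix):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# voting='hard' takes the majority class vote; voting='soft' would average
# the predicted probabilities instead.
clf = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("dt", DecisionTreeClassifier()),
                ("nb", GaussianNB())],
    voting="hard",
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```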
Enhancements¶
- manifold.TSNE now supports approximate optimization via the Barnes-Hut method, leading to much faster fitting. By Christopher Erick Moody. (#4025)
- cluster.mean_shift_.MeanShift now supports parallel execution, as implemented in the mean_shift function. By Martino Sorbaro.
- naive_bayes.GaussianNB now supports fitting with sample_weight. By Jan Hendrik Metzen.
- dummy.DummyClassifier now supports a prior fitting strategy. By Arnaud Joly.
- Added a fit_predict method to mixture.GMM and subclasses. By Cory Lorenz.
- Added the metrics.label_ranking_loss metric. By Arnaud Joly.
- Added the metrics.cohen_kappa_score metric.
- Added a warm_start constructor parameter to the bagging ensemble models to increase the size of the ensemble. By Tim Head.
- Added option to use multi-output regression metrics without averaging. By Konstantin Shmelkov and Michael Eickenberg.
- Added stratify option to cross_validation.train_test_split for stratified splitting; see the sketch after this item. By Miroslav Batchkarov.
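A minimal sketch of the stratify option (the tiny imbalanced arrays are illustrative):

```python
import numpy as np
from sklearn.cross_validation import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)  # imbalanced labels

# stratify=y keeps the 80/20 class ratio in both halves of the split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
print(np.bincount(y_train), np.bincount(y_test))
```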
- The tree.export_graphviz function now supports aesthetic improvements for tree.DecisionTreeClassifier and tree.DecisionTreeRegressor, including options for coloring nodes by their majority class or impurity, showing variable names, and using node proportions instead of raw sample counts. By Trevor Stephens.
- Improved speed of the newton-cg solver in linear_model.LogisticRegression, by avoiding loss computation. By Mathieu Blondel and Tom Dupre la Tour.
- The class_weight="auto" heuristic in classifiers supporting class_weight was deprecated and replaced by the class_weight="balanced" option, which has a simpler formula and interpretation. By Hanna Wallach and Andreas Müller.
- Add class_weight parameter to automatically weight samples by class frequency for linear_model.PassiveAgressiveClassifier. By Trevor Stephens.
- Added backlinks from the API reference pages to the user guide. By Andreas Müller.
- The labels parameter to sklearn.metrics.f1_score, sklearn.metrics.fbeta_score, sklearn.metrics.recall_score and sklearn.metrics.precision_score has been extended. It is now possible to ignore one or more labels, such as where a multiclass problem has a majority class to ignore. By Joel Nothman.
- Add sample_weight support to linear_model.RidgeClassifier. By Trevor Stephens.
- Provide an option for sparse output from sklearn.metrics.pairwise.cosine_similarity. By Jaidev Deshpande.
- Add minmax_scale to provide a function interface for MinMaxScaler. By Thomas Unterthiner.
- dump_svmlight_file now handles multi-label datasets. By Chih-Wei Chang.
- RCV1 dataset loader (sklearn.datasets.fetch_rcv1). By Tom Dupre la Tour.
- The "Wisconsin Breast Cancer" classical two-class classification dataset is now included in scikit-learn, available with sklearn.dataset.load_breast_cancer.
- Upgraded to joblib 0.9.3 to benefit from the new automatic batching of short tasks. This makes it possible for scikit-learn to benefit from parallelism when many very short tasks are executed in parallel, for instance by the grid_search.GridSearchCV meta-estimator with n_jobs > 1 used with a large grid of parameters on a small dataset. By Vlad Niculae, Olivier Grisel and Loic Esteve.
- For more details about changes in joblib 0.9.3 see the release notes: https://github.com/joblib/joblib/blob/master/CHANGES.rst#release-093
- Improved speed (3 times per iteration) of decomposition.DictLearning with the coordinate descent method from linear_model.Lasso. By Arthur Mensch.
- Parallel processing (threaded) for queries of nearest neighbors (using the ball-tree). By Nikolay Mayorov.
- Allow datasets.make_multilabel_classification to output a sparse y. By Kashif Rasul.
- cluster.DBSCAN now accepts a sparse matrix of precomputed distances, allowing memory-efficient distance precomputation. By Joel Nothman.
- tree.DecisionTreeClassifier now exposes an apply method for retrieving the leaf indices samples are predicted as. By Daniel Galvez and Gilles Louppe.
- Speed up decision tree regressors, random forest regressors, extra trees regressors and gradient boosting estimators by computing a proxy of the impurity improvement during the tree growth. The proxy quantity is such that the split that maximizes this value also maximizes the impurity improvement. By Arnaud Joly, Jacob Schreiber and Gilles Louppe.
- Speed up tree-based methods by reducing the number of computations needed when computing the impurity measure, taking into account linear relationships of the computed statistics. The effect is particularly visible with extra trees and on datasets with categorical or sparse features. By Arnaud Joly.
- ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier now expose an apply method for retrieving the leaf indices each sample ends up in under each tree. By Jacob Schreiber.
- Add sample_weight support to linear_model.LinearRegression. By Sonny Hu. (#4881)
- Add n_iter_without_progress to manifold.TSNE to control the stopping criterion. By Santi Villalba. (#5186)
- Added optional parameter random_state in linear_model.Ridge, to set the seed of the pseudo random generator used in the sag solver. By Tom Dupre la Tour.
- Added optional parameter warm_start in linear_model.LogisticRegression. If set to True, the solvers lbfgs, newton-cg and sag will be initialized with the coefficients computed in the previous fit. By Tom Dupre la Tour.
- Added sample_weight support to linear_model.LogisticRegression for the lbfgs, newton-cg, and sag solvers. By Valentin Stolbunov. Support added to the liblinear solver. By Manoj Kumar.
- Added optional parameter presort to ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier, keeping default behavior the same. This allows gradient boosters to turn off presorting when building deep trees or using sparse data. By Jacob Schreiber.
- Altered metrics.roc_curve to drop unnecessary thresholds by default. By Graham Clenaghan.
- Added feature_selection.SelectFromModel meta-transformer which can be used along with estimators that have a coef_ or feature_importances_ attribute to select important features of the input data; see the sketch after this list. By Maheshakya Wijewardena, Joel Nothman and Manoj Kumar.
- Added metrics.pairwise.laplacian_kernel. By Clyde Fare.
- covariance.GraphLasso allows separate control of the convergence criterion for the Elastic-Net subproblem via the enet_tol parameter.
- Improved verbosity in decomposition.DictionaryLearning.
- ensemble.RandomForestClassifier and ensemble.RandomForestRegressor no longer explicitly store the samples used in bagging, resulting in a much reduced memory footprint for storing random forest models.
- Added positive option to linear_model.Lars and linear_model.lars_path to force coefficients to be positive. (#5131)
- Added the X_norm_squared parameter to metrics.pairwise.euclidean_distances to provide precomputed squared norms for X.
- Added the fit_predict method to pipeline.Pipeline.
- Added the preprocessing.min_max_scale function.
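A minimal sketch of SelectFromModel (the L1-penalised logistic regression and its C value are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target

# Any estimator exposing coef_ or feature_importances_ can drive the
# selection; with an L1 penalty, near-zero coefficients drop features.
selector = SelectFromModel(LogisticRegression(penalty="l1", C=0.1))
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)
```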
Bug fixes¶
- Fixed non-determinism in dummy.DummyClassifier with sparse multi-label output. By Andreas Müller.
- Fixed the output shape of linear_model.RANSACRegressor to (n_samples, ). By Andreas Müller.
- Fixed bug in decomposition.DictLearning when n_jobs < 0. By Andreas Müller.
- Fixed bug where grid_search.RandomizedSearchCV could consume a lot of memory for large discrete grids. By Joel Nothman.
- Fixed bug in linear_model.LogisticRegressionCV where penalty was ignored in the final fit. By Manoj Kumar.
- Fixed bug in ensemble.forest.ForestClassifier while computing oob_score when X is a sparse.csc_matrix. By Ankur Ankan.
- All regressors now consistently handle and warn when given y that is of shape (n_samples, 1). By Andreas Müller and Henry Lin. (#5431)
- Fix in cluster.KMeans cluster reassignment for sparse input. By Lars Buitinck.
- Fixed a bug in lda.LDA that could cause asymmetric covariance matrices when using shrinkage. By Martin Billinger.
- Fixed cross_validation.cross_val_predict for estimators with sparse predictions. By Buddha Prakash.
- Fixed the predict_proba method of linear_model.LogisticRegression to use soft-max instead of one-vs-rest normalization. By Manoj Kumar. (#5182)
- Fixed the partial_fit method of linear_model.SGDClassifier when called with average=True. By Andrew Lamb. (#5282)
- Dataset fetchers use different filenames under Python 2 and Python 3 to avoid pickling compatibility issues. By Olivier Grisel. (#5355)
- Fixed a bug in naive_bayes.GaussianNB which caused classification results to depend on scale. By Jake Vanderplas.
- Fixed temporarily linear_model.Ridge, which was incorrect when fitting the intercept in the case of sparse data. The fix automatically changes the solver to 'sag' in this case. #5360 by Tom Dupre la Tour.
- Fixed a performance bug in decomposition.RandomizedPCA on data with a large number of features and fewer samples. (#4478) By Andreas Müller, Loic Esteve and Giorgio Patrini.
- Fixed bug in cross_decomposition.PLS that yielded unstable and platform-dependent output, and failed on fit_transform. By Arthur Mensch.
- Fixes to the Bunch class used to store datasets.
- Fixed ensemble.plot_partial_dependence ignoring the percentiles parameter.
- Providing a set as vocabulary in CountVectorizer no longer leads to inconsistent results when pickling.
- Fixed the conditions on when a precomputed Gram matrix needs to be recomputed in linear_model.LinearRegression, linear_model.OrthogonalMatchingPursuit, linear_model.Lasso and linear_model.ElasticNet.
- Fixed inconsistent memory layout in the coordinate descent solver that affected linear_model.DictionaryLearning and covariance.GraphLasso. (#5337) By Olivier Grisel.
- manifold.LocallyLinearEmbedding no longer ignores the reg parameter.
- Nearest Neighbor estimators with custom distance metrics can now be pickled. (#4362)
- Fixed a bug in pipeline.FeatureUnion where transformer_weights were not properly handled when performing grid-searches.
- Fixed a bug in linear_model.LogisticRegression and linear_model.LogisticRegressionCV when using class_weight='balanced' or class_weight='auto'. By Tom Dupre la Tour.
- Fixed bug #5495 when doing OVR(SVC(decision_function_shape="ovr")). Fixed by Elvis Dohmatob.
API changes summary¶
- Attributes data_min, data_max and data_range in preprocessing.MinMaxScaler are deprecated and won't be available from 0.19. Instead, the class now exposes data_min_, data_max_ and data_range_. By Giorgio Patrini.
- All Scaler classes now have a scale_ attribute, the feature-wise rescaling applied by their transform methods. The old attribute std_ in preprocessing.StandardScaler is deprecated and superseded by scale_; it won't be available in 0.19. By Giorgio Patrini.
- svm.SVC and svm.NuSVC now have a decision_function_shape parameter to make their decision function of shape (n_samples, n_classes) by setting decision_function_shape='ovr'. This will be the default behavior starting in 0.19. By Andreas Müller.
- Passing 1D data arrays as input to estimators is now deprecated as it caused confusion in how the array elements should be interpreted as features or as samples. All data arrays are now expected to be explicitly shaped (n_samples, n_features); see the sketch after this list. By Vighnesh Birodkar.
- lda.LDA and qda.QDA have been moved to discriminant_analysis.LinearDiscriminantAnalysis and discriminant_analysis.QuadraticDiscriminantAnalysis.
- The store_covariance and tol parameters have been moved from the fit method to the constructor in discriminant_analysis.LinearDiscriminantAnalysis, and the store_covariances and tol parameters have been moved from the fit method to the constructor in discriminant_analysis.QuadraticDiscriminantAnalysis.
- Models inheriting from _LearntSelectorMixin will no longer support the transform methods (i.e. RandomForests, GradientBoosting, LogisticRegression, DecisionTrees, SVMs and SGD-related models). Wrap these models around the meta-transformer feature_selection.SelectFromModel to remove features (according to coefs_ or feature_importances_) which are below a certain threshold value instead.
- cluster.KMeans re-runs cluster-assignments in case of non-convergence, to ensure consistency of predict(X) and labels_. By Vighnesh Birodkar.
- Classifier and Regressor models are now tagged as such using the _estimator_type attribute.
- Cross-validation iterators always provide indices into training and test set, not boolean masks.
- The decision_function on all regressors was deprecated and will be removed in 0.19. Use predict instead.
- datasets.load_lfw_pairs is deprecated and will be removed in 0.19. Use datasets.fetch_lfw_pairs instead.
- The deprecated hmm module was removed.
- The deprecated Bootstrap cross-validation iterator was removed.
- The deprecated Ward and WardAgglomerative classes have been removed. Use clustering.AgglomerativeClustering instead.
- cross_validation.check_cv is now a public function.
- The property residues_ of linear_model.LinearRegression is deprecated and will be removed in 0.19.
- The deprecated n_jobs parameter of linear_model.LinearRegression has been moved to the constructor.
- The deprecated class_weight parameter of linear_model.SGDClassifier's fit method was removed. Use the constructor parameter instead.
- The deprecated support for the sequence of sequences (or list of lists) multilabel format was removed. To convert to and from the supported binary indicator matrix format, use MultiLabelBinarizer.
- The behavior of calling the inverse_transform method of Pipeline.pipeline will change in 0.19. It will no longer reshape one-dimensional input to two-dimensional input.
- The deprecated attributes indicator_matrix_, multilabel_ and classes_ of preprocessing.LabelBinarizer were removed.
- Using gamma=0 in svm.SVC and svm.SVR to automatically set the gamma to 1. / n_features is deprecated and will be removed in 0.19. Use gamma="auto" instead.
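A minimal sketch of the now-required 2D shape (the toy regression data is illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

# A 1D array is ambiguous (one sample or one feature?). Reshaping makes
# the intent explicit: four samples of a single feature.
X = x.reshape(-1, 1)
print(LinearRegression().fit(X, y).coef_)
```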
Code Contributors¶
Aaron Schumacher, Adithya Ganesh, akitty, Alexandre Gramfort, Alexey Grigorev, Ali Baharev, Allen Riddell, Ando Saabas, Andreas Mueller, Andrew Lamb, Anish Shah, Ankur Ankan, Anthony Erlinger, Ari Rouvinen, Arnaud Joly, Arnaud Rachez, Arthur Mensch, banilo, Barmaley.exe, benjaminirving, Boyuan Deng, Brett Naul, Brian McFee, Buddha Prakash, Chi Zhang, Chih-Wei Chang, Christof Angermueller, Christoph Gohlke, Christophe Bourguignat, Christopher Erick Moody, Chyi-Kwei Yau, Cindy Sridharan, CJ Carey, Clyde-fare, Cory Lorenz, Dan Blanchard, Daniel Galvez, Daniel Kronovet, Danny Sullivan, Data1010, David, David D Lowe, David Dotson, djipey, Dmitry Spikhalskiy, Donne Martin, Dougal J. Sutherland, Dougal Sutherland, edson duarte, Eduardo Caro, Eric Larson, Eric Martin, Erich Schubert, Fernando Carrillo, Frank C. Eckert, Frank Zalkow, Gael Varoquaux, Ganiev Ibraim, Gilles Louppe, Giorgio Patrini, giorgiop, Graham Clenaghan, Gryllos Prokopis, gwulfs, Henry Lin, Hsuan-Tien Lin, Immanuel Bayer, Ishank Gulati, Jack Martin, Jacob Schreiber, Jaidev Deshpande, Jake Vanderplas, Jan Hendrik Metzen, Jean Kossaifi, Jeffrey04, Jeremy, jfraj, Jiali Mei, Joe Jevnik, Joel Nothman, John Kirkham, John Wittenauer, Joseph, Joshua Loyal, Jungkook Park, KamalakerDadi, Kashif Rasul, Keith Goodman, Kian Ho, Konstantin Shmelkov, Kyler Brown, Lars Buitinck, Lilian Besson, Loic Esteve, Louis Tiao, maheshakya, Maheshakya Wijewardena, Manoj Kumar, MarkTab marktab.net, Martin Ku, Martin Spacek, MartinBpr, martinosorb, MaryanMorel, Masafumi Oyamada, Mathieu Blondel, Matt Krump, Matti Lyra, Maxim Kolganov, mbillinger, mhg, Michael Heilman, Michael Patterson, Miroslav Batchkarov, Nelle Varoquaux, Nicolas, Nikolay Mayorov, Olivier Grisel, Omer Katz, Óscar Nájera, Pauli Virtanen, Peter Fischer, Peter Prettenhofer, Phil Roth, pianomania, Preston Parry, Raghav RV, Rob Zinkov, Robert Layton, Rohan Ramanath, Saket Choudhary, Sam Zhang, santi, saurabh.bansod, scls19fr, Sebastian Raschka, Sebastian Saeger, Shivan Sornarajah, SimonPL, sinhrks, Skipper Seabold, Sonny Hu, sseg, Stephen Hoover, Steven De Gryze, Steven Seguin, Theodore Vasiloudis, Thomas Unterthiner, Tiago Freitas Pereira, Tian Wang, Tim Head, Timothy Hopper, tokoroten, Tom Dupré la Tour, Trevor Stephens, Valentin Stolbunov, Vighnesh Birodkar, Vinayak Mehta, Vincent, Vincent Michel, vstolbunov, wangz10, Wei Xue, Yucheng Low, Yury Zhauniarovich, Zac Stewart, zhai_pro, Zichen Wang
Version 0.16.1¶
April 14, 2015
Changelog¶
Bug fixes¶
- Allow input data larger than block_size in covariance.LedoitWolf. By Andreas Müller.
- Fix a bug in isotonic.IsotonicRegression deduplication that caused unstable results in calibration.CalibratedClassifierCV. By Jan Hendrik Metzen.
- Fix sorting of labels in preprocessing.label_binarize. By Michael Heilman.
- Fix several stability and convergence issues in cross_decomposition.CCA and cross_decomposition.PLSCanonical. By Andreas Müller.
- Fix a bug in cluster.KMeans when precompute_distances=False on fortran-ordered data.
- Fix a speed regression in ensemble.RandomForestClassifier's predict and predict_proba. By Andreas Müller.
- Fix a regression where utils.shuffle converted lists and dataframes to arrays. By Olivier Grisel.
Version 0.16¶
March 26, 2015
Highlights¶
- Speed improvements (notably in cluster.DBSCAN), reduced memory requirements, bug-fixes and better default settings.
- Multinomial Logistic regression and a path algorithm in linear_model.LogisticRegressionCV.
- Out-of-core learning of PCA via decomposition.IncrementalPCA.
- Probability calibration of classifiers using calibration.CalibratedClassifierCV.
- cluster.Birch clustering method for large-scale datasets.
- Scalable approximate nearest neighbors search with locality-sensitive hashing forests in neighbors.LSHForest.
- Improved error messages and better validation when using malformed input data.
- More robust integration with pandas dataframes.
Changelog¶
New features¶
- The new neighbors.LSHForest implements locality-sensitive hashing for approximate nearest neighbors search. By Maheshakya Wijewardena.
- Added svm.LinearSVR. This class uses the liblinear implementation of Support Vector Regression, which is much faster for large sample sizes than svm.SVR with a linear kernel. By Fabian Pedregosa and Qiang Luo.
- Incremental fit for GaussianNB.
- Added sample_weight support to dummy.DummyClassifier and dummy.DummyRegressor. By Arnaud Joly.
- Added the metrics.label_ranking_average_precision_score metric. By Arnaud Joly.
- Added the metrics.coverage_error metric. By Arnaud Joly.
- Added linear_model.LogisticRegressionCV. By Manoj Kumar, Fabian Pedregosa, Gael Varoquaux and Alexandre Gramfort.
- Added warm_start constructor parameter to make it possible for any trained forest model to grow additional trees incrementally. By Laurent Direr.
- Added sample_weight support to ensemble.GradientBoostingClassifier and ensemble.GradientBoostingRegressor. By Peter Prettenhofer.
- Added decomposition.IncrementalPCA, an implementation of the PCA algorithm that supports out-of-core learning with a partial_fit method. By Kyle Kastner.
- Averaged SGD for SGDClassifier and SGDRegressor. By Danny Sullivan.
- Added the cross_val_predict function, which computes cross-validated estimates. By Luis Pedro Coelho.
- Added linear_model.TheilSenRegressor, a robust generalized-median-based estimator. By Florian Wilhelm.
- Added metrics.median_absolute_error, a robust metric. By Gael Varoquaux and Florian Wilhelm.
- Add cluster.Birch, an online clustering algorithm. By Manoj Kumar, Alexandre Gramfort and Joel Nothman.
- Added shrinkage support to discriminant_analysis.LinearDiscriminantAnalysis using two new solvers. By Clemens Brunner and Martin Billinger.
- Added kernel_ridge.KernelRidge, an implementation of kernelized ridge regression. By Mathieu Blondel and Jan Hendrik Metzen.
- All solvers in linear_model.Ridge now support sample_weight. By Mathieu Blondel.
- Added cross_validation.PredefinedSplit cross-validation for fixed user-provided cross-validation folds. By Thomas Unterthiner.
- Added calibration.CalibratedClassifierCV, an approach for calibrating the predicted probabilities of a classifier; see the sketch after this list. By Alexandre Gramfort, Jan Hendrik Metzen, Mathieu Blondel and Balazs Kegl.
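A minimal sketch of CalibratedClassifierCV (the LinearSVC base estimator and synthetic data are illustrative):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, random_state=0)

# Wraps a classifier and fits a sigmoid (Platt) calibrator on held-out
# folds; method='isotonic' is the non-parametric alternative.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
calibrated.fit(X, y)
print(calibrated.predict_proba(X[:3]))
```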
Enhancements¶
- Add option return_distance in hierarchical.ward_tree to return distances between nodes for both structured and unstructured versions of the algorithm. By Matteo Visconti di Oleggio Castello. The same option was added in hierarchical.linkage_tree. By Manoj Kumar.
- Add support for sample weights in scorer objects. Metrics with sample weight support will automatically benefit from it. By Noel Dawe and Vlad Niculae.
- Added newton-cg and lbfgs solver support in linear_model.LogisticRegression. By Manoj Kumar.
- Add selection="random" parameter to implement stochastic coordinate descent for linear_model.Lasso, linear_model.ElasticNet and related. By Manoj Kumar.
- Add sample_weight parameter to metrics.jaccard_similarity_score and metrics.log_loss. By Jatin Shah.
- Support sparse multilabel indicator representation in preprocessing.LabelBinarizer and multiclass.OneVsRestClassifier (by Hamzeh Alsalhi with thanks to Rohit Sivaprasad), as well as evaluation metrics (by Joel Nothman).
- Add sample_weight parameter to metrics.jaccard_similarity_score. By Jatin Shah.
- Add support for multiclass in metrics.hinge_loss. Added labels=None as optional parameter. By Saurabh Jha.
- Add sample_weight parameter to metrics.hinge_loss. By Saurabh Jha.
- Add multi_class="multinomial" option in linear_model.LogisticRegression to implement a Logistic Regression solver that minimizes the cross-entropy or multinomial loss instead of the default One-vs-Rest setting; see the sketch after this list. Supports lbfgs and newton-cg solvers. By Lars Buitinck and Manoj Kumar. Solver option newton-cg by Simon Wu.
- DictVectorizer can now perform fit_transform on an iterable in a single pass when given the option sort=False. By Dan Blanchard.
- GridSearchCV and RandomizedSearchCV can now be configured to work with estimators that may fail and raise errors on individual folds. This option is controlled by the error_score parameter. This does not affect errors raised on re-fit. By Michal Romaniuk.
- Add digits parameter to metrics.classification_report to allow the report to show different precision of floating point numbers. By Ian Gilmore.
- Add a quantile prediction strategy to dummy.DummyRegressor. By Aaron Staple.
- Add handle_unknown option to preprocessing.OneHotEncoder to handle unknown categorical features more gracefully during transform. By Manoj Kumar.
- Added support for sparse input data to decision trees and their ensembles. By Fares Hedyati and Arnaud Joly.
- Optimized cluster.AffinityPropagation by reducing the number of memory allocations of large temporary data structures. By Antony Lee.
- Parallelization of the computation of feature importances in random forest. By Olivier Grisel and Arnaud Joly.
- Add n_iter_ attribute to estimators that accept a max_iter attribute in their constructor. By Manoj Kumar.
- Added decision function for multiclass.OneVsOneClassifier. By Raghav RV and Kyle Beauchamp.
- neighbors.kneighbors_graph and radius_neighbors_graph support non-Euclidean metrics. By Manoj Kumar.
- Parameter connectivity in cluster.AgglomerativeClustering and family now accepts callables that return a connectivity matrix. By Manoj Kumar.
- Sparse support for paired_distances. By Joel Nothman.
- cluster.DBSCAN now supports sparse input and sample weights and has been optimized: the inner loop has been rewritten in Cython and radius neighbors queries are now computed in batch. By Joel Nothman and Lars Buitinck.
- Add class_weight parameter to automatically weight samples by class frequency for ensemble.RandomForestClassifier, tree.DecisionTreeClassifier, ensemble.ExtraTreesClassifier and tree.ExtraTreeClassifier. By Trevor Stephens.
- grid_search.RandomizedSearchCV now does sampling without replacement if all parameters are given as lists. By Andreas Müller.
- Parallelized calculation of pairwise_distances is now supported for scipy metrics and custom callables. By Joel Nothman.
- Allow the fitting and scoring of all clustering algorithms in pipeline.Pipeline. By Andreas Müller.
- More robust seeding and improved error messages in cluster.MeanShift. By Andreas Müller.
- Make the stopping criterion for mixture.GMM, mixture.DPGMM and mixture.VBGMM less dependent on the number of samples by thresholding the average log-likelihood change instead of its sum over all samples. By Hervé Bredin.
- The outcome of manifold.spectral_embedding was made deterministic by flipping the sign of eigenvectors. By Hasil Sharma.
- Significant performance and memory usage improvements in preprocessing.PolynomialFeatures. By Eric Martin.
- Numerical stability improvements for preprocessing.StandardScaler and preprocessing.scale. By Nicolas Goix.
- svm.SVC fitted on sparse input now implements decision_function. By Rob Zinkov and Andreas Müller.
- cross_validation.train_test_split now preserves the input type, instead of converting to numpy arrays.
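A minimal sketch of the multinomial option (the iris pairing is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target

# multi_class="multinomial" minimizes the joint multinomial loss; it
# requires the lbfgs or newton-cg solver rather than the default liblinear.
clf = LogisticRegression(multi_class="multinomial", solver="lbfgs")
clf.fit(X, y)
print(clf.predict_proba(X[:2]))
```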
Documentation improvements¶
- Added example of using FeatureUnion for heterogeneous input. By Matt Terry.
- Documentation on scorers was improved, to highlight the handling of loss functions. By Matt Pico.
- A discrepancy between liblinear output and scikit-learn's wrappers is now noted. By Manoj Kumar.
- Improved documentation generation: examples referring to a class or function are now shown in a gallery on the class/function's API reference page. By Joel Nothman.
- More explicit documentation of sample generators and of data transformation. By Joel Nothman.
- sklearn.neighbors.BallTree and sklearn.neighbors.KDTree used to point to empty pages stating that they are aliases of BinaryTree. This has been fixed to show the correct class docs. By Manoj Kumar.
- Added silhouette plots for analysis of KMeans clustering using metrics.silhouette_samples and metrics.silhouette_score. See Selecting the number of clusters with silhouette analysis on KMeans clustering.
Bug fixes¶
- Metaestimators now support ducktyping for the presence of decision_function, predict_proba and other methods. This fixes behavior of grid_search.GridSearchCV, grid_search.RandomizedSearchCV, pipeline.Pipeline, feature_selection.RFE and feature_selection.RFECV when nested. By Joel Nothman.
- The scoring attribute of grid-search and cross-validation methods is no longer ignored when a grid_search.GridSearchCV is given as a base estimator or the base estimator doesn't have predict.
- The function hierarchical.ward_tree now returns the children in the same order for both the structured and unstructured versions. By Matteo Visconti di Oleggio Castello.
- feature_selection.RFECV now correctly handles cases when step is not equal to 1. By Nikolay Mayorov.
- decomposition.PCA now undoes whitening in its inverse_transform. Also, its components_ now always have unit length. By Michael Eickenberg.
- Fix incomplete download of the dataset when datasets.download_20newsgroups is called. By Manoj Kumar.
- Various fixes to the Gaussian processes subpackage by Vincent Dubourg and Jan Hendrik Metzen.
- Calling partial_fit with class_weight=='auto' throws an appropriate error message and suggests a workaround. By Danny Sullivan.
- RBFSampler with gamma=g formerly approximated rbf_kernel with gamma=g/2.; the definition of gamma is now consistent, which may substantially change your results if you use a fixed value. (If you cross-validated over gamma, it probably doesn't matter too much.) By Dougal Sutherland.
- Pipeline objects delegate the classes_ attribute to the underlying estimator. This allows, for instance, making bagging of a pipeline object. By Arnaud Joly.
- neighbors.NearestCentroid now uses the median as the centroid when metric is set to manhattan. It was using the mean before. By Manoj Kumar.
- Fix numerical stability issues in linear_model.SGDClassifier and linear_model.SGDRegressor by clipping large gradients and ensuring that weight decay rescaling is always positive (for large l2 regularization and large learning rate values). By Olivier Grisel.
- When compute_full_tree was set to "auto", the full tree was built when n_clusters was high and building stopped early when n_clusters was low, while the behavior should be vice-versa in cluster.AgglomerativeClustering (and friends). This has been fixed. By Manoj Kumar.
- Fix lazy centering of data in linear_model.enet_path and linear_model.lasso_path. It was centered around one; it has been changed to be centered around the origin. By Manoj Kumar.
- Fix handling of precomputed affinity matrices in cluster.AgglomerativeClustering when using connectivity constraints. By Cathy Deng.
- Correct partial_fit handling of class_prior for sklearn.naive_bayes.MultinomialNB and sklearn.naive_bayes.BernoulliNB. By Trevor Stephens.
- Fixed a crash in metrics.precision_recall_fscore_support when using unsorted labels in the multi-label setting. By Andreas Müller.
- Avoid skipping the first nearest neighbor in the methods radius_neighbors, kneighbors, kneighbors_graph and radius_neighbors_graph in sklearn.neighbors.NearestNeighbors and family, when the query data is not the same as the fit data. By Manoj Kumar.
- Fix log-density calculation in mixture.GMM with tied covariance. By Will Dawson.
- Fixed a scaling error in feature_selection.SelectFdr where a factor n_features was missing. By Andrew Tulloch.
- Fix zero division in neighbors.KNeighborsRegressor and related classes when using distance weighting and having identical data points. By Garret-R.
- Fixed round-off errors with non positive-definite covariance matrices in GMM. By Alexis Mignon.
- Fixed an error in the computation of conditional probabilities in naive_bayes.BernoulliNB. By Hanna Wallach.
- Make the method radius_neighbors of neighbors.NearestNeighbors return the samples lying on the boundary for algorithm='brute'. By Yan Yi.
- Flip sign of dual_coef_ of svm.SVC to make it consistent with the documentation and decision_function. By Artem Sobolev.
- Fixed handling of ties in isotonic.IsotonicRegression. We now use the weighted average of targets (secondary method). By Andreas Müller and Michael Bommarito.
API changes summary¶
- GridSearchCV, cross_val_score and other meta-estimators don't convert pandas DataFrames into arrays any more, allowing DataFrame-specific operations in custom estimators.
- multiclass.fit_ovr, multiclass.predict_ovr, predict_proba_ovr, multiclass.fit_ovo, multiclass.predict_ovo, multiclass.fit_ecoc and multiclass.predict_ecoc are deprecated. Use the underlying estimators instead.
- Nearest neighbors estimators used to take arbitrary keyword arguments and pass these to their distance metric. This will no longer be supported in scikit-learn 0.18; use the metric_params argument instead.
- The n_jobs parameter of the fit method shifted to the constructor of the LinearRegression class.
- The predict_proba method of multiclass.OneVsRestClassifier now returns two probabilities per sample in the multiclass case; this is consistent with other estimators and with the method's documentation, but previous versions accidentally returned only the positive probability. Fixed by Will Lamond and Lars Buitinck.
- Change default value of precompute in ElasticNet and Lasso to False. Setting precompute to "auto" was found to be slower when n_samples > n_features, since the computation of the Gram matrix is computationally expensive and outweighs the benefit of fitting the Gram for just one alpha. precompute="auto" is now deprecated and will be removed in 0.18. By Manoj Kumar.
- Expose positive option in linear_model.enet_path and linear_model.lasso_path, which constrains coefficients to be positive. By Manoj Kumar.
- Users should now supply an explicit average parameter to sklearn.metrics.f1_score, sklearn.metrics.fbeta_score, sklearn.metrics.recall_score and sklearn.metrics.precision_score when performing multiclass or multilabel (i.e. not binary) classification; see the sketch after this list. By Joel Nothman.
- The scoring parameter for cross validation now accepts 'f1_micro', 'f1_macro' or 'f1_weighted'. 'f1' is now for binary classification only. Similar changes apply to 'precision' and 'recall'. By Joel Nothman.
- The fit_intercept, normalize and return_models parameters in linear_model.enet_path and linear_model.lasso_path have been removed. They were deprecated since 0.14.
- From now onwards, all estimators will uniformly raise NotFittedError (utils.validation.NotFittedError) when any of the predict-like methods are called before the model is fit. By Raghav RV.
- Input data validation was refactored for more consistent input validation. The check_arrays function was replaced by check_array and check_X_y. By Andreas Müller.
- Allow X=None in the methods radius_neighbors, kneighbors, kneighbors_graph and radius_neighbors_graph in sklearn.neighbors.NearestNeighbors and family. If set to None, then for every sample this avoids setting the sample itself as the first nearest neighbor. By Manoj Kumar.
- Add parameter include_self in neighbors.kneighbors_graph and neighbors.radius_neighbors_graph, which has to be explicitly set by the user. If set to True, then the sample itself is considered as the first nearest neighbor.
- The thresh parameter is deprecated in favor of the new tol parameter in GMM, DPGMM and VBGMM. See the Enhancements section for details. By Hervé Bredin.
- Estimators will treat input with dtype object as numeric when possible. By Andreas Müller.
- Estimators now raise ValueError consistently when fitted on empty data (less than 1 sample or less than 1 feature for 2D input). By Olivier Grisel.
- The shuffle option of linear_model.SGDClassifier, linear_model.SGDRegressor, linear_model.Perceptron, linear_model.PassiveAgressiveClassifier and linear_model.PassiveAgressiveRegressor now defaults to True.
- cluster.DBSCAN now uses a deterministic initialization. The random_state parameter is deprecated. By Erich Schubert.
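A minimal sketch of the now-explicit average parameter (the label arrays are made up):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

# With more than two classes, an averaging strategy must be named explicitly.
print(f1_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="micro"))
```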
Code Contributors¶
A. Flaxman, Aaron Schumacher, Aaron Staple, abhishek thakur, Akshay, akshayah3, Aldrian Obaja, Alexander Fabisch, Alexandre Gramfort, Alexis Mignon, Anders Aagaard, Andreas Mueller, Andreas van Cranenburgh, Andrew Tulloch, Andrew Walker, Antony Lee, Arnaud Joly, banilo, Barmaley.exe, Ben Davies, Benedikt Koehler, bhsu, Boris Feld, Borja Ayerdi, Boyuan Deng, Brent Pedersen, Brian Wignall, Brooke Osborn, Calvin Giles, Cathy Deng, Celeo, cgohlke, chebee7i, Christian Stade-Schuldt, Christof Angermueller, Chyi-Kwei Yau, CJ Carey, Clemens Brunner, Daiki Aminaka, Dan Blanchard, danfrankj, Danny Sullivan, David Fletcher, Dmitrijs Milajevs, Dougal J. Sutherland, Erich Schubert, Fabian Pedregosa, Florian Wilhelm, floydsoft, Félix-Antoine Fortin, Gael Varoquaux, Garrett-R, Gilles Louppe, gpassino, gwulfs, Hampus Bengtsson, Hamzeh Alsalhi, Hanna Wallach, Harry Mavroforakis, Hasil Sharma, Helder, Herve Bredin, Hsiang-Fu Yu, Hugues SALAMIN, Ian Gilmore, Ilambharathi Kanniah, Imran Haque, isms, Jake VanderPlas, Jan Dlabal, Jan Hendrik Metzen, Jatin Shah, Javier López Peña, jdcaballero, Jean Kossaifi, Jeff Hammerbacher, Joel Nothman, Jonathan Helmus, Joseph, Kaicheng Zhang, Kevin Markham, Kyle Beauchamp, Kyle Kastner, Lagacherie Matthieu, Lars Buitinck, Laurent Direr, leepei, Loic Esteve, Luis Pedro Coelho, Lukas Michelbacher, maheshakya, Manoj Kumar, Manuel, Mario Michael Krell, Martin, Martin Billinger, Martin Ku, Mateusz Susik, Mathieu Blondel, Matt Pico, Matt Terry, Matteo Visconti dOC, Matti Lyra, Max Linke, Mehdi Cherti, Michael Bommarito, Michael Eickenberg, Michal Romaniuk, MLG, mr.Shu, Nelle Varoquaux, Nicola Montecchio, Nicolas, Nikolay Mayorov, Noel Dawe, Okal Billy, Olivier Grisel, Óscar Nájera, Paolo Puggioni, Peter Prettenhofer, Pratap Vardhan, pvnguyen, queqichao, Rafael Carrascosa, Raghav R V, Rahiel Kasim, Randall Mason, Rob Zinkov, Robert Bradshaw, Saket Choudhary, Sam Nicholls, Samuel Charron, Saurabh Jha, sethdandridge, sinhrks, snuderl, Stefan Otte, Stefan van der Walt, Steve Tjoa, swu, Sylvain Zimmer, tejesh95, terrycojones, Thomas Delteil, Thomas Unterthiner, Tomas Kazmar, trevorstephens, tttthomasssss, Tzu-Ming Kuo, ugurcaliskan, ugurthemaster, Vinayak Mehta, Vincent Dubourg, Vjacheslav Murashkin, Vlad Niculae, wadawson, Wei Xue, Will Lamond, Wu Jiang, x0l, Xinfan Meng, Yan Yi, Yu-Chin
Version 0.15.2¶
September 4, 2014
Bug fixes¶
- Fixed handling of the p parameter of the Minkowski distance that was previously ignored in nearest neighbors models. By Nikolay Mayorov.
- Fixed duplicated alphas in linear_model.LassoLars with early stopping on 32-bit Python. By Olivier Grisel and Fabian Pedregosa.
- Fixed the build under Windows when scikit-learn is built with MSVC while NumPy is built with MinGW. By Olivier Grisel and Federico Vaggi.
- Fixed an array index overflow bug in the coordinate descent solver. By Gael Varoquaux.
- Better handling of numpy 1.9 deprecation warnings. By Gael Varoquaux.
- Removed unnecessary data copy in cluster.KMeans. By Gael Varoquaux.
- Explicitly close open files to avoid ResourceWarnings under Python 3. By Calvin Giles.
- The transform of discriminant_analysis.LinearDiscriminantAnalysis now projects the input on the most discriminant directions. By Martin Billinger.
- Fixed potential overflow in _tree.safe_realloc. By Lars Buitinck.
- Performance optimization in isotonic.IsotonicRegression. By Robert Bradshaw.
- nose is no longer a runtime dependency to import sklearn, only for running the tests. By Joel Nothman.
- Many documentation and website fixes by Joel Nothman, Lars Buitinck, Matt Pico, and others.
Version 0.15.1¶
August 1, 2014
Bug fixes¶
- Made cross_validation.cross_val_score use cross_validation.KFold instead of cross_validation.StratifiedKFold on multi-output classification problems. By Nikolay Mayorov.
- Support unseen labels in preprocessing.LabelBinarizer to restore the default behavior of 0.14.1 for backward compatibility. By Hamzeh Alsalhi.
- Fixed the cluster.KMeans stopping criterion that prevented early convergence detection. By Edward Raff and Gael Varoquaux.
- Fixed the behavior of multiclass.OneVsOneClassifier in case of ties at the per-class vote level, by computing the correct per-class sum of prediction scores. By Andreas Müller.
- Made cross_validation.cross_val_score and grid_search.GridSearchCV accept Python lists as input data. This is especially useful for cross-validation and model selection of text processing pipelines. By Andreas Müller.
- Fixed data input checks of most estimators to accept input data that implements the NumPy __array__ protocol. This is the case for pandas.Series and pandas.DataFrame in recent versions of pandas. By Gael Varoquaux.
- Fixed a regression for linear_model.SGDClassifier with class_weight="auto" on data with non-contiguous labels. By Olivier Grisel.
Version 0.15¶
July 15, 2014
Highlights¶
- Many speed and memory improvements all across the code.
- Huge speed and memory improvements to random forests (and extra trees) that also benefit more from parallel computing.
- Incremental fit to BernoulliRBM.
- Added cluster.AgglomerativeClustering for hierarchical agglomerative clustering with average linkage, complete linkage and ward strategies.
- Added linear_model.RANSACRegressor for robust regression models.
- Added dimensionality reduction with manifold.TSNE, which can be used to visualize high-dimensional data.
Changelog¶
New features¶
- Added ensemble.BaggingClassifier and ensemble.BaggingRegressor meta-estimators for ensembling any kind of base estimator. See the Bagging section of the user guide for details and examples. By Gilles Louppe.
- New unsupervised feature selection algorithm feature_selection.VarianceThreshold. By Lars Buitinck.
- Added linear_model.RANSACRegressor meta-estimator for the robust fitting of regression models. By Johannes Schönberger.
- Added cluster.AgglomerativeClustering for hierarchical agglomerative clustering with average linkage, complete linkage and ward strategies. By Nelle Varoquaux and Gael Varoquaux.
- Shorthand constructors pipeline.make_pipeline and pipeline.make_union were added; see the sketch after this list. By Lars Buitinck.
- Shuffle option for cross_validation.StratifiedKFold. By Jeffrey Blackburne.
- Incremental learning (partial_fit) for Gaussian Naive Bayes. By Imran Haque.
- Added partial_fit to BernoulliRBM. By Danny Sullivan.
- Added learning_curve utility to chart performance with respect to training size. See Plotting Learning Curves. By Alexander Fabisch.
- Add positive option in LassoCV and ElasticNetCV. By Brian Wignall and Alexandre Gramfort.
- Added linear_model.MultiTaskElasticNetCV and linear_model.MultiTaskLassoCV. By Manoj Kumar.
- Added manifold.TSNE. By Alexander Fabisch.
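A minimal sketch of the make_pipeline shorthand (the scaler/SVC pairing is illustrative):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# make_pipeline names each step after its lowercased class, equivalent to
# Pipeline([("standardscaler", StandardScaler()), ("svc", SVC())]).
model = make_pipeline(StandardScaler(), SVC())
print(model.steps)
```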
Enhancements¶
- Add sparse input support to
ensemble.AdaBoostClassifier
andensemble.AdaBoostRegressor
meta-estimators. By Hamzeh Alsalhi. - Memory improvements of decision trees, by Arnaud Joly.
- Decision trees can now be built in best-first manner by using
max_leaf_nodes
as the stopping criteria. Refactored the tree code to use either a stack or a priority queue for tree building. By Peter Prettenhofer and Gilles Louppe. - Decision trees can now be fitted on fortran- and c-style arrays, and
non-continuous arrays without the need to make a copy.
If the input array has a different dtype than
np.float32
, a fortran- style copy will be made since fortran-style memory layout has speed advantages. By Peter Prettenhofer and Gilles Louppe. - Speed improvement of regression trees by optimizing the the computation of the mean square error criterion. This lead to speed improvement of the tree, forest and gradient boosting tree modules. By Arnaud Joly
- The
img_to_graph
andgrid_tograph
functions insklearn.feature_extraction.image
now returnnp.ndarray
instead ofnp.matrix
whenreturn_as=np.ndarray
. See the Notes section for more information on compatibility. - Changed the internal storage of decision trees to use a struct array. This fixed some small bugs, while improving code and providing a small speed gain. By Joel Nothman.
- Reduce memory usage and overhead when fitting and predicting with forests
of randomized trees in parallel with
n_jobs != 1
by leveraging new threading backend of joblib 0.8 and releasing the GIL in the tree fitting Cython code. By Olivier Grisel and Gilles Louppe. - Speed improvement of the
sklearn.ensemble.gradient_boosting
module. By Gilles Louppe and Peter Prettenhofer. - Various enhancements to the
sklearn.ensemble.gradient_boosting
module: awarm_start
argument to fit additional trees, amax_leaf_nodes
argument to fit GBM style trees, amonitor
fit argument to inspect the estimator during training, and refactoring of the verbose code. By Peter Prettenhofer. - Faster
sklearn.ensemble.ExtraTrees
by caching feature values. By Arnaud Joly. - Faster depth-based tree building for algorithms such as decision trees, random forests, extra trees and gradient tree boosting (with the depth-based growing strategy), by avoiding attempts to split on features found to be constant in the sample subset. By Arnaud Joly.
- Add
min_weight_fraction_leaf
pre-pruning parameter to tree-based methods: the minimum weighted fraction of the input samples required to be at a leaf node. By Noel Dawe. - Added
metrics.pairwise_distances_argmin_min
, by Philippe Gervais. - Added predict method to
cluster.AffinityPropagation
andcluster.MeanShift
, by Mathieu Blondel. - Vector and matrix multiplications have been optimised throughout the library by Denis Engemann, and Alexandre Gramfort. In particular, they should take less memory with older NumPy versions (prior to 1.7.2).
- Precision-recall and ROC examples now use train_test_split, and have more explanation of why these metrics are useful. By Kyle Kastner.
- The training algorithm for
decomposition.NMF
is faster for sparse matrices and has much lower memory complexity, meaning it will scale up gracefully to large datasets. By Lars Buitinck. - Added an svd_method option, with default value “randomized”, to
decomposition.FactorAnalysis
to save memory and significantly speed up computation. By Denis Engemann and Alexandre Gramfort. - Changed
cross_validation.StratifiedKFold
to try to preserve as much of the original ordering of samples as possible, so as not to hide overfitting on datasets with a non-negligible level of sample dependency. By Daniel Nouri and Olivier Grisel. - Add multi-output support to
gaussian_process.GaussianProcess
by John Novak. - Support for precomputed distance matrices in nearest neighbor estimators by Robert Layton and Joel Nothman.
- Norm computations optimized for NumPy 1.6 and later versions by Lars Buitinck. In particular, the k-means algorithm no longer needs a temporary data structure the size of its input.
dummy.DummyClassifier
can now be used to predict a constant output value. By Manoj Kumar.dummy.DummyRegressor
now has a strategy parameter that allows predicting the mean or the median of the training set, or a constant output value. By Maheshakya Wijewardena. - Multi-label classification output in multilabel indicator format
is now supported by
metrics.roc_auc_score
andmetrics.average_precision_score
by Arnaud Joly. - Significant performance improvements (more than 100x speedup for
large problems) in
isotonic.IsotonicRegression
by Andrew Tulloch. - Speed and memory usage improvements to the SGD algorithm for linear
models: it now uses threads, not separate processes, when
n_jobs>1
. By Lars Buitinck. - Grid search and cross validation allow NaNs in the input arrays so that
preprocessors such as
preprocessing.Imputer
can be trained within the cross validation loop, avoiding potentially skewed results. - Ridge regression can now deal with sample weights in feature space (only sample space until then). By Michael Eickenberg. Both solutions are provided by the Cholesky solver.
- Several classification and regression metrics now support weighted
samples with the new
sample_weight
argument:metrics.accuracy_score
,metrics.zero_one_loss
,metrics.precision_score
,metrics.average_precision_score
,metrics.f1_score
,metrics.fbeta_score
,metrics.recall_score
,metrics.roc_auc_score
,metrics.explained_variance_score
,metrics.mean_squared_error
,metrics.mean_absolute_error
,metrics.r2_score
. By Noel Dawe. - Speed up of the sample generator
datasets.make_multilabel_classification
. By Joel Nothman.
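A small usage sketch for the sample generator mentioned in the last item above; the return_indicator flag is assumed to be the era-appropriate way to request the binary indicator format (later versions return it by default):

    from sklearn.datasets import make_multilabel_classification

    # Toy multilabel problem: return_indicator=True (assumed 0.14/0.15-era
    # flag) requests a binary indicator matrix instead of lists of labels.
    X, Y = make_multilabel_classification(n_samples=50, n_features=10,
                                          n_classes=4, return_indicator=True,
                                          random_state=0)
    print(X.shape, Y.shape)  # (50, 10) (50, 4)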
Documentation improvements¶
- The Working With Text Data tutorial has now been worked into the main documentation’s tutorial section. It includes exercises and skeletons for tutorial presentation. Original tutorial created by several authors including Olivier Grisel, Lars Buitinck and many others. Tutorial integration into the scikit-learn documentation by Jaques Grobler.
- Added Computational Performance documentation. Discussion and examples of prediction latency / throughput and the different factors that influence speed. Additional tips for building faster models and choosing a relevant compromise between speed and predictive power. By Eustache Diemert.
Bug fixes¶
- Fixed bug in
decomposition.MiniBatchDictionaryLearning
:partial_fit
was not working properly. - Fixed bug in
linear_model.stochastic_gradient
:l1_ratio
was used as(1.0 - l1_ratio)
. - Fixed bug in
multiclass.OneVsOneClassifier
with string labels. - Fixed a bug in
LassoCV
andElasticNetCV
: they would not pre-compute the Gram matrix withprecompute=True
orprecompute="auto"
andn_samples > n_features
. By Manoj Kumar. - Fixed incorrect estimation of the degrees of freedom in
feature_selection.f_regression
when variates are not centered. By Virgile Fritsch. - Fixed a race condition in parallel processing with
pre_dispatch != "all"
(for instance, incross_val_score
). By Olivier Grisel. - Raise error in
cluster.FeatureAgglomeration
andcluster.WardAgglomeration
when no samples are given, rather than returning meaningless clustering. - Fixed bug in
gradient_boosting.GradientBoostingRegressor
withloss='huber'
:gamma
might not have been initialized. - Fixed feature importances as computed with a forest of randomized trees
when fit with
sample_weight != None
and/or withbootstrap=True
. By Gilles Louppe.
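A minimal sketch exercising the fixed code path from the last item: a forest fitted with both sample_weight and bootstrap=True, whose feature importances are then read back (toy data, not from the source):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, n_features=8, random_state=0)
    weights = np.random.RandomState(0).uniform(0.5, 1.5, size=200)

    # sample_weight together with bootstrap=True used to yield wrong importances.
    forest = RandomForestClassifier(n_estimators=25, bootstrap=True, random_state=0)
    forest.fit(X, y, sample_weight=weights)
    print(forest.feature_importances_)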
API changes summary¶
sklearn.hmm
is deprecated. Its removal is planned for the 0.17 release.- Use of
covariance.EllipticEnvelop
has now been removed after deprecation. Please usecovariance.EllipticEnvelope
instead. cluster.Ward
is deprecated. Usecluster.AgglomerativeClustering
instead.cluster.WardClustering
is deprecated. Usecluster.AgglomerativeClustering
instead.cross_validation.Bootstrap
is deprecated.cross_validation.KFold
orcross_validation.ShuffleSplit
are recommended instead.- Direct support for the sequence of sequences (or list of lists) multilabel
format is deprecated. To convert to and from the supported binary
indicator matrix format, use
MultiLabelBinarizer
. By Joel Nothman (a conversion sketch follows this list). - Add score method to
PCA
following the model of probabilistic PCA and deprecateProbabilisticPCA
model whose score implementation is not correct. The computation now also exploits the matrix inversion lemma for faster computation. By Alexandre Gramfort. - The score method of
FactorAnalysis
now returns the average log-likelihood of the samples. Use score_samples to get log-likelihood of each sample. By Alexandre Gramfort. - Generating boolean masks (the setting
indices=False
) from cross-validation generators is deprecated. Support for masks will be removed in 0.17. The generators have produced arrays of indices by default since 0.10. By Joel Nothman. - 1-d arrays containing strings with
dtype=object
(as used in Pandas) are now considered valid classification targets. This fixes a regression from version 0.13 in some classifiers. By Joel Nothman. - Fix wrong
explained_variance_ratio_
attribute inRandomizedPCA
. By Alexandre Gramfort. - Fit alphas for each
l1_ratio
instead ofmean_l1_ratio
inlinear_model.ElasticNetCV
andlinear_model.LassoCV
. This changes the shape ofalphas_
from(n_alphas,)
to(n_l1_ratio, n_alphas)
if thel1_ratio
provided is a 1-D array like object of length greater than one. By Manoj Kumar. - Fix
linear_model.ElasticNetCV
andlinear_model.LassoCV
when fitting intercept and input data is sparse. The automatic grid of alphas was not computed correctly and the scaling with normalize was wrong. By Manoj Kumar. - Fix wrong maximal number of features drawn (
max_features
) at each split for decision trees, random forests and gradient tree boosting. Previously, the count of drawn features started only after one non-constant feature had been found in the split. This bug fix will affect the computational and generalization performance of those algorithms in the presence of constant features. To get back the previous generalization performance, you should modify the value of max_features
. By Arnaud Joly. - Fix wrong maximal number of features drawn (
max_features
) at each split forensemble.ExtraTreesClassifier
andensemble.ExtraTreesRegressor
. Previously, only non-constant features in the split were counted as drawn. Now constant features are counted as drawn as well. Furthermore, at least one feature must be non-constant in order to make a valid split. This bug fix will affect the computational and generalization performance of extra trees in the presence of constant features. To get back the previous generalization performance, you should modify the value of max_features
. By Arnaud Joly. - Fix
utils.compute_class_weight
whenclass_weight=="auto"
. Previously it was broken for input of non-integerdtype
and the weighted array that was returned was wrong. By Manoj Kumar. - Fix
cross_validation.Bootstrap
to returnValueError
whenn_train + n_test > n
. By Ronald Phlypo.
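As referenced above, a minimal conversion sketch for the deprecated sequence-of-sequences format, using MultiLabelBinarizer to move to and from the binary indicator matrix:

    from sklearn.preprocessing import MultiLabelBinarizer

    y_seq = [(1, 2), (3,), (1, 3)]     # old sequence-of-sequences format
    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(y_seq)       # binary indicator matrix
    print(Y)                           # [[1 1 0], [0 0 1], [1 0 1]]
    print(mlb.inverse_transform(Y))    # back to tuples of labels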
People¶
List of contributors for release 0.15 by number of commits.
- 312 Olivier Grisel
- 275 Lars Buitinck
- 221 Gael Varoquaux
- 148 Arnaud Joly
- 134 Johannes Schönberger
- 119 Gilles Louppe
- 113 Joel Nothman
- 111 Alexandre Gramfort
- 95 Jaques Grobler
- 89 Denis Engemann
- 83 Peter Prettenhofer
- 83 Alexander Fabisch
- 62 Mathieu Blondel
- 60 Eustache Diemert
- 60 Nelle Varoquaux
- 49 Michael Bommarito
- 45 Manoj-Kumar-S
- 28 Kyle Kastner
- 26 Andreas Mueller
- 22 Noel Dawe
- 21 Maheshakya Wijewardena
- 21 Brooke Osborn
- 21 Hamzeh Alsalhi
- 21 Jake VanderPlas
- 21 Philippe Gervais
- 19 Bala Subrahmanyam Varanasi
- 12 Ronald Phlypo
- 10 Mikhail Korobov
- 8 Thomas Unterthiner
- 8 Jeffrey Blackburne
- 8 eltermann
- 8 bwignall
- 7 Ankit Agrawal
- 7 CJ Carey
- 6 Daniel Nouri
- 6 Chen Liu
- 6 Michael Eickenberg
- 6 ugurthemaster
- 5 Aaron Schumacher
- 5 Baptiste Lagarde
- 5 Rajat Khanduja
- 5 Robert McGibbon
- 5 Sergio Pascual
- 4 Alexis Metaireau
- 4 Ignacio Rossi
- 4 Virgile Fritsch
- 4 Sebastian Säger
- 4 Ilambharathi Kanniah
- 4 sdenton4
- 4 Robert Layton
- 4 Alyssa
- 4 Amos Waterland
- 3 Andrew Tulloch
- 3 murad
- 3 Steven Maude
- 3 Karol Pysniak
- 3 Jacques Kvam
- 3 cgohlke
- 3 cjlin
- 3 Michael Becker
- 3 hamzeh
- 3 Eric Jacobsen
- 3 john collins
- 3 kaushik94
- 3 Erwin Marsi
- 2 csytracy
- 2 LK
- 2 Vlad Niculae
- 2 Laurent Direr
- 2 Erik Shilts
- 2 Raul Garreta
- 2 Yoshiki Vázquez Baeza
- 2 Yung Siang Liau
- 2 abhishek thakur
- 2 James Yu
- 2 Rohit Sivaprasad
- 2 Roland Szabo
- 2 amormachine
- 2 Alexis Mignon
- 2 Oscar Carlsson
- 2 Nantas Nardelli
- 2 jess010
- 2 kowalski87
- 2 Andrew Clegg
- 2 Federico Vaggi
- 2 Simon Frid
- 2 Félix-Antoine Fortin
- 1 Ralf Gommers
- 1 t-aft
- 1 Ronan Amicel
- 1 Rupesh Kumar Srivastava
- 1 Ryan Wang
- 1 Samuel Charron
- 1 Samuel St-Jean
- 1 Fabian Pedregosa
- 1 Skipper Seabold
- 1 Stefan Walk
- 1 Stefan van der Walt
- 1 Stephan Hoyer
- 1 Allen Riddell
- 1 Valentin Haenel
- 1 Vijay Ramesh
- 1 Will Myers
- 1 Yaroslav Halchenko
- 1 Yoni Ben-Meshulam
- 1 Yury V. Zaytsev
- 1 adrinjalali
- 1 ai8rahim
- 1 alemagnani
- 1 alex
- 1 benjamin wilson
- 1 chalmerlowe
- 1 dzikie drożdże
- 1 jamestwebber
- 1 matrixorz
- 1 popo
- 1 samuela
- 1 François Boulogne
- 1 Alexander Measure
- 1 Ethan White
- 1 Guilherme Trein
- 1 Hendrik Heuer
- 1 IvicaJovic
- 1 Jan Hendrik Metzen
- 1 Jean Michel Rouly
- 1 Eduardo Ariño de la Rubia
- 1 Jelle Zijlstra
- 1 Eddy L O Jansson
- 1 Denis
- 1 John
- 1 John Schmidt
- 1 Jorge Cañardo Alastuey
- 1 Joseph Perla
- 1 Joshua Vredevoogd
- 1 José Ricardo
- 1 Julien Miotte
- 1 Kemal Eren
- 1 Kenta Sato
- 1 David Cournapeau
- 1 Kyle Kelley
- 1 Daniele Medri
- 1 Laurent Luce
- 1 Laurent Pierron
- 1 Luis Pedro Coelho
- 1 DanielWeitzenfeld
- 1 Craig Thompson
- 1 Chyi-Kwei Yau
- 1 Matthew Brett
- 1 Matthias Feurer
- 1 Max Linke
- 1 Chris Filo Gorgolewski
- 1 Charles Earl
- 1 Michael Hanke
- 1 Michele Orrù
- 1 Bryan Lunt
- 1 Brian Kearns
- 1 Paul Butler
- 1 Paweł Mandera
- 1 Peter
- 1 Andrew Ash
- 1 Pietro Zambelli
- 1 staubda
Version 0.14¶
August 7, 2013
Changelog¶
- Missing values with sparse and dense matrices can be imputed with the
transformer
preprocessing.Imputer
by Nicolas Trésegnie. - The core implementation of decisions trees has been rewritten from scratch, allowing for faster tree induction and lower memory consumption in all tree-based estimators. By Gilles Louppe.
- Added
ensemble.AdaBoostClassifier
andensemble.AdaBoostRegressor
, by Noel Dawe and Gilles Louppe. See the AdaBoost section of the user guide for details and examples. - Added
grid_search.RandomizedSearchCV
andgrid_search.ParameterSampler
for randomized hyperparameter optimization. By Andreas Müller. - Added biclustering algorithms
(
sklearn.cluster.bicluster.SpectralCoclustering
andsklearn.cluster.bicluster.SpectralBiclustering
), data generation methods (sklearn.datasets.make_biclusters
andsklearn.datasets.make_checkerboard
), and scoring metrics (sklearn.metrics.consensus_score
). By Kemal Eren. - Added Restricted Boltzmann Machines
(
neural_network.BernoulliRBM
). By Yann Dauphin. - Python 3 support by Justin Vincent, Lars Buitinck, Subhodeep Moitra and Olivier Grisel. All tests now pass under Python 3.3.
- Ability to pass one penalty (alpha value) per target in
linear_model.Ridge
, by @eickenberg and Mathieu Blondel. - Fixed
sklearn.linear_model.stochastic_gradient.py
L2 regularization issue (minor practical significance). By Norbert Crombach and Mathieu Blondel. - Added an interactive version of Andreas Müller’s Machine Learning Cheat Sheet (for scikit-learn) to the documentation. See Choosing the right estimator. By Jaques Grobler.
grid_search.GridSearchCV
andcross_validation.cross_val_score
now support the use of advanced scoring functions such as area under the ROC curve and f-beta scores. See The scoring parameter: defining model evaluation rules for details. By Andreas Müller and Lars Buitinck. Passing a function from sklearn.metrics
asscore_func
is deprecated.- Multi-label classification output is now supported by
metrics.accuracy_score
,metrics.zero_one_loss
,metrics.f1_score
,metrics.fbeta_score
,metrics.classification_report
,metrics.precision_score
andmetrics.recall_score
by Arnaud Joly. - Two new metrics
metrics.hamming_loss
andmetrics.jaccard_similarity_score
are added with multi-label support by Arnaud Joly. - Speed and memory usage improvements in
feature_extraction.text.CountVectorizer
andfeature_extraction.text.TfidfVectorizer
, by Jochen Wersdörfer and Roman Sinayev. - The
min_df
parameter infeature_extraction.text.CountVectorizer
andfeature_extraction.text.TfidfVectorizer
, which used to be 2, has been reset to 1 to avoid unpleasant surprises (empty vocabularies) for novice users who try it out on tiny document collections. A value of at least 2 is still recommended for practical use. svm.LinearSVC
,linear_model.SGDClassifier
andlinear_model.SGDRegressor
now have asparsify
method that converts theircoef_
into a sparse matrix, meaning stored models trained using these estimators can be made much more compact.linear_model.SGDClassifier
now produces multiclass probability estimates when trained under log loss or modified Huber loss.- Hyperlinks to documentation in example code on the website by Martin Luessi.
- Fixed bug in
preprocessing.MinMaxScaler
causing incorrect scaling of the features for non-defaultfeature_range
settings. By Andreas Müller. max_features
intree.DecisionTreeClassifier
,tree.DecisionTreeRegressor
and all derived ensemble estimators now supports percentage values. By Gilles Louppe.- Performance improvements in
isotonic.IsotonicRegression
by Nelle Varoquaux. metrics.accuracy_score
has an option normalize to return the fraction or the number of correctly classified samples, by Arnaud Joly. - Added
metrics.log_loss
that computes log loss, aka cross-entropy loss. By Jochen Wersdörfer and Lars Buitinck. - A bug that caused
ensemble.AdaBoostClassifier
to output incorrect probabilities has been fixed. - Feature selectors now share a mixin providing consistent
transform
,inverse_transform
andget_support
methods. By Joel Nothman. - A fitted
grid_search.GridSearchCV
orgrid_search.RandomizedSearchCV
can now generally be pickled. By Joel Nothman. - Refactored and vectorized implementation of
metrics.roc_curve
andmetrics.precision_recall_curve
. By Joel Nothman. - The new estimator
sklearn.decomposition.TruncatedSVD
performs dimensionality reduction using SVD on sparse matrices, and can be used for latent semantic analysis (LSA). By Lars Buitinck. - Added self-contained example of out-of-core learning on text data Out-of-core classification of text documents. By Eustache Diemert.
- The default number of components for
sklearn.decomposition.RandomizedPCA
is now correctly documented to ben_features
. This was the default behavior, so programs using it will continue to work as they did. sklearn.cluster.KMeans
now fits several orders of magnitude faster on sparse data (the speedup depends on the sparsity). By Lars Buitinck.- Reduce memory footprint of FastICA by Denis Engemann and Alexandre Gramfort.
- Verbose output in
sklearn.ensemble.gradient_boosting
now uses a column format and prints progress in decreasing frequency. It also shows the remaining time. By Peter Prettenhofer. sklearn.ensemble.gradient_boosting
provides out-of-bag improvementoob_improvement_
rather than the OOB score for model selection. An example that shows how to use OOB estimates to select the number of trees was added. By Peter Prettenhofer.- Most metrics now support string labels for multiclass classification by Arnaud Joly and Lars Buitinck.
- New OrthogonalMatchingPursuitCV class by Alexandre Gramfort and Vlad Niculae.
- Fixed a bug in
sklearn.covariance.GraphLassoCV
: the ‘alphas’ parameter now works as expected when given a list of values. By Philippe Gervais. - Fixed an important bug in
sklearn.covariance.GraphLassoCV
that prevented all folds provided by a CV object from being used (only the first 3 were used). When providing a CV object, execution time may thus increase significantly compared to the previous version (results are now correct). By Philippe Gervais.
and thegrid_search
module is now tested with multi-output data by Arnaud Joly.datasets.make_multilabel_classification
can now return the output in label indicator multilabel format by Arnaud Joly.- K-nearest neighbors,
neighbors.KNeighborsRegressor
andneighbors.RadiusNeighborsRegressor
, and radius neighbors,neighbors.RadiusNeighborsRegressor
andneighbors.RadiusNeighborsClassifier
support multioutput data by Arnaud Joly. - Random state in LibSVM-based estimators (
svm.SVC
,NuSVC
,OneClassSVM
,svm.SVR
,svm.NuSVR
) can now be controlled. This is useful to ensure consistency in the probability estimates for the classifiers trained withprobability=True
. By Vlad Niculae. - Out-of-core learning support for discrete naive Bayes classifiers
sklearn.naive_bayes.MultinomialNB
andsklearn.naive_bayes.BernoulliNB
by adding thepartial_fit
method by Olivier Grisel. - New website design and navigation by Gilles Louppe, Nelle Varoquaux, Vincent Michel and Andreas Müller.
- Improved documentation on multi-class, multi-label and multi-output classification by Yannick Schwartz and Arnaud Joly.
- Better input and error handling in the
metrics
module by Arnaud Joly and Joel Nothman. - Speed optimization of the
hmm
module by Mikhail Korobov - Significant speed improvements for
sklearn.cluster.DBSCAN
by cleverless.
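A brief sketch of DBSCAN on toy blobs (illustrative parameters, not taken from the source):

    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.4, random_state=0)
    db = DBSCAN(eps=0.5, min_samples=5).fit(X)
    print(set(db.labels_))  # cluster ids; -1 marks noise samples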
API changes summary¶
- The
auc_score
was renamedroc_auc_score
. - Testing scikit-learn with
sklearn.test()
is deprecated. Usenosetests sklearn
from the command line. - Feature importances in
tree.DecisionTreeClassifier
,tree.DecisionTreeRegressor
and all derived ensemble estimators are now computed on the fly when accessing thefeature_importances_
attribute. Settingcompute_importances=True
is no longer required. By Gilles Louppe. linear_model.lasso_path
andlinear_model.enet_path
can return its results in the same format as that oflinear_model.lars_path
. This is done by setting thereturn_models
parameter toFalse
. By Jaques Grobler and Alexandre Gramfortgrid_search.IterGrid
was renamed togrid_search.ParameterGrid
.- Fixed bug in
KFold
causing imperfect class balance in some cases. By Alexandre Gramfort and Tadej Janež. sklearn.neighbors.BallTree
has been refactored, and asklearn.neighbors.KDTree
has been added which shares the same interface. The Ball Tree now works with a wide variety of distance metrics. Both classes have many new methods, including single-tree and dual-tree queries, breadth-first and depth-first searching, and more advanced queries such as kernel density estimation and 2-point correlation functions. By Jake Vanderplas- Support for scipy.spatial.cKDTree within neighbors queries has been
removed, and the functionality replaced with the new
KDTree
class. sklearn.neighbors.KernelDensity
has been added, which performs efficient kernel density estimation with a variety of kernels.sklearn.decomposition.KernelPCA
now always returns output withn_components
components, unless the new parameterremove_zero_eig
is set toTrue
. This new behavior is consistent with the way kernel PCA was always documented; previously, the removal of components with zero eigenvalues was tacitly performed on all data.gcv_mode="auto"
no longer tries to perform SVD on a densified sparse matrix insklearn.linear_model.RidgeCV
.- Sparse matrix support in
sklearn.decomposition.RandomizedPCA
is now deprecated in favor of the newTruncatedSVD
. cross_validation.KFold
andcross_validation.StratifiedKFold
now enforce n_folds >= 2 otherwise aValueError
is raised. By Olivier Grisel.datasets.load_files
’scharset
andcharset_errors
parameters were renamedencoding
anddecode_errors
.- Attribute
oob_score_
insklearn.ensemble.GradientBoostingRegressor
andsklearn.ensemble.GradientBoostingClassifier
is deprecated and has been replaced byoob_improvement_
. - Attributes in OrthogonalMatchingPursuit have been deprecated (copy_X, Gram, …) and precompute_gram renamed precompute for consistency. See #2224.
sklearn.preprocessing.StandardScaler
now converts integer input to float, and raises a warning. Previously it rounded for dense integer input.sklearn.multiclass.OneVsRestClassifier
now has adecision_function
method. This will return the distance of each sample from the decision boundary for each class, as long as the underlying estimators implement thedecision_function
method. By Kyle Kastner (a sketch follows this list). - Better input validation, warning on unexpected shapes for y.
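As referenced above, a sketch of the new decision_function on OneVsRestClassifier, assuming a base estimator that itself implements decision_function:

    from sklearn.datasets import make_classification
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=100, n_features=5, n_informative=4,
                               n_redundant=0, n_classes=3, random_state=0)
    ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)
    # One signed distance from the decision boundary per class, per sample.
    print(ovr.decision_function(X).shape)  # (100, 3)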
People¶
List of contributors for release 0.14 by number of commits.
- 277 Gilles Louppe
- 245 Lars Buitinck
- 187 Andreas Mueller
- 124 Arnaud Joly
- 112 Jaques Grobler
- 109 Gael Varoquaux
- 107 Olivier Grisel
- 102 Noel Dawe
- 99 Kemal Eren
- 79 Joel Nothman
- 75 Jake VanderPlas
- 73 Nelle Varoquaux
- 71 Vlad Niculae
- 65 Peter Prettenhofer
- 64 Alexandre Gramfort
- 54 Mathieu Blondel
- 38 Nicolas Trésegnie
- 35 eustache
- 27 Denis Engemann
- 25 Yann N. Dauphin
- 19 Justin Vincent
- 17 Robert Layton
- 15 Doug Coleman
- 14 Michael Eickenberg
- 13 Robert Marchman
- 11 Fabian Pedregosa
- 11 Philippe Gervais
- 10 Jim Holmström
- 10 Tadej Janež
- 10 syhw
- 9 Mikhail Korobov
- 9 Steven De Gryze
- 8 sergeyf
- 7 Ben Root
- 7 Hrishikesh Huilgolkar
- 6 Kyle Kastner
- 6 Martin Luessi
- 6 Rob Speer
- 5 Federico Vaggi
- 5 Raul Garreta
- 5 Rob Zinkov
- 4 Ken Geis
- 3 A. Flaxman
- 3 Denton Cockburn
- 3 Dougal Sutherland
- 3 Ian Ozsvald
- 3 Johannes Schönberger
- 3 Robert McGibbon
- 3 Roman Sinayev
- 3 Szabo Roland
- 2 Diego Molla
- 2 Imran Haque
- 2 Jochen Wersdörfer
- 2 Sergey Karayev
- 2 Yannick Schwartz
- 2 jamestwebber
- 1 Abhijeet Kolhe
- 1 Alexander Fabisch
- 1 Bastiaan van den Berg
- 1 Benjamin Peterson
- 1 Daniel Velkov
- 1 Fazlul Shahriar
- 1 Felix Brockherde
- 1 Félix-Antoine Fortin
- 1 Harikrishnan S
- 1 Jack Hale
- 1 JakeMick
- 1 James McDermott
- 1 John Benediktsson
- 1 John Zwinck
- 1 Joshua Vredevoogd
- 1 Justin Pati
- 1 Kevin Hughes
- 1 Kyle Kelley
- 1 Matthias Ekman
- 1 Miroslav Shubernetskiy
- 1 Naoki Orii
- 1 Norbert Crombach
- 1 Rafael Cunha de Almeida
- 1 Rolando Espinoza La fuente
- 1 Seamus Abshere
- 1 Sergey Feldman
- 1 Sergio Medina
- 1 Stefano Lattarini
- 1 Steve Koch
- 1 Sturla Molden
- 1 Thomas Jarosch
- 1 Yaroslav Halchenko
Version 0.13.1¶
February 23, 2013
The 0.13.1 release only fixes some bugs and does not add any new functionality.
Changelog¶
- Fixed a testing error caused by the function
cross_validation.train_test_split
being interpreted as a test by Yaroslav Halchenko. - Fixed a bug in the reassignment of small clusters in the
cluster.MiniBatchKMeans
by Gael Varoquaux. - Fixed default value of
gamma
indecomposition.KernelPCA
by Lars Buitinck. - Updated joblib to
0.7.0d
by Gael Varoquaux. - Fixed scaling of the deviance in
ensemble.GradientBoostingClassifier
by Peter Prettenhofer. - Better tie-breaking in
multiclass.OneVsOneClassifier
by Andreas Müller. - Other small improvements to tests and documentation.
People¶
List of contributors for release 0.13.1 by number of commits.
- 16 Lars Buitinck
- 12 Andreas Müller
- 8 Gael Varoquaux
- 5 Robert Marchman
- 3 Peter Prettenhofer
- 2 Hrishikesh Huilgolkar
- 1 Bastiaan van den Berg
- 1 Diego Molla
- 1 Gilles Louppe
- 1 Mathieu Blondel
- 1 Nelle Varoquaux
- 1 Rafael Cunha de Almeida
- 1 Rolando Espinoza La fuente
- 1 Vlad Niculae
- 1 Yaroslav Halchenko
Version 0.13¶
January 21, 2013
New Estimator Classes¶
dummy.DummyClassifier
anddummy.DummyRegressor
, two data-independent predictors by Mathieu Blondel. Useful to sanity-check your estimators. See Dummy estimators in the user guide. Multioutput support added by Arnaud Joly.decomposition.FactorAnalysis
, a transformer implementing the classical factor analysis, by Christian Osendorfer and Alexandre Gramfort. See Factor Analysis in the user guide.feature_extraction.FeatureHasher
, a transformer implementing the “hashing trick” for fast, low-memory feature extraction from string fields by Lars Buitinck andfeature_extraction.text.HashingVectorizer
for text documents by Olivier Grisel See Feature hashing and Vectorizing a large text corpus with the hashing trick for the documentation and sample usage.pipeline.FeatureUnion
, a transformer that concatenates results of several other transformers by Andreas Müller. See FeatureUnion: composite feature spaces in the user guide.random_projection.GaussianRandomProjection
,random_projection.SparseRandomProjection
and the functionrandom_projection.johnson_lindenstrauss_min_dim
. The first two are transformers implementing Gaussian and sparse random projection matrix by Olivier Grisel and Arnaud Joly. See Random Projection in the user guide.kernel_approximation.Nystroem
, a transformer for approximating arbitrary kernels by Andreas Müller. See Nystroem Method for Kernel Approximation in the user guide.preprocessing.OneHotEncoder
, a transformer that computes binary encodings of categorical features by Andreas Müller. See Encoding categorical features in the user guide.linear_model.PassiveAggressiveClassifier
andlinear_model.PassiveAggressiveRegressor
, predictors implementing an efficient stochastic optimization for linear models by Rob Zinkov and Mathieu Blondel. See Passive Aggressive Algorithms in the user guide.ensemble.RandomTreesEmbedding
, a transformer for creating high-dimensional sparse representations using ensembles of totally random trees by Andreas Müller. See Totally Random Trees Embedding in the user guide.manifold.SpectralEmbedding
and functionmanifold.spectral_embedding
, implementing the “laplacian eigenmaps” transformation for non-linear dimensionality reduction by Wei Li. See Spectral Embedding in the user guide.isotonic.IsotonicRegression
by Fabian Pedregosa, Alexandre Gramfort and Nelle Varoquaux.
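A minimal sketch of the new isotonic estimator on toy data (the fit/predict spelling is assumed for the 0.13-era API):

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    x = np.arange(10, dtype=float)
    y = x + np.random.RandomState(0).randn(10)  # noisy increasing trend

    ir = IsotonicRegression()
    ir.fit(x, y)
    print(ir.predict(x))  # non-decreasing approximation of y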
Changelog¶
metrics.zero_one_loss
(formerlymetrics.zero_one
) now has an option for normalized output that reports the fraction of misclassifications rather than the raw number. By Kyle Beauchamp. tree.DecisionTreeClassifier
and all derived ensemble models now support sample weighting, by Noel Dawe and Gilles Louppe.- Speedup improvement when using bootstrap samples in forests of randomized trees, by Peter Prettenhofer and Gilles Louppe.
- Partial dependence plots for Gradient Tree Boosting in
ensemble.partial_dependence.partial_dependence
by Peter Prettenhofer. See Partial Dependence Plots for an example. - The table of contents on the website has now been made expandable by Jaques Grobler.
feature_selection.SelectPercentile
now breaks ties deterministically instead of returning all equally ranked features.feature_selection.SelectKBest
andfeature_selection.SelectPercentile
are more numerically stable since they use scores, rather than p-values, to rank results. This means that they might sometimes select different features than they did previously.- Ridge regression and ridge classification fitting with
sparse_cg
solver no longer has quadratic memory complexity, by Lars Buitinck and Fabian Pedregosa. - Ridge regression and ridge classification now support a new fast solver
called
lsqr
, by Mathieu Blondel. - Speed up of
metrics.precision_recall_curve
by Conrad Lee. - Added support for reading/writing svmlight files with pairwise
preference attribute (qid in svmlight file format) in
datasets.dump_svmlight_file
anddatasets.load_svmlight_file
by Fabian Pedregosa. - Faster and more robust
metrics.confusion_matrix
and Clustering performance evaluation by Wei Li. cross_validation.cross_val_score
now works with precomputed kernels and affinity matrices, by Andreas Müller.- LARS algorithm made more numerically stable with heuristics to drop regressors too correlated as well as to stop the path when numerical noise becomes predominant, by Gael Varoquaux.
- Faster implementation of
metrics.precision_recall_curve
by Conrad Lee. - New kernel
metrics.chi2_kernel
by Andreas Müller, often used in computer vision applications. - Fix of longstanding bug in
naive_bayes.BernoulliNB
fixed by Shaun Jackman. - Implemented
predict_proba
inmulticlass.OneVsRestClassifier
, by Andrew Winterman. - Improve consistency in gradient boosting: estimators
ensemble.GradientBoostingRegressor
andensemble.GradientBoostingClassifier
use the estimatortree.DecisionTreeRegressor
instead of thetree._tree.Tree
data structure by Arnaud Joly. - Fixed a floating point exception in the decision trees module, by Seberg.
- Fix
metrics.roc_curve
failing when y_true has only one class, by Wei Li. - Add the
metrics.mean_absolute_error
function which computes the mean absolute error. Themetrics.mean_squared_error
,metrics.mean_absolute_error
andmetrics.r2_score
metrics support multioutput by Arnaud Joly. - Fixed
class_weight
support insvm.LinearSVC
andlinear_model.LogisticRegression
by Andreas Müller. The meaning ofclass_weight
was reversed, as erroneously a higher weight meant fewer positives of a given class in earlier releases. - Improve narrative documentation and consistency in
sklearn.metrics
for regression and classification metrics by Arnaud Joly. - Fixed a bug in
sklearn.svm.SVC
when using csr-matrices with unsorted indices by Xinfan Meng and Andreas Müller. MiniBatchKMeans
: Add random reassignment of cluster centers with few observations attached to them, by Gael Varoquaux.
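A short sketch of MiniBatchKMeans with the reassignment behavior described in the last item; reassignment_ratio is assumed to be the knob controlling how aggressively sparsely populated centers are reassigned:

    from sklearn.cluster import MiniBatchKMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    # reassignment_ratio (assumed parameter name) controls the random
    # reassignment of centers with few observations attached to them.
    mbk = MiniBatchKMeans(n_clusters=4, reassignment_ratio=0.01, random_state=0)
    mbk.fit(X)
    print(mbk.cluster_centers_.shape)  # (4, 2)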
API changes summary¶
- Renamed all occurrences of
n_atoms
ton_components
for consistency. This applies todecomposition.DictionaryLearning
,decomposition.MiniBatchDictionaryLearning
,decomposition.dict_learning
,decomposition.dict_learning_online
. - Renamed all occurrences of
max_iters
tomax_iter
for consistency. This applies tosemi_supervised.LabelPropagation
andsemi_supervised.label_propagation.LabelSpreading
. - Renamed all occurrences of
learn_rate
tolearning_rate
for consistency inensemble.BaseGradientBoosting
andensemble.GradientBoostingRegressor
. - The module
sklearn.linear_model.sparse
is gone. Sparse matrix support was already integrated into the “regular” linear models. sklearn.metrics.mean_square_error
, which incorrectly returned the accumulated error, was removed. Usemean_squared_error
instead.- Passing
class_weight
parameters tofit
methods is no longer supported. Pass them to estimator constructors instead. - GMMs no longer have
decode
andrvs
methods. Use thescore
,predict
orsample
methods instead. - The
solver
fit option in Ridge regression and classification is now deprecated and will be removed in v0.14. Use the constructor option instead. feature_extraction.text.DictVectorizer
now returns sparse matrices in the CSR format, instead of COO.- Renamed
k
incross_validation.KFold
andcross_validation.StratifiedKFold
ton_folds
, renamedn_bootstraps
ton_iter
incross_validation.Bootstrap
. - Renamed all occurrences of
n_iterations
ton_iter
for consistency. This applies tocross_validation.ShuffleSplit
,cross_validation.StratifiedShuffleSplit
,utils.randomized_range_finder
andutils.randomized_svd
. - Replaced
rho
inlinear_model.ElasticNet
andlinear_model.SGDClassifier
byl1_ratio
. Therho
parameter had different meanings;l1_ratio
was introduced to avoid confusion. It has the same meaning as previouslyrho
inlinear_model.ElasticNet
and(1-rho)
inlinear_model.SGDClassifier
. linear_model.LassoLars
andlinear_model.Lars
now store a list of paths in the case of multiple targets, rather than an array of paths.- The attribute
gmm
ofhmm.GMMHMM
was renamed togmm_
to adhere more strictly with the API. cluster.spectral_embedding
was moved tomanifold.spectral_embedding
.- Renamed
eig_tol
inmanifold.spectral_embedding
,cluster.SpectralClustering
toeigen_tol
, renamedmode
toeigen_solver
. - Renamed
mode
inmanifold.spectral_embedding
andcluster.SpectralClustering
toeigen_solver
. classes_
andn_classes_
attributes oftree.DecisionTreeClassifier
and all derived ensemble models are now flat in case of single output problems and nested in case of multi-output problems.- The
estimators_
attribute ofensemble.gradient_boosting.GradientBoostingRegressor
andensemble.gradient_boosting.GradientBoostingClassifier
is now an array of tree.DecisionTreeRegressor. - Renamed
chunk_size
tobatch_size
indecomposition.MiniBatchDictionaryLearning
anddecomposition.MiniBatchSparsePCA
for consistency. svm.SVC
andsvm.NuSVC
now provide aclasses_
attribute and support arbitrary dtypes for labelsy
. Also, the dtype returned bypredict
now reflects the dtype ofy
duringfit
(used to benp.float
).- Changed default test_size in
cross_validation.train_test_split
to None, added possibility to infertest_size
fromtrain_size
incross_validation.ShuffleSplit
andcross_validation.StratifiedShuffleSplit
. - Renamed function
sklearn.metrics.zero_one
tosklearn.metrics.zero_one_loss
. Be aware that the default behavior insklearn.metrics.zero_one_loss
is different fromsklearn.metrics.zero_one
:normalize=False
is changed tonormalize=True
. - Renamed function
metrics.zero_one_score
tometrics.accuracy_score
. datasets.make_circles
now has the same number of inner and outer points.- In the Naive Bayes classifiers, the
class_prior
parameter was moved fromfit
to__init__
.
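A minimal sketch of the relocated parameter: class priors now go to the constructor rather than to fit (toy counts, illustrative prior):

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    X = np.array([[2, 1], [1, 3], [4, 0], [0, 5]])
    y = np.array([0, 1, 0, 1])

    # class_prior is a constructor argument and is no longer passed to fit().
    clf = MultinomialNB(class_prior=[0.6, 0.4]).fit(X, y)
    print(clf.predict(X))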
People¶
List of contributors for release 0.13 by number of commits.
- 364 Andreas Müller
- 143 Arnaud Joly
- 137 Peter Prettenhofer
- 131 Gael Varoquaux
- 117 Mathieu Blondel
- 108 Lars Buitinck
- 106 Wei Li
- 101 Olivier Grisel
- 65 Vlad Niculae
- 54 Gilles Louppe
- 40 Jaques Grobler
- 38 Alexandre Gramfort
- 30 Rob Zinkov
- 19 Aymeric Masurelle
- 18 Andrew Winterman
- 17 Fabian Pedregosa
- 17 Nelle Varoquaux
- 16 Christian Osendorfer
- 14 Daniel Nouri
- 13 Virgile Fritsch
- 13 syhw
- 12 Satrajit Ghosh
- 10 Corey Lynch
- 10 Kyle Beauchamp
- 9 Brian Cheung
- 9 Immanuel Bayer
- 9 mr.Shu
- 8 Conrad Lee
- 8 James Bergstra
- 7 Tadej Janež
- 6 Brian Cajes
- 6 Jake Vanderplas
- 6 Michael
- 6 Noel Dawe
- 6 Tiago Nunes
- 6 cow
- 5 Anze
- 5 Shiqiao Du
- 4 Christian Jauvin
- 4 Jacques Kvam
- 4 Richard T. Guy
- 4 Robert Layton
- 3 Alexandre Abraham
- 3 Doug Coleman
- 3 Scott Dickerson
- 2 ApproximateIdentity
- 2 John Benediktsson
- 2 Mark Veronda
- 2 Matti Lyra
- 2 Mikhail Korobov
- 2 Xinfan Meng
- 1 Alejandro Weinstein
- 1 Alexandre Passos
- 1 Christoph Deil
- 1 Eugene Nizhibitsky
- 1 Kenneth C. Arnold
- 1 Luis Pedro Coelho
- 1 Miroslav Batchkarov
- 1 Pavel
- 1 Sebastian Berg
- 1 Shaun Jackman
- 1 Subhodeep Moitra
- 1 bob
- 1 dengemann
- 1 emanuele
- 1 x006
Version 0.12.1¶
October 8, 2012
The 0.12.1 release is a bug-fix release with no additional features; it consists solely of bug fixes.
Changelog¶
- Improved numerical stability in spectral embedding by Gael Varoquaux
- Doctest under windows 64bit by Gael Varoquaux
- Documentation fixes for elastic net by Andreas Müller and Alexandre Gramfort
- Proper behavior with fortran-ordered NumPy arrays by Gael Varoquaux
- Make GridSearchCV work with non-CSR sparse matrix by Lars Buitinck
- Fix parallel computing in MDS by Gael Varoquaux
- Fix Unicode support in count vectorizer by Andreas Müller
- Fix MinCovDet breaking with X.shape = (3, 1) by Virgile Fritsch
- Fix clone of SGD objects by Peter Prettenhofer
- Stabilize GMM by Virgile Fritsch
Version 0.12¶
September 4, 2012
Changelog¶
- Various speed improvements of the decision trees module, by Gilles Louppe.
ensemble.GradientBoostingRegressor
andensemble.GradientBoostingClassifier
now support feature subsampling via themax_features
argument, by Peter Prettenhofer.- Added Huber and Quantile loss functions to
ensemble.GradientBoostingRegressor
, by Peter Prettenhofer. - Decision trees and forests of randomized trees now support multi-output classification and regression problems, by Gilles Louppe.
- Added
preprocessing.LabelEncoder
, a simple utility class to normalize labels or transform non-numerical labels, by Mathieu Blondel (a usage sketch follows this list). - Added the epsilon-insensitive loss and the ability to make probabilistic predictions with the modified Huber loss in Stochastic Gradient Descent, by Mathieu Blondel.
- Added Multi-dimensional Scaling (MDS), by Nelle Varoquaux.
- SVMlight file format loader now detects compressed (gzip/bzip2) files and decompresses them on the fly, by Lars Buitinck.
- SVMlight file format serializer now preserves double precision floating point values, by Olivier Grisel.
- A common testing framework for all estimators was added, by Andreas Müller.
- Understandable error messages for estimators that do not accept sparse input by Gael Varoquaux
- Speedups in hierarchical clustering by Gael Varoquaux. In particular building the tree now supports early stopping. This is useful when the number of clusters is not small compared to the number of samples.
- Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection, by Alexandre Gramfort.
- Added
metrics.auc_score
andmetrics.average_precision_score
convenience functions by Andreas Müller. - Improved sparse matrix support in the Feature selection module by Andreas Müller.
- New word boundaries-aware character n-gram analyzer for the Text feature extraction module by @kernc.
- Fixed bug in spectral clustering that led to single point clusters by Andreas Müller.
- In
feature_extraction.text.CountVectorizer
, added an option to ignore infrequent words,min_df
by Andreas Müller. - Add support for multiple targets in some linear models (ElasticNet, Lasso and OrthogonalMatchingPursuit) by Vlad Niculae and Alexandre Gramfort.
- Fixes in
decomposition.ProbabilisticPCA
score function by Wei Li. - Fixed feature importance computation in Gradient Tree Boosting.
API changes summary¶
- The old
scikits.learn
package has disappeared; all code should import fromsklearn
instead, which was introduced in 0.9. - In
metrics.roc_curve
, thethresholds
array is now returned with its order reversed, in order to keep it consistent with the order of the returned fpr
andtpr
. - In
hmm
objects, likehmm.GaussianHMM
,hmm.MultinomialHMM
, etc., all parameters must be passed to the object when initialising it and not throughfit
. Nowfit
will only accept the data as an input parameter. - For all SVM classes, a faulty behavior of
gamma
was fixed. Previously, the default gamma value was only computed the first timefit
was called and then stored. It is now recalculated on every call tofit
. - All
Base
classes are now abstract metaclasses so that they cannot be instantiated. cluster.ward_tree
now also returns the parent array. This is necessary for early-stopping in which case the tree is not completely built.- In
feature_extraction.text.CountVectorizer
the parametersmin_n
andmax_n
were joined to the parameter ngram_range
to enable grid-searching both at once. - In
feature_extraction.text.CountVectorizer
, words that appear only in one document are now ignored by default. To reproduce the previous behavior, setmin_df=1
. - Fixed API inconsistency:
linear_model.SGDClassifier.predict_proba
now returns a 2d array when fit on two classes.
discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function
anddiscriminant_analysis.LinearDiscriminantAnalysis.decision_function
now return 1d arrays when fit on two classes. - Grid of alphas used for fitting
linear_model.LassoCV
andlinear_model.ElasticNetCV
is now stored in the attributealphas_
rather than overriding the init parameteralphas
. - Linear models when alpha is estimated by cross-validation store
the estimated value in the
alpha_
attribute rather than justalpha
orbest_alpha
. ensemble.GradientBoostingClassifier
now supportsensemble.GradientBoostingClassifier.staged_predict_proba
, andensemble.GradientBoostingClassifier.staged_predict
.svm.sparse.SVC
and other sparse SVM classes are now deprecated. All classes in the Support Vector Machines module now automatically select the sparse or dense representation based on the input. - All clustering algorithms now interpret the array
X
given tofit
as input data, in particularcluster.SpectralClustering
andcluster.AffinityPropagation
which previously expected affinity matrices. - For clustering algorithms that take the desired number of clusters as a parameter,
this parameter is now called
n_clusters
.
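A sketch of the unified parameter name across clustering estimators; note that SpectralClustering here receives raw data, per the change above (toy data, 0.12-era defaults assumed):

    from sklearn.cluster import KMeans, SpectralClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
    # The desired number of clusters is called n_clusters everywhere.
    print(KMeans(n_clusters=3).fit(X).labels_[:10])
    print(SpectralClustering(n_clusters=3).fit(X).labels_[:10])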
People¶
- 267 Andreas Müller
- 94 Gilles Louppe
- 89 Gael Varoquaux
- 79 Peter Prettenhofer
- 60 Mathieu Blondel
- 57 Alexandre Gramfort
- 52 Vlad Niculae
- 45 Lars Buitinck
- 44 Nelle Varoquaux
- 37 Jaques Grobler
- 30 Alexis Mignon
- 30 Immanuel Bayer
- 27 Olivier Grisel
- 16 Subhodeep Moitra
- 13 Yannick Schwartz
- 12 @kernc
- 11 Virgile Fritsch
- 9 Daniel Duckworth
- 9 Fabian Pedregosa
- 9 Robert Layton
- 8 John Benediktsson
- 7 Marko Burjek
- 5 Nicolas Pinto
- 4 Alexandre Abraham
- 4 Jake Vanderplas
- 3 Brian Holt
- 3 Edouard Duchesnay
- 3 Florian Hoenig
- 3 flyingimmidev
- 2 Francois Savard
- 2 Hannes Schulz
- 2 Peter Welinder
- 2 Yaroslav Halchenko
- 2 Wei Li
- 1 Alex Companioni
- 1 Brandyn A. White
- 1 Bussonnier Matthias
- 1 Charles-Pierre Astolfi
- 1 Dan O’Huiginn
- 1 David Cournapeau
- 1 Keith Goodman
- 1 Ludwig Schwardt
- 1 Olivier Hervieu
- 1 Sergio Medina
- 1 Shiqiao Du
- 1 Tim Sheerman-Chase
- 1 buguen
Version 0.11¶
May 7, 2012
Changelog¶
Highlights¶
- Gradient boosted regression trees (Gradient Tree Boosting) for classification and regression by Peter Prettenhofer and Scott White .
- Simple dict-based feature loader with support for categorical variables
(
feature_extraction.DictVectorizer
) by Lars Buitinck. - Added Matthews correlation coefficient (
metrics.matthews_corrcoef
) and added macro and micro average options tometrics.precision_score
,metrics.recall_score
andmetrics.f1_score
by Satrajit Ghosh. - Out of Bag Estimates of generalization error for Ensemble methods by Andreas Müller.
- Randomized sparse linear models for feature selection, by Alexandre Gramfort and Gael Varoquaux
- Label Propagation for semi-supervised learning, by Clay Woolam. Note that the semi-supervised API is still a work in progress and may change.
- Added BIC/AIC model selection to classical Gaussian mixture models and unified the API with the remainder of scikit-learn, by Bertrand Thirion
- Added
sklearn.cross_validation.StratifiedShuffleSplit
, which is asklearn.cross_validation.ShuffleSplit
with balanced splits, by Yannick Schwartz. sklearn.neighbors.NearestCentroid
classifier added, along with ashrink_threshold
parameter, which implements shrunken centroid classification, by Robert Layton.
Other changes¶
- Merged dense and sparse implementations of Stochastic Gradient Descent module and
exposed utility extension types for sequential
datasets
seq_dataset
and weight vectorsweight_vector
by Peter Prettenhofer. - Added
partial_fit
(support for online/minibatch learning) and warm_start to the Stochastic Gradient Descent module by Mathieu Blondel. - Dense and sparse implementations of Support Vector Machines classes and
linear_model.LogisticRegression
merged by Lars Buitinck. - Regressors can now be used as base estimator in the Multiclass and multilabel algorithms module by Mathieu Blondel.
- Added n_jobs option to
metrics.pairwise.pairwise_distances
andmetrics.pairwise.pairwise_kernels
for parallel computation, by Mathieu Blondel. - K-means can now be run in parallel, using the
n_jobs
argument to either K-means orKMeans
, by Robert Layton. - Improved Cross-validation: evaluating estimator performance and Tuning the hyper-parameters of an estimator documentation
and introduced the new
cross_validation.train_test_split
helper function by Olivier Grisel. svm.SVC
memberscoef_
andintercept_
changed sign for consistency withdecision_function
; forkernel==linear
,coef_
was fixed in the one-vs-one case, by Andreas Müller.- Performance improvements to efficient leave-one-out cross-validated
Ridge regression, esp. for the
n_samples > n_features
case, inlinear_model.RidgeCV
, by Reuben Fletcher-Costin. - Refactoring and simplification of the Text feature extraction API and fixed a bug that caused possible negative IDF, by Olivier Grisel.
- Beam pruning option in
_BaseHMM
module has been removed since it is difficult to Cythonize. If you are interested in contributing a Cython version, you can use the python version in the git history as a reference. - Classes in Nearest Neighbors now support arbitrary Minkowski metric for
nearest neighbors searches. The metric can be specified by argument
p
.
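A minimal sketch of the new p argument on toy data; p=1 selects the Manhattan distance and p=2 the usual Euclidean distance:

    from sklearn.datasets import make_classification
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=100, n_features=4, random_state=0)
    # p parametrizes the Minkowski metric used for neighbor searches.
    knn = KNeighborsClassifier(n_neighbors=3, p=1).fit(X, y)
    print(knn.score(X, y))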
API changes summary¶
covariance.EllipticEnvelop
is now deprecated. Please use covariance.EllipticEnvelope
instead.NeighborsClassifier
andNeighborsRegressor
are gone in the module Nearest Neighbors. Use the classesKNeighborsClassifier
,RadiusNeighborsClassifier
,KNeighborsRegressor
and/orRadiusNeighborsRegressor
instead.- Sparse classes in the Stochastic Gradient Descent module are now deprecated.
- In
mixture.GMM
,mixture.DPGMM
andmixture.VBGMM
, parameters must be passed to an object when initialising it and not throughfit
. Nowfit
will only accept the data as an input parameter. - methods
rvs
anddecode
inGMM
module are now deprecated.sample
andscore
orpredict
should be used instead. - attribute
_scores
and_pvalues
in univariate feature selection objects are now deprecated.scores_
orpvalues_
should be used instead. - In
LogisticRegression
,LinearSVC
,SVC
andNuSVC
, theclass_weight
parameter is now an initialization parameter, not a parameter to fit. This makes grid searches over this parameter possible. - LFW
data
is now always shape(n_samples, n_features)
to be consistent with the Olivetti faces dataset. Useimages
andpairs
attribute to access the natural images shapes instead. - In
svm.LinearSVC
, the meaning of themulti_class
parameter changed. Options now are'ovr'
and'crammer_singer'
, with'ovr'
being the default. This does not change the default behavior but hopefully is less confusing. - Class
feature_extraction.text.Vectorizer
is deprecated and replaced by feature_extraction.text.TfidfVectorizer
. - The preprocessor / analyzer nested structure for text feature
extraction has been removed. All those features are
now directly passed as flat constructor arguments
to
feature_extraction.text.TfidfVectorizer
and feature_extraction.text.CountVectorizer
, in particular the following parameters are now used: analyzer
can be'word'
or'char'
to switch the default analysis scheme, or use a specific python callable (as previously).tokenizer
andpreprocessor
have been introduced to make it still possible to customize those steps with the new API.input
explicitly controls how to interpret the sequence passed to fit
andpredict
: filenames, file objects or direct (byte or Unicode) strings.- charset decoding is explicit and strict by default.
- the
vocabulary
, fitted or not is now stored in thevocabulary_
attribute to be consistent with the project conventions. - Class
feature_selection.text.TfidfVectorizer
now derives directly fromfeature_selection.text.CountVectorizer
to make grid search trivial. - methods
rvs
in_BaseHMM
module are now deprecated.sample
should be used instead. - Beam pruning option in
_BaseHMM
module is removed since it is difficult to be Cythonized. If you are interested, you can look in the history codes by git. - The SVMlight format loader now supports files with both zero-based and one-based column indices, since both occur “in the wild”.
- Arguments in class
ShuffleSplit
are now consistent withStratifiedShuffleSplit
. Argumentstest_fraction
andtrain_fraction
are deprecated and renamed totest_size
andtrain_size
and can accept bothfloat
andint
. - Arguments in class
Bootstrap
are now consistent withStratifiedShuffleSplit
. Argumentsn_test
andn_train
are deprecated and renamed totest_size
andtrain_size
and can accept bothfloat
andint
. - Argument
p
added to classes in Nearest Neighbors to specify an arbitrary Minkowski metric for nearest neighbors searches.
People¶
- 282 Andreas Müller
- 239 Peter Prettenhofer
- 198 Gael Varoquaux
- 129 Olivier Grisel
- 114 Mathieu Blondel
- 103 Clay Woolam
- 96 Lars Buitinck
- 88 Jaques Grobler
- 82 Alexandre Gramfort
- 50 Bertrand Thirion
- 42 Robert Layton
- 28 flyingimmidev
- 26 Jake Vanderplas
- 26 Shiqiao Du
- 21 Satrajit Ghosh
- 17 David Marek
- 17 Gilles Louppe
- 14 Vlad Niculae
- 11 Yannick Schwartz
- 10 Fabian Pedregosa
- 9 fcostin
- 7 Nick Wilson
- 5 Adrien Gaidon
- 5 Nicolas Pinto
- 4 David Warde-Farley
- 5 Nelle Varoquaux
- 5 Emmanuelle Gouillart
- 3 Joonas Sillanpää
- 3 Paolo Losi
- 2 Charles McCarthy
- 2 Roy Hyunjin Han
- 2 Scott White
- 2 ibayer
- 1 Brandyn White
- 1 Carlos Scheidegger
- 1 Claire Revillet
- 1 Conrad Lee
- 1 Edouard Duchesnay
- 1 Jan Hendrik Metzen
- 1 Meng Xinfan
- 1 Rob Zinkov
- 1 Shiqiao
- 1 Udi Weinsberg
- 1 Virgile Fritsch
- 1 Xinfan Meng
- 1 Yaroslav Halchenko
- 1 jansoe
- 1 Leon Palafox
Version 0.10¶
January 11, 2012
Changelog¶
- Python 2.5 compatibility was dropped; the minimum Python version needed to use scikit-learn is now 2.6.
- Sparse inverse covariance estimation using the graph Lasso, with associated cross-validated estimator, by Gael Varoquaux
- New Tree module by Brian Holt, Peter Prettenhofer, Satrajit Ghosh and Gilles Louppe. The module comes with complete documentation and examples.
- Fixed a bug in the RFE module by Gilles Louppe (issue #378).
- Fixed a memory leak in Support Vector Machines module by Brian Holt (issue #367).
- Faster tests by Fabian Pedregosa and others.
- Silhouette Coefficient cluster analysis evaluation metric added as
sklearn.metrics.silhouette_score
by Robert Layton. - Fixed a bug in K-means in the handling of the
n_init
parameter: the clustering algorithm used to be runn_init
times but the last solution was retained instead of the best solution by Olivier Grisel. - Minor refactoring in Stochastic Gradient Descent module; consolidated dense and sparse predict methods; Enhanced test time performance by converting model parameters to fortran-style arrays after fitting (only multi-class).
- Adjusted Mutual Information metric added as
sklearn.metrics.adjusted_mutual_info_score
by Robert Layton. - Models like SVC/SVR/LinearSVC/LogisticRegression from libsvm/liblinear now support scaling of C regularization parameter by the number of samples by Alexandre Gramfort.
- New Ensemble Methods module by Gilles Louppe and Brian Holt. The module comes with the random forest algorithm and the extra-trees method, along with documentation and examples.
- Novelty and Outlier Detection: outlier and novelty detection, by Virgile Fritsch.
- Kernel Approximation: a transform implementing kernel approximation for fast SGD on non-linear kernels by Andreas Müller.
- Fixed a bug due to atom swapping in Orthogonal Matching Pursuit (OMP) by Vlad Niculae.
- Sparse coding with a precomputed dictionary by Vlad Niculae.
- Mini Batch K-Means performance improvements by Olivier Grisel.
- K-means support for sparse matrices by Mathieu Blondel.
- Improved documentation for developers and for the
sklearn.utils
module, by Jake Vanderplas. - Vectorized 20newsgroups dataset loader
(
sklearn.datasets.fetch_20newsgroups_vectorized
) by Mathieu Blondel. - Multiclass and multilabel algorithms by Lars Buitinck.
- Utilities for fast computation of mean and variance for sparse matrices by Mathieu Blondel.
- Make
sklearn.preprocessing.scale
andsklearn.preprocessing.Scaler
work on sparse matrices by Olivier Grisel - Feature importances using decision trees and/or forest of trees, by Gilles Louppe.
- Parallel implementation of forests of randomized trees by Gilles Louppe.
sklearn.cross_validation.ShuffleSplit
can subsample the train sets as well as the test sets by Olivier Grisel.- Errors in the build of the documentation fixed by Andreas Müller.
API changes summary¶
Here are the code migration instructions when upgrading from scikit-learn version 0.9:
Some estimators that may overwrite their inputs to save memory previously had
overwrite_
parameters; these have been replaced withcopy_
parameters with exactly the opposite meaning.This particularly affects some of the estimators in
linear_model
. The default behavior is still to copy everything passed in.The SVMlight dataset loader
sklearn.datasets.load_svmlight_file
no longer supports loading two files at once; useload_svmlight_files
instead. Also, the (unused)buffer_mb
parameter is gone.Sparse estimators in the Stochastic Gradient Descent module use dense parameter vector
coef_
instead ofsparse_coef_
. This significantly improves test time performance.The Covariance estimation module now has a robust estimator of covariance, the Minimum Covariance Determinant estimator.
Cluster evaluation metrics in
metrics.cluster
have been refactored but the changes are backwards compatible. They have been moved to themetrics.cluster.supervised
, along withmetrics.cluster.unsupervised
which contains the Silhouette Coefficient.The
permutation_test_score
function now behaves the same way ascross_val_score
(i.e. uses the mean score across the folds.)Cross Validation generators now use integer indices (
indices=True
) by default instead of boolean masks. This makes it more intuitive to use with sparse matrix data. The functions used for sparse coding,
sparse_encode
andsparse_encode_parallel
have been combined intosklearn.decomposition.sparse_encode
, and the shapes of the arrays have been transposed for consistency with the matrix factorization setting, as opposed to the regression setting.Fixed an off-by-one error in the SVMlight/LibSVM file format handling; files generated using
sklearn.datasets.dump_svmlight_file
should be re-generated. (They should continue to work, but accidentally had one extra column of zeros prepended.)BaseDictionaryLearning
class replaced bySparseCodingMixin
.sklearn.utils.extmath.fast_svd
has been renamedsklearn.utils.extmath.randomized_svd
and the default oversampling is now fixed to 10 additional random vectors instead of doubling the number of components to extract. The new behavior follows the reference paper.
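A small sketch of the renamed helper on a toy matrix; the returned triple follows the usual truncated-SVD convention:

    import numpy as np
    from sklearn.utils.extmath import randomized_svd

    M = np.random.RandomState(0).rand(50, 30)
    # 10 additional random vectors of oversampling are used by default.
    U, s, Vt = randomized_svd(M, n_components=5)
    print(U.shape, s.shape, Vt.shape)  # (50, 5) (5,) (5, 30)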
People¶
The following people contributed to scikit-learn since last release:
- 246 Andreas Müller
- 242 Olivier Grisel
- 220 Gilles Louppe
- 183 Brian Holt
- 166 Gael Varoquaux
- 144 Lars Buitinck
- 73 Vlad Niculae
- 65 Peter Prettenhofer
- 64 Fabian Pedregosa
- 60 Robert Layton
- 55 Mathieu Blondel
- 52 Jake Vanderplas
- 44 Noel Dawe
- 38 Alexandre Gramfort
- 24 Virgile Fritsch
- 23 Satrajit Ghosh
- 3 Jan Hendrik Metzen
- 3 Kenneth C. Arnold
- 3 Shiqiao Du
- 3 Tim Sheerman-Chase
- 3 Yaroslav Halchenko
- 2 Bala Subrahmanyam Varanasi
- 2 DraXus
- 2 Michael Eickenberg
- 1 Bogdan Trach
- 1 Félix-Antoine Fortin
- 1 Juan Manuel Caicedo Carvajal
- 1 Nelle Varoquaux
- 1 Nicolas Pinto
- 1 Tiziano Zito
- 1 Xinfan Meng
Version 0.9¶
September 21, 2011
scikit-learn 0.9 was released in September 2011, three months after the 0.8 release, and includes the new modules Manifold learning and The Dirichlet Process, as well as several new algorithms and documentation improvements.
This release also includes the dictionary-learning work developed by Vlad Niculae as part of the Google Summer of Code program.
Changelog¶
- New Manifold learning module by Jake Vanderplas and Fabian Pedregosa.
- New Dirichlet Process Gaussian Mixture Model by Alexandre Passos
- Nearest Neighbors module refactoring by Jake Vanderplas: general refactoring, support for sparse matrices in input, speed and documentation improvements. See the next section for a full list of API changes.
- Improvements on the Feature selection module by Gilles Louppe : refactoring of the RFE classes, documentation rewrite, increased efficiency and minor API changes.
- Sparse principal components analysis (SparsePCA and MiniBatchSparsePCA) by Vlad Niculae, Gael Varoquaux and Alexandre Gramfort
- Printing an estimator now behaves independently of architectures and Python version thanks to Jean Kossaifi.
- Loader for libsvm/svmlight format by Mathieu Blondel and Lars Buitinck
- Documentation improvements: thumbnails in example gallery by Fabian Pedregosa.
- Important bugfixes in Support Vector Machines module (segfaults, bad performance) by Fabian Pedregosa.
- Added Multinomial Naive Bayes and Bernoulli Naive Bayes by Lars Buitinck
- Text feature extraction optimizations by Lars Buitinck
- Chi-Square feature selection (`feature_selection.univariate_selection.chi2`) by Lars Buitinck.
- Sample generators module refactoring by Gilles Louppe
- Multiclass and multilabel algorithms by Mathieu Blondel
- Ball tree rewrite by Jake Vanderplas
- Implementation of DBSCAN algorithm by Robert Layton
- Kmeans predict and transform by Robert Layton
- Preprocessing module refactoring by Olivier Grisel
- Faster mean shift by Conrad Lee
- New `Bootstrap`, Random permutations cross-validation a.k.a. Shuffle & Split, and various other improvements in cross validation schemes by Olivier Grisel and Gael Varoquaux
- Adjusted Rand index and V-Measure clustering evaluation metrics by Olivier Grisel
- Added Orthogonal Matching Pursuit by Vlad Niculae
- Added 2D-patch extractor utilities in the Feature extraction module by Vlad Niculae
- Implementation of `linear_model.LassoLarsCV` (cross-validated Lasso solver using the Lars algorithm) and `linear_model.LassoLarsIC` (BIC/AIC model selection in Lars) by Gael Varoquaux and Alexandre Gramfort
- Scalability improvements to `metrics.roc_curve` by Olivier Hervieu
- Distance helper functions `metrics.pairwise.pairwise_distances` and `metrics.pairwise.pairwise_kernels` by Robert Layton (a minimal sketch follows this list)
- Mini-Batch K-Means by Nelle Varoquaux and Peter Prettenhofer.
- Utilities for downloading datasets from the mldata.org repository by Pietro Berkes.
- The Olivetti faces dataset by David Warde-Farley.
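A minimal sketch of the distance helpers mentioned above, using today's import paths; the arrays and metric choices are invented for the example:

```python
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances, pairwise_kernels

X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
Y = np.array([[0.0, 0.0]])

# Distances between every row of X and every row of Y:
D = pairwise_distances(X, Y, metric="manhattan")

# Kernel similarities between the rows of X (RBF kernel here):
K = pairwise_kernels(X, metric="rbf", gamma=0.5)
```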
API changes summary¶
Here are the code migration instructions when upgrading from scikit-learn version 0.8:
- The `scikits.learn` package was renamed `sklearn`. There is still a `scikits.learn` package alias for backward compatibility.
  Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance, under Linux / MacOSX just run (make a backup first!):
  find -name "*.py" | xargs sed -i 's/\bscikits.learn\b/sklearn/g'
- Estimators no longer accept model parameters as `fit` arguments: instead all parameters must be passed as constructor arguments or via the now public `set_params` method inherited from `base.BaseEstimator` (a minimal sketch follows this list). Some estimators can still accept keyword arguments on `fit`, but this is restricted to data-dependent values (e.g. a Gram matrix or an affinity matrix that are precomputed from the `X` data matrix).
- The `cross_val` package has been renamed to `cross_validation`, although there is also a `cross_val` package alias in place for backward compatibility.
  Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance, under Linux / MacOSX just run (make a backup first!):
  find -name "*.py" | xargs sed -i 's/\bcross_val\b/cross_validation/g'
- The `score_func` argument of the `sklearn.cross_validation.cross_val_score` function is now expected to accept `y_test` and `y_predicted` as its only arguments for classification and regression tasks, or `X_test` for unsupervised estimators.
- The `gamma` parameter for support vector machine algorithms is set to `1 / n_features` by default, instead of `1 / n_samples`.
- The `sklearn.hmm` module has been marked as orphaned: it will be removed from scikit-learn in version 0.11 unless someone steps up to contribute documentation, examples and fix lurking numerical stability issues.
- `sklearn.neighbors` has been made into a submodule. The two previously available estimators, `NeighborsClassifier` and `NeighborsRegressor`, have been marked as deprecated. Their functionality has been divided among five new classes: `NearestNeighbors` for unsupervised neighbors searches, `KNeighborsClassifier` & `RadiusNeighborsClassifier` for supervised classification problems, and `KNeighborsRegressor` & `RadiusNeighborsRegressor` for supervised regression problems (see the second sketch after this list).
- `sklearn.ball_tree.BallTree` has been moved to `sklearn.neighbors.BallTree`. Using the former will generate a warning.
- `sklearn.linear_model.LARS()` and related classes (LassoLARS, LassoLARSCV, etc.) have been renamed to `sklearn.linear_model.Lars()`.
- All distance metrics and kernels in `sklearn.metrics.pairwise` now have a `Y` parameter, which by default is None. If not given, the result is the distance (or kernel similarity) between each pair of samples in `X`. If given, the result is the pairwise distance (or kernel similarity) between samples in `X` and samples in `Y`.
- `sklearn.metrics.pairwise.l1_distance` is now called `manhattan_distance`, and by default returns the pairwise distance. For the component-wise distance, set the parameter `sum_over_features` to `False`.
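A minimal sketch of the constructor-only parameter convention; `Ridge` and the toy data are stand-ins chosen for the example:

```python
from sklearn.linear_model import Ridge

# Model parameters are passed to the constructor, not to fit:
model = Ridge(alpha=0.5)

# ...and can be changed later through the public set_params method:
model.set_params(alpha=1.0)

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 2.0, 3.0]
model.fit(X, y)  # fit receives only data-dependent arguments
```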
Backward compatibility package aliases and other deprecated classes and functions will be removed in version 0.11.
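And a second sketch showing two of the new neighbors classes; the data and neighbor counts are invented for the example:

```python
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

# Unsupervised neighbor searches with the new NearestNeighbors class:
nn = NearestNeighbors(n_neighbors=2).fit(X)
distances, indices = nn.kneighbors([[1.1]])

# Supervised classification with one of the new estimators:
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[1.6]]))
```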
People¶
38 people contributed to this release.
- 387 Vlad Niculae
- 320 Olivier Grisel
- 192 Lars Buitinck
- 179 Gael Varoquaux
- 168 Fabian Pedregosa (INRIA, Parietal Team)
- 127 Jake Vanderplas
- 120 Mathieu Blondel
- 85 Alexandre Passos
- 67 Alexandre Gramfort
- 57 Peter Prettenhofer
- 56 Gilles Louppe
- 42 Robert Layton
- 38 Nelle Varoquaux
- 32 Jean Kossaifi
- 30 Conrad Lee
- 22 Pietro Berkes
- 18 andy
- 17 David Warde-Farley
- 12 Brian Holt
- 11 Robert
- 8 Amit Aides
- 8 Virgile Fritsch
- 7 Yaroslav Halchenko
- 6 Salvatore Masecchia
- 5 Paolo Losi
- 4 Vincent Schut
- 3 Alexis Metaireau
- 3 Bryan Silverthorn
- 3 Andreas Müller
- 2 Minwoo Jake Lee
- 1 Emmanuelle Gouillart
- 1 Keith Goodman
- 1 Lucas Wiman
- 1 Nicolas Pinto
- 1 Thouis (Ray) Jones
- 1 Tim Sheerman-Chase
Version 0.8¶
May 11, 2011
scikit-learn 0.8 was released in May 2011, one month after the first “international” scikit-learn coding sprint, and is marked by the inclusion of important modules: Hierarchical clustering, Cross decomposition, Non-negative matrix factorization (NMF or NNMF), initial support for Python 3, and by important enhancements and bug fixes.
Changelog¶
Several new modules were introduced during this release:
- New Hierarchical clustering module by Vincent Michel, Bertrand Thirion, Alexandre Gramfort and Gael Varoquaux.
- Kernel PCA implementation by Mathieu Blondel
- The Labeled Faces in the Wild face recognition dataset by Olivier Grisel.
- New Cross decomposition module by Edouard Duchesnay.
- Non-negative matrix factorization (NMF or NNMF) module by Vlad Niculae (a minimal sketch follows this list).
- Implementation of the Oracle Approximating Shrinkage algorithm by Virgile Fritsch in the Covariance estimation module.
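A minimal sketch of the new NMF module using today's import path; the non-negative data and component count are invented for the example:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.rand(10, 6))  # NMF requires non-negative input

# Factorize X as approximately W @ H with 3 non-negative components:
model = NMF(n_components=3, init="random", random_state=0)
W = model.fit_transform(X)
H = model.components_
```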
Some other modules benefited from significant improvements or cleanups.
- Initial support for Python 3: builds and imports cleanly, some modules are usable while others have failing tests by Fabian Pedregosa.
- `decomposition.PCA` is now usable from the Pipeline object by Olivier Grisel.
- Guide “How to optimize for speed” by Olivier Grisel.
- Fixes for memory leaks in libsvm bindings, 64-bit safer BallTree by Lars Buitinck.
- Bug and style fixes in the K-means algorithm by Jan Schlüter.
- Add attribute converged to Gaussian Mixture Models by Vincent Schut.
- Implemented `transform` and `predict_log_proba` in `discriminant_analysis.LinearDiscriminantAnalysis` by Mathieu Blondel.
- Refactoring in the Support Vector Machines module and bug fixes by Fabian Pedregosa, Gael Varoquaux and Amit Aides.
- Refactored SGD module (removed code duplication, better variable naming), added interface for sample weight by Peter Prettenhofer.
- Wrapped BallTree with Cython by Thouis (Ray) Jones.
- Added function `svm.l1_min_c` by Paolo Losi (a minimal sketch follows this list).
- Typos, doc style, etc. by Yaroslav Halchenko, Gael Varoquaux, Olivier Grisel, Yann Malet, Nicolas Pinto, Lars Buitinck and Fabian Pedregosa.
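A minimal sketch of `svm.l1_min_c`; the iris data is a stand-in chosen for the example:

```python
from sklearn.datasets import load_iris
from sklearn.svm import l1_min_c

X, y = load_iris(return_X_y=True)

# Lowest C such that an L1-penalized linear model keeps at least one
# non-zero coefficient (below it, the model is entirely null):
print(l1_min_c(X, y, loss="squared_hinge"))
```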
People¶
People that made this release possible preceded by number of commits:
- 159 Olivier Grisel
- 96 Gael Varoquaux
- 96 Vlad Niculae
- 94 Fabian Pedregosa
- 36 Alexandre Gramfort
- 32 Paolo Losi
- 31 Edouard Duchesnay
- 30 Mathieu Blondel
- 25 Peter Prettenhofer
- 22 Nicolas Pinto
- 11 Virgile Fritsch
- 7 Lars Buitinck
- 6 Vincent Michel
- 5 Bertrand Thirion
- 4 Thouis (Ray) Jones
- 4 Vincent Schut
- 3 Jan Schlüter
- 2 Julien Miotte
- 2 Matthieu Perrot
- 2 Yann Malet
- 2 Yaroslav Halchenko
- 1 Amit Aides
- 1 Andreas Müller
- 1 Feth Arezki
- 1 Meng Xinfan
Version 0.7¶
March 2, 2011
scikit-learn 0.7 was released in March 2011, roughly three months after the 0.6 release. This release is marked by speed improvements in existing algorithms such as k-Nearest Neighbors and K-Means, and by the inclusion of an efficient algorithm for computing the Ridge generalized cross-validation solution. Unlike the preceding release, no new modules were added.
Changelog¶
- Performance improvements for Gaussian Mixture Model sampling [Jan Schlüter].
- Implementation of efficient leave-one-out cross-validated Ridge in `linear_model.RidgeCV` [Mathieu Blondel] (a minimal sketch follows this list).
- Better handling of collinearity and early stopping in `linear_model.lars_path` [Alexandre Gramfort and Fabian Pedregosa].
- Fixes for liblinear ordering of labels and sign of coefficients [Dan Yamins, Paolo Losi, Mathieu Blondel and Fabian Pedregosa].
- Performance improvements for Nearest Neighbors algorithm in high-dimensional spaces [Fabian Pedregosa].
- Performance improvements for `cluster.KMeans` [Gael Varoquaux and James Bergstra].
- Sanity checks for SVM-based classes [Mathieu Blondel].
- Refactoring of `neighbors.NeighborsClassifier` and `neighbors.kneighbors_graph`: added different algorithms for the k-Nearest Neighbor Search and implemented a more stable algorithm for finding barycenter weights. Also added some developer documentation for this module; see notes_neighbors for more information [Fabian Pedregosa].
- Documentation improvements: added `pca.RandomizedPCA` and `linear_model.LogisticRegression` to the class reference. Also added references of matrices used for clustering and other fixes [Gael Varoquaux, Fabian Pedregosa, Mathieu Blondel, Olivier Grisel, Virgile Fritsch, Emmanuelle Gouillart].
- Bound `decision_function` in classes that make use of liblinear, dense and sparse variants, like `svm.LinearSVC` or `linear_model.LogisticRegression` [Fabian Pedregosa].
- Performance and API improvements to `metrics.euclidean_distances` and to `pca.RandomizedPCA` [James Bergstra].
- Fix compilation issues under NetBSD [Kamel Ibn Hassen Derouiche].
- Allow input sequences of different lengths in `hmm.GaussianHMM` [Ron Weiss].
- Fix bug in affinity propagation caused by incorrect indexing [Xinfan Meng].
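A minimal sketch of the leave-one-out Ridge mentioned above; the data and alpha grid are invented for the example:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.RandomState(0)
X = rng.rand(30, 4)
y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.01 * rng.randn(30)

# With the default cv=None, RidgeCV uses the efficient leave-one-out
# scheme to select the best alpha from the grid:
model = RidgeCV(alphas=[0.01, 0.1, 1.0]).fit(X, y)
print(model.alpha_)
```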
People¶
People that made this release possible preceded by number of commits:
- 85 Fabian Pedregosa
- 67 Mathieu Blondel
- 20 Alexandre Gramfort
- 19 James Bergstra
- 14 Dan Yamins
- 13 Olivier Grisel
- 12 Gael Varoquaux
- 4 Edouard Duchesnay
- 4 Ron Weiss
- 2 Satrajit Ghosh
- 2 Vincent Dubourg
- 1 Emmanuelle Gouillart
- 1 Kamel Ibn Hassen Derouiche
- 1 Paolo Losi
- 1 VirgileFritsch
- 1 Yaroslav Halchenko
- 1 Xinfan Meng
Version 0.6¶
December 21, 2010
scikit-learn 0.6 was released in December 2010. It is marked by the inclusion of several new modules and a general renaming of old ones. It is also marked by the inclusion of new examples, including applications to real-world datasets.
Changelog¶
- New stochastic gradient descent module by Peter Prettenhofer. The module comes with complete documentation and examples.
- Improved svm module: memory consumption has been reduced by 50%, heuristic to automatically set class weights, possibility to assign weights to samples (see SVM: Weighted samples for an example).
- New Gaussian Processes module by Vincent Dubourg. This module also has great documentation and some very neat examples. See example_gaussian_process_plot_gp_regression.py or example_gaussian_process_plot_gp_probabilistic_classification_after_regression.py for a taste of what can be done.
- It is now possible to use liblinear’s multi-class SVC (option `multi_class` in `svm.LinearSVC`).
- New features and performance improvements of text feature extraction.
- Improved sparse matrix support, both in main classes (`grid_search.GridSearchCV`) and in the modules sklearn.svm.sparse and sklearn.linear_model.sparse.
- Lots of cool new examples and a new section that uses real-world datasets was created. These include: Faces recognition example using eigenfaces and SVMs, Species distribution modeling, Libsvm GUI, Wikipedia principal eigenvector and others.
- Faster Least Angle Regression algorithm. It is now 2x faster than the R version in the worst case and up to 10x faster in some cases.
- Faster coordinate descent algorithm. In particular, the full path version of lasso (`linear_model.lasso_path`) is more than 200x faster than before.
- It is now possible to get probability estimates from a `linear_model.LogisticRegression` model (a minimal sketch follows this list).
- Module renaming: the glm module has been renamed to linear_model, the gmm module has been included into the more general mixture module and the sgd module has been included in linear_model.
- Lots of bug fixes and documentation improvements.
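A minimal sketch of the probability estimates mentioned above; the toy data is invented for the example:

```python
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

clf = LogisticRegression().fit(X, y)

# Per-class probability estimates for a new sample:
print(clf.predict_proba([[1.5]]))
```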
People¶
People that made this release possible preceded by number of commits:
- 207 Olivier Grisel
- 167 Fabian Pedregosa
- 97 Peter Prettenhofer
- 68 Alexandre Gramfort
- 59 Mathieu Blondel
- 55 Gael Varoquaux
- 33 Vincent Dubourg
- 21 Ron Weiss
- 9 Bertrand Thirion
- 3 Alexandre Passos
- 3 Anne-Laure Fouque
- 2 Ronan Amicel
- 1 Christian Osendorfer
Version 0.5¶
October 11, 2010
Changelog¶
New classes¶
- Support for sparse matrices in some classifiers of the svm and linear_model modules (see `svm.sparse.SVC`, `svm.sparse.SVR`, `svm.sparse.LinearSVC`, `linear_model.sparse.Lasso`, `linear_model.sparse.ElasticNet`)
- New `pipeline.Pipeline` object to compose different estimators (a minimal sketch follows this list).
- Recursive Feature Elimination routines in module Feature selection.
- Addition of various classes capable of cross validation in the linear_model module (`linear_model.LassoCV`, `linear_model.ElasticNetCV`, etc.).
- New, more efficient LARS algorithm implementation. The Lasso variant of the algorithm is also implemented. See `linear_model.lars_path`, `linear_model.Lars` and `linear_model.LassoLars`.
- New Hidden Markov Models module (see classes `hmm.GaussianHMM`, `hmm.MultinomialHMM`, `hmm.GMMHMM`)
) - New module feature_extraction (see class reference)
- New FastICA algorithm in module sklearn.fastica
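A minimal sketch of the new Pipeline object using present-day import paths; StandardScaler and the toy data are stand-ins chosen for the example:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = X @ np.array([1.0, 0.0, -1.0]) + 0.01 * rng.randn(20)

# Chain a scaler and a cross-validated Lasso into a single estimator:
pipe = Pipeline([("scale", StandardScaler()), ("lasso", LassoCV(cv=3))])
pipe.fit(X, y)
print(pipe.predict(X[:2]))
```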
Documentation¶
- Improved documentation for many modules, now separating narrative documentation from the class reference. As an example, see documentation for the SVM module and the complete class reference.
Fixes¶
- API changes: variable names now adhere to PEP-8 and are more meaningful.
- Fixes for svm module to run on a shared memory context (multiprocessing).
- It is again possible to generate latex (and thus PDF) from the sphinx docs.
Examples¶
- New examples using some of the mlcomp datasets: sphx_glr_auto_examples_mlcomp_sparse_document_classification.py (since removed) and Classification of text documents using sparse features
- Many more examples. See the full list of examples.
External dependencies¶
- Joblib is now a dependency of this package, although a copy is shipped with it (sklearn.externals.joblib).
Removed modules¶
- Module ann (Artificial Neural Networks) has been removed from the distribution. Users wanting this sort of algorithm should take a look at pybrain.
Misc¶
- New sphinx theme for the web page.
Authors¶
The following is a list of authors for this release, preceded by number of commits:
- 262 Fabian Pedregosa
- 240 Gael Varoquaux
- 149 Alexandre Gramfort
- 116 Olivier Grisel
- 40 Vincent Michel
- 38 Ron Weiss
- 23 Matthieu Perrot
- 10 Bertrand Thirion
- 9 VirgileFritsch
- 7 Yaroslav Halchenko
- 6 Edouard Duchesnay
- 4 Mathieu Blondel
- 1 Ariel Rokem
- 1 Matthieu Brucher
Version 0.4¶
August 26, 2010
Changelog¶
Major changes in this release include:
- Coordinate Descent algorithm (Lasso, ElasticNet) refactoring & speed improvements (roughly 100x faster).
- Coordinate Descent Refactoring (and bug fixing) for consistency with R’s package GLMNET.
- New metrics module.
- New GMM module contributed by Ron Weiss.
- Implementation of the LARS algorithm (without Lasso variant for now).
- feature_selection module redesign.
- Migration to GIT as version control system.
- Removal of obsolete attrselect module.
- Rename of private compiled extensions (added underscore).
- Removal of legacy unmaintained code.
- Documentation improvements (both docstring and rst).
- Improvement of the build system to (optionally) link with MKL. Also, provide a lite BLAS implementation in case no system-wide BLAS is found.
- Lots of new examples.
- Many, many bug fixes …
Authors¶
The committer list for this release is the following (preceded by number of commits):
- 143 Fabian Pedregosa
- 35 Alexandre Gramfort
- 34 Olivier Grisel
- 11 Gael Varoquaux
- 5 Yaroslav Halchenko
- 2 Vincent Michel
- 1 Chris Filo Gorgolewski
Earlier versions¶
Earlier versions included contributions by Fred Mailhot, David Cooke, David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.