April 14, 2015
Fix sorting of labels in preprocessing.label_binarize. By Michael Heilman.
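A minimal sketch of the fixed behavior: the output columns of label_binarize follow the order given in the classes argument, so labels passed in any order are binarized consistently.

```python
from sklearn.preprocessing import label_binarize

# Columns of the result correspond to classes=[1, 3, 6], in that order.
Y = label_binarize([1, 6, 3, 1], classes=[1, 3, 6])
```

Here the first row is the indicator for label 1, the second for label 6, and so on.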
Fix a bug in cluster.KMeans with precompute_distances=False on Fortran-ordered data.
Fix a regression where utils.shuffle converted lists and dataframes to arrays. By Olivier Grisel.
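A quick sketch of the restored behavior: shuffling a plain Python list returns a list, not a numpy array.

```python
from sklearn.utils import shuffle

# The input type is preserved: a list in, a list out.
data = shuffle([1, 2, 3, 4, 5], random_state=0)
```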
March 26, 2015
Speed improvements (notably in cluster.DBSCAN), reduced memory requirements, bug fixes and better default settings.
Multinomial logistic regression and a path algorithm in linear_model.LogisticRegressionCV.
Out-of-core learning of PCA via decomposition.IncrementalPCA.
Probability calibration of classifiers using calibration.CalibratedClassifierCV.
cluster.Birch clustering method for large-scale datasets.
Scalable approximate nearest neighbors search with locality-sensitive hashing forests in neighbors.LSHForest.
Improved error messages and better validation when using malformed input data.
More robust integration with pandas dataframes.
neighbors.LSHForest implements locality-sensitive hashing for approximate nearest neighbors search. By Maheshakya Wijewardena.
Added svm.LinearSVR. This class uses the liblinear implementation of Support Vector Regression, which is much faster for large sample sizes than svm.SVR with a linear kernel. By Fabian Pedregosa and Qiang Luo.
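A minimal usage sketch of LinearSVR; the interface mirrors other scikit-learn regressors (fit, then predict).

```python
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR

# Synthetic regression problem with 5 features.
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# LinearSVR uses liblinear under the hood, so it scales to large n_samples.
reg = LinearSVR(C=1.0, max_iter=10000, random_state=0).fit(X, y)
pred = reg.predict(X[:3])
```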
Incremental fit for GaussianNB.
Added a warm_start constructor parameter to make it possible for any trained forest model to grow additional trees incrementally. By Laurent Direr.
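A minimal sketch of incremental tree growth with warm_start, shown here with RandomForestClassifier (the pattern is the same for the other forest models): increase n_estimators and call fit again, and only the additional trees are trained.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, random_state=0)

forest = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
forest.fit(X, y)                    # trains the first 10 trees

forest.set_params(n_estimators=15)  # request 5 more trees
forest.fit(X, y)                    # reuses the existing 10, adds only 5
```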
Added the cross_val_predict function, which computes cross-validated estimates. By Luis Pedro Coelho.
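A minimal sketch of cross_val_predict, which returns one out-of-fold prediction per sample (note: in current scikit-learn it lives in sklearn.model_selection rather than the 0.16-era sklearn.cross_validation module).

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict  # sklearn.cross_validation in 0.16

X, y = load_iris(return_X_y=True)

# Each sample is predicted by a model that never saw it during training.
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
```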
Added cross_validation.PredefinedSplit cross-validation for fixed user-provided cross-validation folds. By Thomas Unterthiner.
Added an option to hierarchical.ward_tree to return distances between nodes for both structured and unstructured versions of the algorithm. By Matteo Visconti di Oleggio Castello. The same option was added in hierarchical.linkage_tree. By Manoj Kumar.
Support sparse multilabel indicator representation in multiclass.OneVsRestClassifier (by Hamzeh Alsalhi, with thanks to Rohit Sivaprasad), as well as in evaluation metrics (by Joel Nothman).
Add support for multiclass in metrics.hinge_loss, with labels=None as an optional parameter. By Saurabh Jha.
Added a multi_class="multinomial" option in linear_model.LogisticRegression to implement a logistic regression solver that minimizes the cross-entropy or multinomial loss instead of the default One-vs-Rest setting. Supports lbfgs and newton-cg solvers. By Lars Buitinck and Manoj Kumar. Solver option newton-cg by Simon Wu.
DictVectorizer can now perform fit_transform on an iterable in a single pass when given the option sort=False. By Dan Blanchard.
RandomizedSearchCV can now be configured to work with estimators that may fail and raise errors on individual folds. This option is controlled by the error_score parameter. This does not affect errors raised on re-fit. By Michal Romaniuk.
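A sketch of error_score in action, using the modern sklearn.model_selection import path (sklearn.grid_search in 0.16): an invalid C makes SVC.fit raise, but the search records the given error_score for that candidate instead of aborting.

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV  # sklearn.grid_search in 0.16
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, random_state=0)

# C=-1.0 is invalid and raises during fit; error_score=0 assigns that
# candidate a score of 0 so the search can finish.
search = RandomizedSearchCV(
    SVC(), {"C": [-1.0, 1.0]}, n_iter=2, cv=2, error_score=0, random_state=0
)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # silence the FitFailedWarning
    search.fit(X, y)
```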
Added a digits parameter to metrics.classification_report to allow the report to show different precision of floating point numbers. By Ian Gilmore.
Added an n_iter_ attribute to estimators that accept a max_iter attribute in their constructor. By Manoj Kumar.
Sparse support for paired_distances. By Joel Nothman.
cluster.DBSCAN now supports sparse input and sample weights and has been optimized: the inner loop has been rewritten in Cython, and radius neighbors queries are now computed in batch. By Joel Nothman and Lars Buitinck.
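A minimal sketch of the sparse-input support: clustering a CSR matrix gives the same labels as the equivalent dense array.

```python
import numpy as np
from scipy import sparse
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
X = rng.rand(30, 2)

# DBSCAN accepts both dense arrays and scipy sparse matrices.
labels_dense = DBSCAN(eps=0.3, min_samples=3).fit_predict(X)
labels_sparse = DBSCAN(eps=0.3, min_samples=3).fit_predict(sparse.csr_matrix(X))
```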
Added a class_weight parameter to automatically weight samples by class frequency for tree.ExtraTreeClassifier. By Trevor Stephens.
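A minimal sketch of the parameter on an imbalanced problem; a dict gives explicit per-class weights (the automatic frequency-based mode was spelled 'auto' in this era and is 'balanced' in current releases).

```python
from sklearn.datasets import make_classification
from sklearn.tree import ExtraTreeClassifier

# 90/10 imbalanced binary problem.
X, y = make_classification(n_samples=100, weights=[0.9, 0.1], random_state=0)

# Upweight the minority class explicitly; class_weight="balanced" would
# derive these weights from the class frequencies instead.
clf = ExtraTreeClassifier(class_weight={0: 1.0, 1: 5.0}, random_state=0).fit(X, y)
```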
grid_search.RandomizedSearchCV now does sampling without replacement if all parameters are given as lists. By Andreas Müller.
Parallelized calculation of pairwise_distances is now supported for scipy metrics and custom callables. By Joel Nothman.
Make the stopping criterion for mixture.VBGMM less dependent on the number of samples by thresholding the average log-likelihood change instead of its sum over all samples. By Hervé Bredin.
cross_validation.train_test_split now preserves the input type, instead of converting to numpy arrays.
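A minimal sketch of the type preservation, using the modern sklearn.model_selection import path: splitting a list yields lists (and likewise a DataFrame yields DataFrames).

```python
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in 0.16

letters = ["a", "b", "c", "d", "e", "f"]

# The split pieces keep the input's type: lists in, lists out.
train, test = train_test_split(letters, test_size=2, random_state=0)
```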
Added an example of using FeatureUnion for heterogeneous input. By Matt Terry.
Documentation on scorers was improved, to highlight the handling of loss functions. By Matt Pico.
A discrepancy between liblinear output and scikit-learn’s wrappers is now noted. By Manoj Kumar.
Improved documentation generation: examples referring to a class or function are now shown in a gallery on the class/function’s API reference page. By Joel Nothman.
More explicit documentation of sample generators and of data transformation. By Joel Nothman.
Added silhouette plots for analysis of KMeans clustering using metrics.silhouette_score. See Selecting the number of clusters with silhouette analysis on KMeans clustering.
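A minimal sketch of the underlying score (the referenced example builds per-sample silhouette plots on top of this): higher values mean samples sit well inside their assigned cluster.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated blobs.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# The mean silhouette coefficient lies in [-1, 1].
score = silhouette_score(X, labels)
```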
Metaestimators now support ducktyping for the presence of predict_proba and other methods. This fixes the behavior of feature_selection.RFECV when nested. By Joel Nothman.
The scoring attribute of grid-search and cross-validation methods is no longer ignored when a grid_search.GridSearchCV is given as a base estimator or the base estimator doesn't have predict.
hierarchical.ward_tree now returns the children in the same order for both the structured and unstructured versions. By Matteo Visconti di Oleggio Castello.
Fix incomplete download of the dataset when datasets.download_20newsgroups is called. By Manoj Kumar.
Various fixes to the Gaussian processes subpackage by Vincent Dubourg and Jan Hendrik Metzen.
class_weight=='auto' now throws an appropriate error message and suggests a workaround. By Danny Sullivan.
gamma=g/2.; the definition of gamma is now consistent, which may substantially change your results if you use a fixed value. (If you cross-validated over gamma, it probably doesn't matter too much.) By Dougal Sutherland.
Pipeline objects now delegate the classes_ attribute to the underlying estimator. This allows, for instance, bagging of a pipeline object. By Arnaud Joly.
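A minimal sketch of the delegation: asking a fitted pipeline for classes_ forwards to its final estimator, which is what meta-estimators like baggers rely on.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Three-class problem.
X, y = make_classification(
    n_samples=90, n_classes=3, n_informative=4, random_state=0
)

pipe = Pipeline(
    [("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=500))]
)
pipe.fit(X, y)

# classes_ is forwarded from the final LogisticRegression step.
found_classes = pipe.classes_
```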
Fix numerical stability issues in linear_model.SGDRegressor by clipping large gradients and ensuring that weight decay rescaling is always positive (for large l2 regularization and large learning rate values). By Olivier Grisel.
When compute_full_tree is set to "auto", the full tree was built when n_clusters was high and early stopping was used when n_clusters was low, while the behavior should be vice versa in cluster.AgglomerativeClustering (and friends). This has been fixed by Manoj Kumar.
Avoid skipping the first nearest neighbor in the methods of sklearn.neighbors.NearestNeighbors and family when the query data is not the same as the fit data. By Manoj Kumar.
Fix log-density calculation in mixture.GMM with tied covariance. By Will Dawson.
Fixed round-off errors with non-positive-definite covariance matrices in GMM. By Alexis Mignon.
Flip sign of dual_coef_ in svm.SVC to make it consistent with the documentation and decision_function. By Artem Sobolev.
API changes summary
cross_val_score and other meta-estimators don't convert pandas DataFrames into arrays any more, allowing DataFrame-specific operations in custom estimators.
multiclass.fit_ecoc and multiclass.predict_ecoc are deprecated. Use the underlying estimators instead.
Nearest neighbors estimators used to take arbitrary keyword arguments and pass these to their distance metric. This will no longer be supported in scikit-learn 0.18; use the metric_params argument instead.
The n_jobs parameter of the fit method shifted to the constructor of the LinearRegression class.
The predict_proba method of multiclass.OneVsRestClassifier now returns two probabilities per sample in the multiclass case; this is consistent with other estimators and with the method's documentation, but previous versions accidentally returned only the positive probability. Fixed by Will Lamond and Lars Buitinck.
Change the default value of precompute in Lasso to False. Setting precompute to "auto" was found to be slower when n_samples > n_features, since the computation of the Gram matrix is computationally expensive and outweighs the benefit of fitting the Gram for just one alpha. precompute="auto" is now deprecated and will be removed in 0.18. By Manoj Kumar.
Users should now supply an explicit average parameter to sklearn.metrics.precision_score when performing multiclass or multilabel (i.e. not binary) classification. By Joel Nothman.
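A minimal sketch of the explicit average parameter on a multiclass problem (zero_division=0 is a later addition used here only to silence the warning for classes with no correct predictions):

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# "macro": unweighted mean of the per-class precisions.
macro = precision_score(y_true, y_pred, average="macro", zero_division=0)

# "micro": precision computed over the pooled predictions.
micro = precision_score(y_true, y_pred, average="micro")
```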
The scoring parameter for cross validation now accepts 'f1_micro', 'f1_macro' or 'f1_weighted'; 'f1' is now for binary classification only. Similar changes apply to 'precision' and 'recall'. By Joel Nothman.
From now onwards, all estimators will uniformly raise NotFittedError (utils.validation.NotFittedError) when any of the predict-like methods are called before the model is fit. By Raghav RV.
Input data validation was refactored for more consistent input validation. The check_arrays function was replaced by check_array and check_X_y. By Andreas Müller.
Allow X=None in the methods of sklearn.neighbors.NearestNeighbors and family. If set to None, then for every sample this avoids setting the sample itself as the first nearest neighbor. By Manoj Kumar.
Added a parameter include_self in neighbors.radius_neighbors_graph which has to be explicitly set by the user. If set to True, then the sample itself is considered as the first nearest neighbor.
The thresh parameter is deprecated in favor of a new tol parameter. See the Enhancements section for details. By Hervé Bredin.
Estimators will treat input with dtype object as numeric when possible. By Andreas Müller.
Estimators now raise ValueError consistently when fitted on empty data (less than 1 sample or less than 1 feature for 2D input). By Olivier Grisel.
linear_model.PassiveAggressiveRegressor now defaults to
A. Flaxman, Aaron Schumacher, Aaron Staple, abhishek thakur, Akshay, akshayah3, Aldrian Obaja, Alexander Fabisch, Alexandre Gramfort, Alexis Mignon, Anders Aagaard, Andreas Mueller, Andreas van Cranenburgh, Andrew Tulloch, Andrew Walker, Antony Lee, Arnaud Joly, banilo, Barmaley.exe, Ben Davies, Benedikt Koehler, bhsu, Boris Feld, Borja Ayerdi, Boyuan Deng, Brent Pedersen, Brian Wignall, Brooke Osborn, Calvin Giles, Cathy Deng, Celeo, cgohlke, chebee7i, Christian Stade-Schuldt, Christof Angermueller, Chyi-Kwei Yau, CJ Carey, Clemens Brunner, Daiki Aminaka, Dan Blanchard, danfrankj, Danny Sullivan, David Fletcher, Dmitrijs Milajevs, Dougal J. Sutherland, Erich Schubert, Fabian Pedregosa, Florian Wilhelm, floydsoft, Félix-Antoine Fortin, Gael Varoquaux, Garrett-R, Gilles Louppe, gpassino, gwulfs, Hampus Bengtsson, Hamzeh Alsalhi, Hanna Wallach, Harry Mavroforakis, Hasil Sharma, Helder, Herve Bredin, Hsiang-Fu Yu, Hugues SALAMIN, Ian Gilmore, Ilambharathi Kanniah, Imran Haque, isms, Jake VanderPlas, Jan Dlabal, Jan Hendrik Metzen, Jatin Shah, Javier López Peña, jdcaballero, Jean Kossaifi, Jeff Hammerbacher, Joel Nothman, Jonathan Helmus, Joseph, Kaicheng Zhang, Kevin Markham, Kyle Beauchamp, Kyle Kastner, Lagacherie Matthieu, Lars Buitinck, Laurent Direr, leepei, Loic Esteve, Luis Pedro Coelho, Lukas Michelbacher, maheshakya, Manoj Kumar, Manuel, Mario Michael Krell, Martin, Martin Billinger, Martin Ku, Mateusz Susik, Mathieu Blondel, Matt Pico, Matt Terry, Matteo Visconti dOC, Matti Lyra, Max Linke, Mehdi Cherti, Michael Bommarito, Michael Eickenberg, Michal Romaniuk, MLG, mr.Shu, Nelle Varoquaux, Nicola Montecchio, Nicolas, Nikolay Mayorov, Noel Dawe, Okal Billy, Olivier Grisel, Óscar Nájera, Paolo Puggioni, Peter Prettenhofer, Pratap Vardhan, pvnguyen, queqichao, Rafael Carrascosa, Raghav R V, Rahiel Kasim, Randall Mason, Rob Zinkov, Robert Bradshaw, Saket Choudhary, Sam Nicholls, Samuel Charron, Saurabh Jha, sethdandridge, sinhrks, snuderl, Stefan Otte, Stefan van 
der Walt, Steve Tjoa, swu, Sylvain Zimmer, tejesh95, terrycojones, Thomas Delteil, Thomas Unterthiner, Tomas Kazmar, trevorstephens, tttthomasssss, Tzu-Ming Kuo, ugurcaliskan, ugurthemaster, Vinayak Mehta, Vincent Dubourg, Vjacheslav Murashkin, Vlad Niculae, wadawson, Wei Xue, Will Lamond, Wu Jiang, x0l, Xinfan Meng, Yan Yi, Yu-Chin