Version 1.4

For a short description of the main highlights of the release, please refer to Release Highlights for scikit-learn 1.4.

Legend for changelogs

  • Major Feature something big that you couldn’t do before.

  • Feature something that you couldn’t do before.

  • Efficiency an existing feature now may not require as much computation or memory.

  • Enhancement a miscellaneous minor improvement.

  • Fix something that previously didn’t work as documented – or according to reasonable expectations – should now work.

  • API Change you will need to change your code to have the same effect in the future; or a feature will be removed in the future.

Version 1.4.2

April 2024

This release only adds support for NumPy 2.

Version 1.4.1.post1

February 2024

Note

The 1.4.1.post1 release includes a packaging fix requiring numpy<2 to account for incompatibilities with the NumPy 2.0 ABI. Note that the 1.4.1 release is not available on PyPI or conda-forge.

Metadata Routing

  • Fix Fix routing issue with ColumnTransformer when used inside another meta-estimator. #28188 by Adrin Jalali.

  • Fix No error is raised when no metadata is passed to a meta-estimator that includes a sub-estimator which doesn’t support metadata routing. #28256 by Adrin Jalali.

DataFrame Support

  • Enhancement Fix Pandas and Polars dataframes are validated directly without ducktyping checks. #28195 by Thomas Fan.

Changes impacting many modules

Metadata Routing

Changelog

sklearn.calibration

sklearn.cluster

sklearn.compose

  • Fix compose.ColumnTransformer now transforms into a polars dataframe when verbose_feature_names_out=True and the transformers internally use the same columns several times. Previously, it would raise an error due to duplicated column names. #28262 by Guillaume Lemaitre.
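
A minimal sketch of the scenario above, assuming the optional polars dependency is installed; the transformer names ("scale", "minmax") and the toy data are illustrative only:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import MinMaxScaler, StandardScaler

    df = pd.DataFrame({"x": [0.0, 1.0, 2.0]})

    # Two transformers reuse the same column "x"; with
    # verbose_feature_names_out=True the outputs are prefixed
    # ("scale__x", "minmax__x"), so the polars output has no
    # duplicated column names.
    ct = ColumnTransformer(
        [("scale", StandardScaler(), ["x"]), ("minmax", MinMaxScaler(), ["x"])],
        verbose_feature_names_out=True,
    ).set_output(transform="polars")

    print(ct.fit_transform(df))  # a polars DataFrame with two columns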

sklearn.ensemble

  • Fix Fixed a bug in HistGradientBoostingClassifier and HistGradientBoostingRegressor when fitted on a pandas DataFrame with extension dtypes, for example pd.Int64Dtype (see the sketch after this list). #28385 by Loïc Estève.

  • Fix Fixes the error message raised by ensemble.VotingClassifier when the target is multilabel or multiclass-multioutput in a DataFrame format. #27702 by Guillaume Lemaitre.
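
A minimal sketch of fitting the histogram-based estimators on a DataFrame with pandas nullable extension dtypes; the toy data below is illustrative only:

    import pandas as pd
    from sklearn.ensemble import HistGradientBoostingClassifier

    # Nullable extension dtypes, including a missing value, which the
    # histogram-based estimators handle natively.
    X = pd.DataFrame(
        {
            "a": pd.array([1, 2, 3, 4, None], dtype="Int64"),
            "b": pd.array([0.5, 1.5, 2.5, 3.5, 4.5], dtype="Float64"),
        }
    )
    y = [0, 1, 0, 1, 0]

    clf = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
    print(clf.predict(X))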

sklearn.impute

sklearn.inspection

sklearn.linear_model

sklearn.preprocessing

sklearn.tree

sklearn.utils

Version 1.4.0

January 2024

Changed models

The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.

  • Efficiency linear_model.LogisticRegression and linear_model.LogisticRegressionCV now have much better convergence for the solvers "lbfgs" and "newton-cg". Both solvers can now reach much higher precision for the coefficients depending on the specified tol. Additionally, lbfgs can make better use of tol, i.e., stop sooner or reach higher precision. Note: lbfgs is the default solver, so this change might affect many models. It also means that with this new version of scikit-learn, the resulting coefficients coef_ and intercept_ of your models will change for these two solvers when fit on the same data again. The amount of change depends on the specified tol; for small values you will get more precise results (see the sketch after this list). #26721 by Christian Lorentzen.

  • Fix Fixes a memory leak seen in PyPy for estimators using the Cython loss functions. #27670 by Guillaume Lemaitre.
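
A minimal sketch of the tol behaviour described in the LogisticRegression item above; the synthetic dataset and the tolerance values are illustrative only:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # With the improved lbfgs convergence, a tighter tol yields more
    # precise coefficients, while a loose tol stops earlier.
    coarse = LogisticRegression(solver="lbfgs", tol=1e-1, max_iter=10_000).fit(X, y)
    precise = LogisticRegression(solver="lbfgs", tol=1e-10, max_iter=10_000).fit(X, y)

    print(np.abs(coarse.coef_ - precise.coef_).max())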

Changes impacting all modules

  • Major Feature Transformers now support polars output with set_output(transform="polars") (see the sketch after this list). #27315 by Thomas Fan.

  • Enhancement All estimators now recognize the column names from any dataframe that adopts the DataFrame Interchange Protocol. Dataframes that return a correct representation through np.asarray(df) are expected to work with our estimators and functions. #26464 by Thomas Fan.

  • Enhancement The HTML representation of estimators now includes a link to the documentation and is color-coded to denote whether the estimator is fitted or not (unfitted estimators are orange, fitted estimators are blue). #26616 by Riccardo Cappuzzo, Ines Ibnukhsein, Gael Varoquaux, Joel Nothman and Lilian Boulard.

  • Fix Fixed a bug in most estimators and functions where setting a parameter to a large integer would cause a TypeError. #26648 by Naoise Holohan.
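
A minimal sketch of the new polars output mentioned in the first item of this list, assuming the optional polars dependency is installed:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]])

    # Request polars output for this transformer only; the global
    # sklearn.set_config(transform_output="polars") works as well.
    scaler = StandardScaler().set_output(transform="polars")
    X_out = scaler.fit_transform(X)
    print(type(X_out))  # a polars DataFrame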

Metadata Routing

The following models now support metadata routing in one or more of their methods. Refer to the Metadata Routing User Guide for more details.
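
As a minimal, hedged sketch of how routing is enabled and used (the pipeline below and its request settings are illustrative, not an exhaustive list of supported models):

    import numpy as np
    import sklearn
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Metadata routing is opt-in.
    sklearn.set_config(enable_metadata_routing=True)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = np.array([0, 1] * 10)
    sample_weight = np.ones(20)

    # Each consumer states explicitly whether it wants sample_weight;
    # the pipeline then routes the metadata passed to fit accordingly.
    clf = make_pipeline(
        StandardScaler().set_fit_request(sample_weight=False),
        LogisticRegression().set_fit_request(sample_weight=True),
    )
    clf.fit(X, y, sample_weight=sample_weight)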

Support for SciPy sparse arrays

Several estimators now support SciPy sparse arrays (a short sketch follows this section). The following functions and classes are impacted:

Functions:

Classes:
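
A minimal sketch, assuming SciPy >= 1.8 (where the sparse array classes were introduced); LogisticRegression is used purely as an example of an estimator that accepts sparse input:

    import numpy as np
    from scipy.sparse import csr_array
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = csr_array(rng.random((20, 5)))  # a sparse *array*, not a sparse matrix
    y = np.array([0, 1] * 10)

    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X[:3]))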

Support for Array API

Several estimators and functions support the Array API. These changes allow the estimators and functions to be used with other libraries such as JAX, CuPy, and PyTorch, which in turn enables some GPU-accelerated computations (a short sketch follows this section).

See Array API support (experimental) for more details.

Functions:

Classes:
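
A minimal sketch, assuming PyTorch and the optional array_api_compat package are installed; LinearDiscriminantAnalysis is used here as one example of an estimator with Array API support:

    import torch
    import sklearn
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Opt in to Array API dispatch so computations stay in the input's
    # array library (here PyTorch) instead of being converted to NumPy.
    sklearn.set_config(array_api_dispatch=True)

    X = torch.randn(100, 5)
    y = (torch.randn(100) > 0).to(torch.int64)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(type(lda.decision_function(X)))  # a torch.Tensor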

Private Loss Function Module

Changelog

sklearn.base

sklearn.calibration

sklearn.cluster

sklearn.compose

sklearn.covariance

sklearn.datasets

sklearn.decomposition

sklearn.ensemble

sklearn.feature_extraction

sklearn.feature_selection

sklearn.inspection

sklearn.kernel_ridge

sklearn.linear_model

sklearn.metrics

sklearn.model_selection

sklearn.multioutput

sklearn.neighbors

sklearn.preprocessing

sklearn.tree

sklearn.utils

  • Enhancement sklearn.utils.estimator_html_repr dynamically adapts diagram colors based on the browser’s prefers-color-scheme, providing improved adaptability to dark mode environments. #26862 by Andrew Goh Yisheng, Thomas Fan, Adrin Jalali.

  • Enhancement MetadataRequest and MetadataRouter now have a consumes method which can be used to check whether a given set of parameters would be consumed. #26831 by Adrin Jalali.

  • Enhancement Make sklearn.utils.check_array attempt to output int32-indexed CSR and COO arrays when converting from DIA arrays if the number of non-zero entries is small enough. This ensures that estimators implemented in Cython that do not accept int64-indexed sparse data structures now consistently accept the same sparse input formats for SciPy sparse matrices and arrays. #27372 by Guillaume Lemaitre.

  • Fix sklearn.utils.check_array now accepts both matrices and arrays from the scipy.sparse module. The previous implementation would fail when copy=True because it called the NumPy-specific np.may_share_memory, which does not work with SciPy sparse arrays and does not return the correct result for SciPy sparse matrices (see the sketch after this list). #27336 by Guillaume Lemaitre.

  • Fix check_estimators_pickle with readonly_memmap=True now relies on joblib’s own capability to allocate aligned memory mapped arrays when loading a serialized estimator instead of calling a dedicated private function that would crash when OpenBLAS misdetects the CPU architecture. #27614 by Olivier Grisel.

  • Fix The error message in check_array, raised when a sparse matrix is passed but accept_sparse is False, now suggests using .toarray() rather than X.toarray(). #27757 by Lucy Liu.

  • Fix Fix the function check_array to output the right error message when the input is a Series instead of a DataFrame. #28090 by Stan Furrer and Yao Xiao.

  • API Change sklearn.extmath.log_logistic is deprecated and will be removed in 1.6. Use -np.logaddexp(0, -x) instead. #27544 by Christian Lorentzen.
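
A minimal sketch of the check_array behaviour described in the items above, assuming SciPy >= 1.8 so that the sparse array classes are available:

    import numpy as np
    from scipy.sparse import csr_array, csr_matrix, dia_array
    from sklearn.utils import check_array

    X_dense = np.eye(4)

    # Both the legacy sparse matrices and the newer sparse arrays are
    # accepted, including when a copy is requested.
    check_array(csr_matrix(X_dense), accept_sparse="csr", copy=True)
    check_array(csr_array(X_dense), accept_sparse="csr", copy=True)

    # A DIA array is converted to the first accepted format (CSR here);
    # 32-bit indices are used when the number of non-zeros allows it.
    X_conv = check_array(dia_array(X_dense), accept_sparse=["csr", "coo"])
    print(type(X_conv), X_conv.indices.dtype)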
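
And a minimal sketch of the replacement suggested for the deprecated log_logistic, which computes log(1 / (1 + exp(-x))) in a numerically stable way:

    import numpy as np

    x = np.array([-50.0, -1.0, 0.0, 1.0, 50.0])

    # -np.logaddexp(0, -x) is the stable form of log(1 / (1 + exp(-x))).
    log_sigmoid = -np.logaddexp(0, -x)
    print(np.allclose(log_sigmoid, np.log(1.0 / (1.0 + np.exp(-x)))))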

Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.3, including:

101AlexMartin, Abhishek Singh Kushwah, Adam Li, Adarsh Wase, Adrin Jalali, Advik Sinha, Alex, Alexander Al-Feghali, Alexis IMBERT, AlexL, Alex Molas, Anam Fatima, Andrew Goh, andyscanzio, Aniket Patil, Artem Kislovskiy, Arturo Amor, ashah002, avm19, Ben Holmes, Ben Mares, Benoit Chevallier-Mames, Bharat Raghunathan, Binesh Bannerjee, Brendan Lu, Brevin Kunde, Camille Troillard, Carlo Lemos, Chad Parmet, Christian Clauss, Christian Lorentzen, Christian Veenhuis, Christos Aridas, Cindy Liang, Claudio Salvatore Arcidiacono, Connor Boyle, cynthias13w, DaminK, Daniele Ongari, Daniel Schmitz, Daniel Tinoco, David Brochart, Deborah L. Haar, DevanshKyada27, Dimitri Papadopoulos Orfanos, Dmitry Nesterov, DUONG, Edoardo Abati, Eitan Hemed, Elabonga Atuo, Elisabeth Günther, Emma Carballal, Emmanuel Ferdman, epimorphic, Erwan Le Floch, Fabian Egli, Filip Karlo Došilović, Florian Idelberger, Franck Charras, Gael Varoquaux, Ganesh Tata, Gleb Levitski, Guillaume Lemaitre, Haoying Zhang, Harmanan Kohli, Ily, ioangatop, IsaacTrost, Isaac Virshup, Iwona Zdzieblo, Jakub Kaczmarzyk, James McDermott, Jarrod Millman, JB Mountford, Jérémie du Boisberranger, Jérôme Dockès, Jiawei Zhang, Joel Nothman, John Cant, John Hopfensperger, Jona Sassenhagen, Jon Nordby, Julien Jerphanion, Kennedy Waweru, kevin moore, Kian Eliasi, Kishan Ved, Konstantinos Pitas, Koustav Ghosh, Kushan Sharma, ldwy4, Linus, Lohit SundaramahaLingam, Loic Esteve, Lorenz, Louis Fouquet, Lucy Liu, Luis Silvestrin, Lukáš Folwarczný, Lukas Geiger, Malte Londschien, Marcus Fraaß, Marek Hanuš, Maren Westermann, Mark Elliot, Martin Larralde, Mateusz Sokół, mathurinm, mecopur, Meekail Zain, Michael Higgins, Miki Watanabe, Milton Gomez, MN193, Mohammed Hamdy, Mohit Joshi, mrastgoo, Naman Dhingra, Naoise Holohan, Narendra Singh dangi, Noa Malem-Shinitski, Nolan, Nurseit Kamchyev, Oleksii Kachaiev, Olivier Grisel, Omar Salman, partev, Peter Hull, Peter Steinbach, Pierre de Fréminville, Pooja Subramaniam, Puneeth K, qmarcou, Quentin Barthélemy, Rahil Parikh, Rahul Mahajan, Raj Pulapakura, Raphael, Ricardo Peres, Riccardo Cappuzzo, Roman Lutz, Salim Dohri, Samuel O. Ronsin, Sandip Dutta, Sayed Qaiser Ali, scaja, scikit-learn-bot, Sebastian Berg, Shreesha Kumar Bhat, Shubhal Gupta, Søren Fuglede Jørgensen, Stefanie Senger, Tamara, Tanjina Afroj, THARAK HEGDE, thebabush, Thomas J. Fan, Thomas Roehr, Tialo, Tim Head, tongyu, Venkatachalam N, Vijeth Moudgalya, Vincent M, Vivek Reddy P, Vladimir Fokow, Xiao Yuan, Xuefeng Xu, Yang Tao, Yao Xiao, Yuchen Zhou, Yuusuke Hiramatsu