Version 1.9

Legend for changelogs

  • Major Feature: something big that you couldn’t do before.

  • Feature: something that you couldn’t do before.

  • Efficiency: an existing feature now may not require as much computation or memory.

  • Enhancement: a miscellaneous minor improvement.

  • Fix: something that previously didn’t work as documented – or according to reasonable expectations – should now work.

  • API Change: you will need to change your code to have the same effect in the future; or a feature will be removed in the future.

Version 1.9.dev0

January 2026

Changes impacting many modules

  • Enhancement pipeline.Pipeline, pipeline.FeatureUnion and compose.ColumnTransformer now raise a clearer error message when an estimator class is passed instead of an instance. By Anne Beyer. #32888

  • Fix Raise a ValueError when sample_weight contains only zero values, to prevent fitting on meaningless input data. This change applies to all estimators that support the sample_weight parameter, and also affects metrics that validate sample weights. By Lucy Liu and John Hendricks. #32212

  • Fix Some parameter descriptions in the HTML representation of estimators were not properly escaped, which could lead to malformed HTML when a description contains characters such as < or >. By Olivier Grisel. #32942

Support for Array API

Additional estimators and functions have been updated to support all Array API-compliant inputs.

See Array API support (experimental) for more details.

sklearn.compose

sklearn.ensemble

sklearn.linear_model

  • Enhancement linear_model.ElasticNet, linear_model.ElasticNetCV and linear_model.enet_path can now fit ridge regression, i.e. with l1_ratio=0. Previously, the stopping criterion used a dual gap formulation that breaks down for l1_ratio=0; an alternative dual gap formulation is now used in this setting, which reduces spurious convergence warnings. By Christian Lorentzen. #32845

  • Fix linear_model.LassoCV and linear_model.ElasticNetCV now take the positive parameter into account when computing the maximum alpha, i.e. the smallest regularization strength at which all coefficients are zero. This impacts the search grid for the internally tuned alpha hyper-parameter stored in the alphas_ attribute. By Junteng Li. #32768

  • Fix Correct the formulation of alpha in linear_model.SGDOneClassSVM: the corrected value is alpha = nu instead of alpha = nu / 2. Note that this might change the values of fitted attributes such as coef_ and offset_, as well as the predictions made by this class. By Omar Salman. #32778
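For the l1_ratio=0 enhancement above, a hedged sanity check of the ridge equivalence: with l1_ratio=0, ElasticNet minimizes ||y - Xw||² / (2 · n_samples) + alpha / 2 · ||w||², which after multiplying through by 2 · n_samples is the Ridge objective with regularization strength n_samples · alpha. On scikit-learn versions before this change, the ElasticNet fit below may additionally emit a convergence warning from the old dual-gap criterion:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Ridge

rng = np.random.RandomState(0)
n_samples, n_features = 100, 5
X = rng.randn(n_samples, n_features)
y = X @ rng.randn(n_features) + 0.1 * rng.randn(n_samples)

alpha = 0.1
# l1_ratio=0 removes the L1 term entirely, leaving a pure L2 penalty.
enet = ElasticNet(alpha=alpha, l1_ratio=0.0, max_iter=100_000, tol=1e-12).fit(X, y)
# Equivalent Ridge problem, accounting for the 1 / (2 * n_samples) factor
# in the ElasticNet loss:
ridge = Ridge(alpha=n_samples * alpha).fit(X, y)
print(np.max(np.abs(enet.coef_ - ridge.coef_)))
```

The two coefficient vectors should agree to within the solver tolerance.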
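For the positive parameter fix above, a numpy sketch of the grid bound involved; the expressions below illustrate the idea (ignoring intercept centering) and are not the library's internal code:

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.randn(50, 4)
y = rng.randn(50)
n_samples = X.shape[0]
Xy = X.T @ y  # per-feature correlation with the target

# Unconstrained Lasso: every coefficient is zero once alpha >= max |X^T y| / n.
alpha_max = np.max(np.abs(Xy)) / n_samples
# With positive=True, negatively correlated features can never enter the model,
# so only the positive part of X^T y matters for the bound.
alpha_max_positive = max(np.max(Xy), 0.0) / n_samples
print(alpha_max, alpha_max_positive)
```

Ignoring the sign constraint can thus overestimate the top of the alpha grid, wasting part of the cross-validation search range.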

sklearn.metrics

  • Fix metrics.d2_pinball_score and metrics.d2_absolute_error_score now always use the "averaged_inverted_cdf" quantile method, both with and without sample weights. Previously, the "linear" quantile method was used in the unweighted case only, leading to surprising discrepancies when comparing results with unit weights. Note that all quantile interpolation methods are asymptotically equivalent in the large-sample limit, but this fix can change score values on small evaluation sets (without weights). By Virgil Chan. #31671
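The two quantile definitions mentioned above can disagree on small samples, which is what caused the discrepancies; a pure-numpy illustration (the method= keyword requires NumPy >= 1.22):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
# First quartile under the two definitions:
linear = np.quantile(x, 0.25, method="linear")
averaged = np.quantile(x, 0.25, method="averaged_inverted_cdf")
print(linear, averaged)  # 1.75 1.5
```

On large samples the two methods converge, which is why only small evaluation sets see score changes.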

sklearn.svm

sklearn.tree

sklearn.utils

  • Fix The parameter table in the HTML representation of all scikit-learn estimators inheriting from base.BaseEstimator displays each parameter's documentation as a tooltip. The tooltip for the last parameter in the last table of any HTML representation was previously partially hidden; this has been fixed. By Dea María Léon. #32887

Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.8, including:

TODO: update at the time of the release.