# Computation times
22:53.704 total execution time for 278 files from all galleries:
| Example | Time | Mem (MB) |
|---|---|---|
| Model-based and sequential feature selection | 01:05.806 | 0.0 |
| Evaluation of outlier detection estimators | 00:56.835 | 0.0 |
| Selecting dimensionality reduction with Pipeline and GridSearchCV | 00:43.733 | 0.0 |
| Comparing Random Forests and Histogram Gradient Boosting models | 00:43.474 | 0.0 |
| Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… | 00:42.997 | 0.0 |
| Scalable learning with polynomial kernel approximation | 00:41.185 | 0.0 |
| Post-hoc tuning the cut-off point of decision function | 00:35.350 | 0.0 |
| Post-tuning the decision threshold for cost-sensitive learning | 00:33.020 | 0.0 |
| Plotting Learning Curves and Checking Models’ Scalability | 00:32.877 | 0.0 |
| Sample pipeline for text feature extraction and evaluation | 00:29.753 | 0.0 |
| Early stopping of Stochastic Gradient Descent | 00:29.427 | 0.0 |
| Comparing randomized search and grid search for hyperparameter estimation | 00:27.805 | 0.0 |
| Image denoising using dictionary learning | 00:27.506 | 0.0 |
| Partial Dependence and Individual Conditional Expectation Plots | 00:23.253 | 0.0 |
| Comparison of Manifold Learning methods | 00:22.607 | 0.0 |
| Combine predictors using stacking | 00:22.386 | 0.0 |
| Comparing Target Encoder with Other Encoders | 00:21.865 | 0.0 |
| Manifold Learning methods on a severed sphere | 00:21.806 | 0.0 |
| Features in Histogram Gradient Boosting Trees | 00:20.613 | 0.0 |
| Poisson regression and non-normal loss | 00:19.028 | 0.0 |
| Scaling the regularization parameter for SVCs | 00:18.718 | 0.0 |
| Release Highlights for scikit-learn 1.8 | 00:18.545 | 0.0 |
| Swiss Roll And Swiss-Hole Reduction | 00:18.346 | 0.0 |
| Balance model complexity and cross-validated score | 00:18.163 | 0.0 |
| Overview of multiclass training meta-estimators | 00:18.118 | 0.0 |
| Time-related feature engineering | 00:16.827 | 0.0 |
| Release Highlights for scikit-learn 0.24 | 00:15.967 | 0.0 |
| Prediction Latency | 00:14.681 | 0.0 |
| Comparison of kernel ridge regression and SVR | 00:13.777 | 0.0 |
| MNIST classification using multinomial logistic + L1 | 00:13.606 | 0.0 |
| Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation | 00:12.688 | 0.0 |
| Demo of HDBSCAN clustering algorithm | 00:12.398 | 0.0 |
| Common pitfalls in the interpretation of coefficients of linear models | 00:11.722 | 0.0 |
| Test with permutations the significance of a classification score | 00:11.688 | 0.0 |
| Prediction Intervals for Gradient Boosting Regression | 00:10.855 | 0.0 |
| Lagged features for time series forecasting | 00:10.730 | 0.0 |
| The Johnson-Lindenstrauss bound for embedding with random projections | 00:10.622 | 0.0 |
| Tweedie regression on insurance claims | 00:10.439 | 0.0 |
| Visualization of MLP weights on MNIST | 00:10.212 | 0.0 |
| Custom refit strategy of a grid search with cross-validation | 00:10.207 | 0.0 |
| Gradient Boosting Out-of-Bag estimates | 00:09.700 | 0.0 |
| Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV | 00:09.564 | 0.0 |
| Permutation Importance with Multicollinear or Correlated Features | 00:08.771 | 0.0 |
| Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification | 00:08.370 | 0.0 |
| Out-of-core classification of text documents | 00:08.214 | 0.0 |
| Imputing missing values before building an estimator | 00:08.137 | 0.0 |
| Faces dataset decompositions | 00:07.743 | 0.0 |
| Biclustering documents with the Spectral Co-clustering algorithm | 00:07.734 | 0.0 |
| Image denoising using kernel PCA | 00:07.614 | 0.0 |
| Gradient Boosting regularization | 00:07.571 | 0.0 |
| Clustering text documents using k-means | 00:07.567 | 0.0 |
| Permutation Importance vs Random Forest Feature Importance (MDI) | 00:07.447 | 0.0 |
| Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture | 00:07.441 | 0.0 |
| Compare the effect of different scalers on data with outliers | 00:07.439 | 0.0 |
| Comparison between grid search and successive halving | 00:07.351 | 0.0 |
| Release Highlights for scikit-learn 1.2 | 00:07.091 | 0.0 |
| Faces recognition example using eigenfaces and SVMs | 00:06.994 | 0.0 |
| Species distribution modeling | 00:06.868 | 0.0 |
| Nested versus non-nested cross-validation | 00:06.650 | 0.0 |
| Plot the decision surfaces of ensembles of trees on the iris dataset | 00:06.617 | 0.0 |
| Classification of text documents using sparse features | 00:06.393 | 0.0 |
| Successive Halving Iterations | 00:06.306 | 0.0 |
| Imputing missing values with variants of IterativeImputer | 00:06.063 | 0.0 |
| Model Complexity Influence | 00:06.007 | 0.0 |
| Ability of Gaussian process regression (GPR) to estimate data noise-level | 00:05.724 | 0.0 |
| Support Vector Regression (SVR) using linear and non-linear kernels | 00:05.689 | 0.0 |
| Comparing different clustering algorithms on toy datasets | 00:05.572 | 0.0 |
| Categorical Feature Support in Gradient Boosting | 00:05.532 | 0.0 |
| Gaussian process classification (GPC) on iris dataset | 00:05.444 | 0.0 |
| Effect of varying threshold for self-training | 00:05.334 | 0.0 |
| Multiclass sparse logistic regression on 20newgroups | 00:05.278 | 0.0 |
| Segmenting the picture of greek coins in regions | 00:05.263 | 0.0 |
| Effect of model regularization on training and test error | 00:05.170 | 0.0 |
| Semi-supervised Classification on a Text Dataset | 00:05.164 | 0.0 |
| FeatureHasher and DictVectorizer Comparison | 00:05.127 | 0.0 |
| Comparison of kernel ridge and Gaussian process regression | 00:05.003 | 0.0 |
| RBF SVM parameters | 00:04.898 | 0.0 |
| Forecasting of CO2 level on Mona Loa dataset using Gaussian process regression (GPR) | 00:04.736 | 0.0 |
| Visualizing the stock market structure | 00:04.039 | 0.0 |
| Multi-class AdaBoosted Decision Trees | 00:03.826 | 0.0 |
| Kernel Density Estimation | 00:03.788 | 0.0 |
| OOB Errors for Random Forests | 00:03.558 | 0.0 |
| Model selection with Probabilistic PCA and Factor Analysis (FA) | 00:03.473 | 0.0 |
| Kernel Density Estimate of Species Distributions | 00:03.420 | 0.0 |
| Recursive feature elimination | 00:03.341 | 0.0 |
| Comparing anomaly detection algorithms for outlier detection on toy datasets | 00:03.279 | 0.0 |
| Feature discretization | 00:03.276 | 0.0 |
| Compare BIRCH and MiniBatchKMeans | 00:03.205 | 0.0 |
| Comparison of Calibration of Classifiers | 00:03.184 | 0.0 |
| A demo of K-Means clustering on the handwritten digits data | 00:02.990 | 0.0 |
| t-SNE: The effect of various perplexity values on the shape | 00:02.950 | 0.0 |
| Inductive Clustering | 00:02.874 | 0.0 |
| Early stopping in Gradient Boosting | 00:02.870 | 0.0 |
| Robust vs Empirical covariance estimate | 00:02.731 | 0.0 |
| Compare Stochastic learning strategies for MLPClassifier | 00:02.676 | 0.0 |
| Release Highlights for scikit-learn 1.4 | 00:02.634 | 0.0 |
| Probability Calibration curves | 00:02.562 | 0.0 |
| Plot classification probability | 00:02.482 | 0.0 |
| Column Transformer with Heterogeneous Data Sources | 00:02.460 | 0.0 |
| Advanced Plotting With Partial Dependence | 00:02.437 | 0.0 |
| Feature transformations with ensembles of trees | 00:02.373 | 0.0 |
| Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples | 00:02.339 | 0.0 |
| Restricted Boltzmann Machine features for digit classification | 00:02.205 | 0.0 |
| Ledoit-Wolf vs OAS estimation | 00:02.188 | 0.0 |
| Probabilistic predictions with Gaussian process classification (GPC) | 00:02.140 | 0.0 |
| Release Highlights for scikit-learn 0.22 | 00:02.130 | 0.0 |
| Vector Quantization Example | 00:02.096 | 0.0 |
| Hierarchical clustering with and without structure | 00:02.094 | 0.0 |
| Classifier comparison | 00:02.067 | 0.0 |
| Principal Component Analysis (PCA) on Iris Dataset | 00:02.060 | 0.0 |
| Online learning of a dictionary of parts of faces | 00:01.934 | 0.0 |
| Failure of Machine Learning to infer causal effects | 00:01.928 | 0.0 |
| Robust linear estimator fitting | 00:01.870 | 0.0 |
| Face completion with a multi-output estimators | 00:01.826 | 0.0 |
| Multilabel classification using a classifier chain | 00:01.792 | 0.0 |
| Comparing different hierarchical linkage methods on toy datasets | 00:01.780 | 0.0 |
| Varying regularization in Multi-layer Perceptron | 00:01.736 | 0.0 |
| Dimensionality Reduction with Neighborhood Components Analysis | 00:01.730 | 0.0 |
| Feature importances with a forest of trees | 00:01.722 | 0.0 |
| Class Likelihood Ratios to measure classification performance | 00:01.721 | 0.0 |
| Statistical comparison of models using grid search | 00:01.701 | 0.0 |
| Map data to a normal distribution | 00:01.647 | 0.0 |
| Explicit feature map approximation for RBF kernels | 00:01.630 | 0.0 |
| Gaussian Mixture Model Selection | 00:01.557 | 0.0 |
| Various Agglomerative Clustering on a 2D embedding of digits | 00:01.491 | 0.0 |
| Demo of OPTICS clustering algorithm | 00:01.489 | 0.0 |
| Release Highlights for scikit-learn 1.3 | 00:01.475 | 0.0 |
| Illustration of prior and posterior Gaussian process for different kernels | 00:01.356 | 0.0 |
| Compressive sensing: tomography reconstruction with L1 prior (Lasso) | 00:01.304 | 0.0 |
| Gradient Boosting regression | 00:01.247 | 0.0 |
| Probability Calibration for 3-class classification | 00:01.240 | 0.0 |
| Effect of transforming the targets in regression model | 00:01.218 | 0.0 |
| Plot classification boundaries with different SVM Kernels | 00:01.188 | 0.0 |
| Release Highlights for scikit-learn 1.5 | 00:01.176 | 0.0 |
| Column Transformer with Mixed Types | 00:01.160 | 0.0 |
| Pipelining: chaining a PCA and a logistic regression | 00:01.145 | 0.0 |
| Single estimator versus bagging: bias-variance decomposition | 00:01.126 | 0.0 |
| Visualizing cross-validation behavior in scikit-learn | 00:01.112 | 0.0 |
| Bisecting K-Means and Regular K-Means Performance Comparison | 00:01.091 | 0.0 |
| Lasso on dense and sparse data | 00:01.084 | 0.0 |
| Agglomerative clustering with different metrics | 00:01.056 | 0.0 |
| Empirical evaluation of the impact of k-means initialization | 00:01.035 | 0.0 |
| SVM Tie Breaking Example | 00:00.978 | 0.0 |
| Release Highlights for scikit-learn 1.1 | 00:00.973 | 0.0 |
| Demonstration of k-means assumptions | 00:00.968 | 0.0 |
| Adjustment for chance in clustering performance evaluation | 00:00.967 | 0.0 |
| Importance of Feature Scaling | 00:00.934 | 0.0 |
| Recursive feature elimination with cross-validation | 00:00.917 | 0.0 |
| Selecting the number of clusters with silhouette analysis on KMeans clustering | 00:00.905 | 0.0 |
| Plot individual and voting regression predictions | 00:00.828 | 0.0 |
| Plot the decision surface of decision trees trained on the iris dataset | 00:00.803 | 0.0 |
| Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset | 00:00.802 | 0.0 |
| Lasso model selection: AIC-BIC / cross-validation | 00:00.760 | 0.0 |
| Caching nearest neighbors | 00:00.755 | 0.0 |
| Visualizing the probabilistic predictions of a VotingClassifier | 00:00.727 | 0.0 |
| Novelty detection with Local Outlier Factor (LOF) | 00:00.715 | 0.0 |
| Two-class AdaBoost | 00:00.694 | 0.0 |
| Lasso, Lasso-LARS, and Elastic Net paths | 00:00.685 | 0.0 |
| GMM Initialization Methods | 00:00.647 | 0.0 |
| Comparing Linear Bayesian Regressors | 00:00.623 | 0.0 |
| Ridge coefficients as a function of the L2 Regularization | 00:00.623 | 0.0 |
| Sparse inverse covariance estimation | 00:00.581 | 0.0 |
| Monotonic Constraints | 00:00.570 | 0.0 |
| Release Highlights for scikit-learn 0.23 | 00:00.562 | 0.0 |
| Multiclass Receiver Operating Characteristic (ROC) | 00:00.561 | 0.0 |
| Demonstrating the different strategies of KBinsDiscretizer | 00:00.553 | 0.0 |
| Kernel PCA | 00:00.539 | 0.0 |
| Comparing random forests and the multi-output meta estimator | 00:00.534 | 0.0 |
| Quantile regression | 00:00.514 | 0.0 |
| Theil-Sen Regression | 00:00.513 | 0.0 |
| Decision Boundaries of Multinomial and One-vs-Rest Logistic Regression | 00:00.495 | 0.0 |
| Linear and Quadratic Discriminant Analysis with covariance ellipsoid | 00:00.482 | 0.0 |
| Simple 1D Kernel Density Estimation | 00:00.470 | 0.0 |
| Gaussian Mixture Model Sine Curve | 00:00.462 | 0.0 |
| Feature agglomeration vs. univariate selection | 00:00.461 | 0.0 |
| Principal Component Regression vs Partial Least Squares Regression | 00:00.456 | 0.0 |
| Outlier detection on a real data set | 00:00.445 | 0.0 |
| Gaussian Processes regression: basic introductory example | 00:00.440 | 0.0 |
| Concatenating multiple feature extraction methods | 00:00.436 | 0.0 |
| Illustration of Gaussian process classification (GPC) on the XOR dataset | 00:00.428 | 0.0 |
| A demo of the mean-shift clustering algorithm | 00:00.425 | 0.0 |
| A demo of the Spectral Biclustering algorithm | 00:00.425 | 0.0 |
| Label Propagation digits: Active learning | 00:00.418 | 0.0 |
| L1 Penalty and Sparsity in Logistic Regression | 00:00.413 | 0.0 |
| Post pruning decision trees with cost complexity pruning | 00:00.408 | 0.0 |
| Probability calibration of classifiers | 00:00.406 | 0.0 |
| IsolationForest example | 00:00.406 | 0.0 |
| Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood | 00:00.402 | 0.0 |
| Spectral clustering for image segmentation | 00:00.401 | 0.0 |
| L1-based models for Sparse Signals | 00:00.395 | 0.0 |
| Decision Tree Regression with AdaBoost | 00:00.394 | 0.0 |
| Polynomial and Spline interpolation | 00:00.379 | 0.0 |
| Ordinary Least Squares and Ridge Regression | 00:00.369 | 0.0 |
| Recognizing hand-written digits | 00:00.366 | 0.0 |
| Factor Analysis (with rotation) to visualize patterns | 00:00.366 | 0.0 |
| One-Class SVM versus One-Class SVM using Stochastic Gradient Descent | 00:00.362 | 0.0 |
| Blind source separation using FastICA | 00:00.338 | 0.0 |
| FastICA on 2D point clouds | 00:00.338 | 0.0 |
| A demo of structured Ward hierarchical clustering on an image of coins | 00:00.335 | 0.0 |
| Hashing feature transformation using Totally Random Trees | 00:00.322 | 0.0 |
| Precision-Recall | 00:00.320 | 0.0 |
| Target Encoder’s Internal Cross fitting | 00:00.320 | 0.0 |
| Plot Ridge coefficients as a function of the regularization | 00:00.310 | 0.0 |
| Decision Tree Regression | 00:00.300 | 0.0 |
| SVM-Anova: SVM with univariate feature selection | 00:00.297 | 0.0 |
| Curve Fitting with Bayesian Ridge Regression | 00:00.297 | 0.0 |
| Robust covariance estimation and Mahalanobis distances relevance | 00:00.263 | 0.0 |
| SVM: Weighted samples | 00:00.260 | 0.0 |
| A demo of the Spectral Co-Clustering algorithm | 00:00.260 | 0.0 |
| Visualizations with Display Objects | 00:00.259 | 0.0 |
| Label Propagation digits: Demonstrating performance | 00:00.258 | 0.0 |
| GMM covariances | 00:00.254 | 0.0 |
| Demo of affinity propagation clustering algorithm | 00:00.254 | 0.0 |
| Sparse coding with a precomputed dictionary | 00:00.247 | 0.0 |
| SGD: Penalties | 00:00.245 | 0.0 |
| Evaluate the performance of a classifier with Confusion Matrix | 00:00.242 | 0.0 |
| Comparison of F-test and mutual information | 00:00.225 | 0.0 |
| Gaussian Mixture Model Ellipsoids | 00:00.216 | 0.0 |
| Detection error tradeoff (DET) curve | 00:00.207 | 0.0 |
| Incremental PCA | 00:00.198 | 0.0 |
| Nearest Neighbors Classification | 00:00.193 | 0.0 |
| Using KBinsDiscretizer to discretize continuous features | 00:00.183 | 0.0 |
| Multi-dimensional scaling | 00:00.182 | 0.0 |
| Nearest Neighbors regression | 00:00.180 | 0.0 |
| Joint feature selection with multi-task Lasso | 00:00.179 | 0.0 |
| Gaussian processes on discrete data structures | 00:00.179 | 0.0 |
| Underfitting vs. Overfitting | 00:00.178 | 0.0 |
| Orthogonal Matching Pursuit | 00:00.174 | 0.0 |
| Comparison of LDA and PCA 2D projection of Iris dataset | 00:00.174 | 0.0 |
| Plot different SVM classifiers in the iris dataset | 00:00.172 | 0.0 |
| Plotting Cross-Validated Predictions | 00:00.169 | 0.0 |
| Univariate Feature Selection | 00:00.162 | 0.0 |
| Compare cross decomposition methods | 00:00.162 | 0.0 |
| Receiver Operating Characteristic (ROC) with cross validation | 00:00.162 | 0.0 |
| Comparison of the K-Means and MiniBatchKMeans clustering algorithms | 00:00.160 | 0.0 |
| Multilabel classification | 00:00.154 | 0.0 |
| Comparing Nearest Neighbors with and without Neighborhood Components Analysis | 00:00.152 | 0.0 |
| Plot the support vectors in LinearSVC | 00:00.152 | 0.0 |
| SVM: Separating hyperplane for unbalanced classes | 00:00.146 | 0.0 |
| Release Highlights for scikit-learn 1.7 | 00:00.144 | 0.0 |
| Demo of DBSCAN clustering algorithm | 00:00.142 | 0.0 |
| Regularization path of L1- Logistic Regression | 00:00.140 | 0.0 |
| Displaying Pipelines | 00:00.139 | 0.0 |
| Nearest Centroid Classification | 00:00.135 | 0.0 |
| ROC Curve with Visualization API | 00:00.133 | 0.0 |
| Label Propagation circles: Learning a complex structure | 00:00.131 | 0.0 |
| Introducing the set_output API | 00:00.127 | 0.0 |
| Neighborhood Components Analysis Illustration | 00:00.124 | 0.0 |
| Density Estimation for a Gaussian mixture | 00:00.121 | 0.0 |
| Plot randomly generated multilabel dataset | 00:00.118 | 0.0 |
| One-class SVM with non-linear kernel (RBF) | 00:00.118 | 0.0 |
| Iso-probability lines for Gaussian Processes classification (GPC) | 00:00.117 | 0.0 |
| Isotonic Regression | 00:00.117 | 0.0 |
| Understanding the decision tree structure | 00:00.114 | 0.0 |
| Feature agglomeration | 00:00.107 | 0.0 |
| Release Highlights for scikit-learn 1.6 | 00:00.106 | 0.0 |
| Plot multi-class SGD on the iris dataset | 00:00.092 | 0.0 |
| HuberRegressor vs Ridge on dataset with strong outliers | 00:00.088 | 0.0 |
| SGD: convex loss functions | 00:00.086 | 0.0 |
| Robust linear model estimation using RANSAC | 00:00.081 | 0.0 |
| Lasso model selection via information criteria | 00:00.081 | 0.0 |
| SVM with custom kernel | 00:00.077 | 0.0 |
| Plot Hierarchical Clustering Dendrogram | 00:00.073 | 0.0 |
| Outlier detection with Local Outlier Factor (LOF) | 00:00.066 | 0.0 |
| SGD: Weighted samples | 00:00.063 | 0.0 |
| An example of K-Means++ initialization | 00:00.060 | 0.0 |
| SGD: Maximum margin separating hyperplane | 00:00.059 | 0.0 |
| SVM: Maximum margin separating hyperplane | 00:00.053 | 0.0 |
| SVM Margins Example | 00:00.052 | 0.0 |
| Non-negative least squares | 00:00.051 | 0.0 |
| Metadata Routing | 00:00.050 | 0.0 |
| Displaying estimators and complex pipelines | 00:00.027 | 0.0 |
| Examples of Using FrozenEstimator | 00:00.017 | 0.0 |
| Release Highlights for scikit-learn 1.0 | 00:00.016 | 0.0 |
| Pipeline ANOVA SVM | 00:00.016 | 0.0 |
| Wikipedia principal eigenvector | 00:00.000 | 0.0 |
| `__sklearn_is_fitted__` as Developer API | 00:00.000 | 0.0 |
| Approximate nearest neighbors in TSNE | 00:00.000 | 0.0 |