# Computation times
23:44.999 total execution time for 293 files from all galleries:
| Example | Time | Mem (MB) |
|---|---|---|
| Release Highlights for scikit-learn 0.24 | 01:14.219 | 0.0 |
| Post-tuning the decision threshold for cost-sensitive learning | 01:00.760 | 0.0 |
| Evaluation of outlier detection estimators | 00:59.575 | 0.0 |
| Comparing Random Forests and Histogram Gradient Boosting models | 00:59.036 | 0.0 |
| Selecting dimensionality reduction with Pipeline and GridSearchCV | 00:42.494 | 0.0 |
| Comparing Target Encoder with Other Encoders | 00:39.551 | 0.0 |
| Model-based and sequential feature selection | 00:33.623 | 0.0 |
| Early stopping of Stochastic Gradient Descent | 00:31.447 | 0.0 |
| Image denoising using dictionary learning | 00:31.134 | 0.0 |
| Post-hoc tuning the cut-off point of decision function | 00:30.948 | 0.0 |
| Out-of-core classification of text documents | 00:28.314 | 0.0 |
| Sample pipeline for text feature extraction and evaluation | 00:28.004 | 0.0 |
| Faces recognition example using eigenfaces and SVMs | 00:27.921 | 0.0 |
| Combine predictors using stacking | 00:27.697 | 0.0 |
| Poisson regression and non-normal loss | 00:25.986 | 0.0 |
| Partial Dependence and Individual Conditional Expectation Plots | 00:25.696 | 0.0 |
| Features in Histogram Gradient Boosting Trees | 00:23.776 | 0.0 |
| Plotting Learning Curves and Checking Models’ Scalability | 00:22.965 | 0.0 |
| Scalable learning with polynomial kernel approximation | 00:21.358 | 0.0 |
| Overview of multiclass training meta-estimators | 00:20.235 | 0.0 |
| Model Complexity Influence | 00:19.112 | 0.0 |
| Swiss Roll And Swiss-Hole Reduction | 00:16.909 | 0.0 |
| Common pitfalls in the interpretation of coefficients of linear models | 00:16.874 | 0.0 |
| Prediction Latency | 00:16.640 | 0.0 |
| Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… | 00:15.826 | 0.0 |
| Scaling the regularization parameter for SVCs | 00:15.714 | 0.0 |
| Biclustering documents with the Spectral Co-clustering algorithm | 00:14.718 | 0.0 |
| Demo of HDBSCAN clustering algorithm | 00:14.312 | 0.0 |
| Time-related feature engineering | 00:13.689 | 0.0 |
| Comparison of Manifold Learning methods | 00:13.437 | 0.0 |
| Image denoising using kernel PCA | 00:12.535 | 0.0 |
| Lagged features for time series forecasting | 00:11.813 | 0.0 |
| Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation | 00:11.487 | 0.0 |
| Test with permutations the significance of a classification score | 00:11.445 | 0.0 |
| Tweedie regression on insurance claims | 00:11.224 | 0.0 |
| Species distribution modeling | 00:10.610 | 0.0 |
| The Johnson-Lindenstrauss bound for embedding with random projections | 00:10.542 | 0.0 |
| MNIST classification using multinomial logistic + L1 | 00:10.214 | 0.0 |
| Custom refit strategy of a grid search with cross-validation | 00:09.684 | 0.0 |
| Compressive sensing: tomography reconstruction with L1 prior (Lasso) | 00:09.621 | 0.0 |
| Prediction Intervals for Gradient Boosting Regression | 00:09.440 | 0.0 |
| Gradient Boosting Out-of-Bag estimates | 00:09.435 | 0.0 |
| Faces dataset decompositions | 00:09.132 | 0.0 |
| Imputing missing values before building an estimator | 00:08.392 | 0.0 |
| Comparing various online solvers | 00:08.259 | 0.0 |
| Gradient Boosting regularization | 00:08.129 | 0.0 |
| Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification | 00:08.053 | 0.0 |
| Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV | 00:07.960 | 0.0 |
| Compare the effect of different scalers on data with outliers | 00:07.905 | 0.0 |
| Comparison of kernel ridge regression and SVR | 00:07.782 | 0.0 |
| Multilabel classification using a classifier chain | 00:07.565 | 0.0 |
| Comparison between grid search and successive halving | 00:07.098 | 0.0 |
| Clustering text documents using k-means | 00:07.071 | 0.0 |
| Visualization of MLP weights on MNIST | 00:06.609 | 0.0 |
| Semi-supervised Classification on a Text Dataset | 00:06.589 | 0.0 |
| Train error vs Test error | 00:06.513 | 0.0 |
| Plot the decision surfaces of ensembles of trees on the iris dataset | 00:06.424 | 0.0 |
| Multiclass sparse logistic regression on 20newgroups | 00:06.318 | 0.0 |
| Ability of Gaussian process regression (GPR) to estimate data noise-level | 00:06.278 | 0.0 |
| Classification of text documents using sparse features | 00:06.267 | 0.0 |
| Nested versus non-nested cross-validation | 00:06.172 | 0.0 |
| Release Highlights for scikit-learn 1.2 | 00:06.168 | 0.0 |
| Forecasting of CO2 level on Mona Loa dataset using Gaussian process regression (GPR) | 00:06.106 | 0.0 |
| Segmenting the picture of greek coins in regions | 00:06.074 | 0.0 |
| Categorical Feature Support in Gradient Boosting | 00:06.017 | 0.0 |
| Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture | 00:05.935 | 0.0 |
| Comparing different clustering algorithms on toy datasets | 00:05.853 | 0.0 |
| Manifold Learning methods on a severed sphere | 00:05.743 | 0.0 |
| Visualizing the stock market structure | 00:05.713 | 0.0 |
| SVM Exercise | 00:05.293 | 0.0 |
| Effect of varying threshold for self-training | 00:05.141 | 0.0 |
| Imputing missing values with variants of IterativeImputer | 00:05.027 | 0.0 |
| FeatureHasher and DictVectorizer Comparison | 00:05.027 | 0.0 |
| Permutation Importance vs Random Forest Feature Importance (MDI) | 00:04.889 | 0.0 |
| RBF SVM parameters | 00:04.803 | 0.0 |
| Comparison of kernel ridge and Gaussian process regression | 00:04.750 | 0.0 |
| Permutation Importance with Multicollinear or Correlated Features | 00:04.627 | 0.0 |
| Comparing randomized search and grid search for hyperparameter estimation | 00:04.623 | 0.0 |
| Successive Halving Iterations | 00:04.431 | 0.0 |
| Gaussian process classification (GPC) on iris dataset | 00:04.367 | 0.0 |
| Release Highlights for scikit-learn 1.4 | 00:04.339 | 0.0 |
| Multi-class AdaBoosted Decision Trees | 00:04.312 | 0.0 |
| Kernel Density Estimation | 00:04.266 | 0.0 |
| Model selection with Probabilistic PCA and Factor Analysis (FA) | 00:04.247 | 0.0 |
| OOB Errors for Random Forests | 00:03.641 | 0.0 |
| Online learning of a dictionary of parts of faces | 00:03.497 | 0.0 |
| Recursive feature elimination | 00:03.430 | 0.0 |
| Kernel Density Estimate of Species Distributions | 00:03.409 | 0.0 |
| Early stopping in Gradient Boosting | 00:03.389 | 0.0 |
| Compare BIRCH and MiniBatchKMeans | 00:03.336 | 0.0 |
| Feature discretization | 00:03.235 | 0.0 |
| Comparing anomaly detection algorithms for outlier detection on toy datasets | 00:03.089 | 0.0 |
| Robust vs Empirical covariance estimate | 00:03.030 | 0.0 |
| t-SNE: The effect of various perplexity values on the shape | 00:02.932 | 0.0 |
| Failure of Machine Learning to infer causal effects | 00:02.924 | 0.0 |
| Feature transformations with ensembles of trees | 00:02.864 | 0.0 |
| Compare Stochastic learning strategies for MLPClassifier | 00:02.744 | 0.0 |
| Comparison of Calibration of Classifiers | 00:02.646 | 0.0 |
| Restricted Boltzmann Machine features for digit classification | 00:02.609 | 0.0 |
| Advanced Plotting With Partial Dependence | 00:02.441 | 0.0 |
| Column Transformer with Heterogeneous Data Sources | 00:02.434 | 0.0 |
| Ledoit-Wolf vs OAS estimation | 00:02.400 | 0.0 |
| Inductive Clustering | 00:02.396 | 0.0 |
| Probability Calibration curves | 00:02.284 | 0.0 |
| Vector Quantization Example | 00:02.133 | 0.0 |
| Agglomerative clustering with and without structure | 00:02.113 | 0.0 |
| Classifier comparison | 00:02.110 | 0.0 |
| Probabilistic predictions with Gaussian process classification (GPC) | 00:02.063 | 0.0 |
| Robust linear estimator fitting | 00:02.045 | 0.0 |
| Dimensionality Reduction with Neighborhood Components Analysis | 00:01.931 | 0.0 |
| Comparing different hierarchical linkage methods on toy datasets | 00:01.831 | 0.0 |
| Class Likelihood Ratios to measure classification performance | 00:01.825 | 0.0 |
| Map data to a normal distribution | 00:01.771 | 0.0 |
| Illustration of prior and posterior Gaussian process for different kernels | 00:01.716 | 0.0 |
| Varying regularization in Multi-layer Perceptron | 00:01.699 | 0.0 |
| Explicit feature map approximation for RBF kernels | 00:01.694 | 0.0 |
| Importance of Feature Scaling | 00:01.653 | 0.0 |
| Visualizations with Display Objects | 00:01.648 | 0.0 |
| Face completion with a multi-output estimators | 00:01.634 | 0.0 |
| Various Agglomerative Clustering on a 2D embedding of digits | 00:01.625 | 0.0 |
| Probability Calibration for 3-class classification | 00:01.573 | 0.0 |
| Plot classification probability | 00:01.491 | 0.0 |
| Demo of OPTICS clustering algorithm | 00:01.489 | 0.0 |
| Statistical comparison of models using grid search | 00:01.452 | 0.0 |
| Gaussian Mixture Model Selection | 00:01.439 | 0.0 |
| Gradient Boosting regression | 00:01.439 | 0.0 |
| Effect of transforming the targets in regression model | 00:01.416 | 0.0 |
| Release Highlights for scikit-learn 1.3 | 00:01.321 | 0.0 |
| Empirical evaluation of the impact of k-means initialization | 00:01.294 | 0.0 |
| Plot classification boundaries with different SVM Kernels | 00:01.266 | 0.0 |
| Caching nearest neighbors | 00:01.238 | 0.0 |
| Column Transformer with Mixed Types | 00:01.219 | 0.0 |
| Lasso on dense and sparse data | 00:01.196 | 0.0 |
| Visualizing cross-validation behavior in scikit-learn | 00:01.190 | 0.0 |
| Pipelining: chaining a PCA and a logistic regression | 00:01.156 | 0.0 |
| Pixel importances with a parallel forest of trees | 00:01.144 | 0.0 |
| Demonstration of k-means assumptions | 00:01.130 | 0.0 |
| Single estimator versus bagging: bias-variance decomposition | 00:01.130 | 0.0 |
| Selecting the number of clusters with silhouette analysis on KMeans clustering | 00:01.120 | 0.0 |
| Release Highlights for scikit-learn 0.22 | 00:01.113 | 0.0 |
| Agglomerative clustering with different metrics | 00:01.063 | 0.0 |
| Adjustment for chance in clustering performance evaluation | 00:01.008 | 0.0 |
| Balance model complexity and cross-validated score | 00:01.005 | 0.0 |
| Plot individual and voting regression predictions | 00:00.972 | 0.0 |
| Lasso model selection: AIC-BIC / cross-validation | 00:00.955 | 0.0 |
| Bisecting K-Means and Regular K-Means Performance Comparison | 00:00.941 | 0.0 |
| Feature importances with a forest of trees | 00:00.941 | 0.0 |
| SVM Tie Breaking Example | 00:00.931 | 0.0 |
| Plot the decision surface of decision trees trained on the iris dataset | 00:00.893 | 0.0 |
| Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset | 00:00.887 | 0.0 |
| Release Highlights for scikit-learn 1.1 | 00:00.880 | 0.0 |
| A demo of K-Means clustering on the handwritten digits data | 00:00.806 | 0.0 |
| Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples | 00:00.774 | 0.0 |
| Comparing Nearest Neighbors with and without Neighborhood Components Analysis | 00:00.751 | 0.0 |
| Ridge coefficients as a function of the L2 Regularization | 00:00.720 | 0.0 |
| Release Highlights for scikit-learn 1.5 | 00:00.719 | 0.0 |
| Kernel PCA | 00:00.711 | 0.0 |
| Comparing Linear Bayesian Regressors | 00:00.671 | 0.0 |
| Demonstrating the different strategies of KBinsDiscretizer | 00:00.658 | 0.0 |
| Two-class AdaBoost | 00:00.651 | 0.0 |
| Plot the decision boundaries of a VotingClassifier | 00:00.645 | 0.0 |
| Multiclass Receiver Operating Characteristic (ROC) | 00:00.644 | 0.0 |
| Novelty detection with Local Outlier Factor (LOF) | 00:00.629 | 0.0 |
| Theil-Sen Regression | 00:00.625 | 0.0 |
| Plotting Validation Curves | 00:00.619 | 0.0 |
| GMM Initialization Methods | 00:00.605 | 0.0 |
| Release Highlights for scikit-learn 0.23 | 00:00.592 | 0.0 |
| Quantile regression | 00:00.592 | 0.0 |
| Sparse inverse covariance estimation | 00:00.583 | 0.0 |
| Monotonic Constraints | 00:00.579 | 0.0 |
| Simple 1D Kernel Density Estimation | 00:00.561 | 0.0 |
| Lasso and Elastic Net | 00:00.557 | 0.0 |
| Cross-validation on diabetes Dataset Exercise | 00:00.532 | 0.0 |
| Comparing random forests and the multi-output meta estimator | 00:00.531 | 0.0 |
| L1 Penalty and Sparsity in Logistic Regression | 00:00.531 | 0.0 |
| Spectral clustering for image segmentation | 00:00.530 | 0.0 |
| Principal Component Regression vs Partial Least Squares Regression | 00:00.529 | 0.0 |
| Nearest Neighbors Classification | 00:00.528 | 0.0 |
| Feature agglomeration vs. univariate selection | 00:00.517 | 0.0 |
| Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood | 00:00.515 | 0.0 |
| Gaussian Processes regression: basic introductory example | 00:00.511 | 0.0 |
| Recursive feature elimination with cross-validation | 00:00.510 | 0.0 |
| Gaussian Mixture Model Sine Curve | 00:00.508 | 0.0 |
| Polynomial and Spline interpolation | 00:00.502 | 0.0 |
| Illustration of Gaussian process classification (GPC) on the XOR dataset | 00:00.498 | 0.0 |
| Recognizing hand-written digits | 00:00.497 | 0.0 |
| Post pruning decision trees with cost complexity pruning | 00:00.488 | 0.0 |
| A demo of the Spectral Biclustering algorithm | 00:00.482 | 0.0 |
| SVM: Weighted samples | 00:00.474 | 0.0 |
| Color Quantization using K-Means | 00:00.467 | 0.0 |
| One-Class SVM versus One-Class SVM using Stochastic Gradient Descent | 00:00.465 | 0.0 |
| A demo of structured Ward hierarchical clustering on an image of coins | 00:00.462 | 0.0 |
| Label Propagation digits active learning | 00:00.460 | 0.0 |
| Decision Tree Regression with AdaBoost | 00:00.455 | 0.0 |
| L1-based models for Sparse Signals | 00:00.446 | 0.0 |
| Factor Analysis (with rotation) to visualize patterns | 00:00.444 | 0.0 |
| A demo of the mean-shift clustering algorithm | 00:00.438 | 0.0 |
| IsolationForest example | 00:00.436 | 0.0 |
| Concatenating multiple feature extraction methods | 00:00.421 | 0.0 |
| FastICA on 2D point clouds | 00:00.404 | 0.0 |
| Support Vector Regression (SVR) using linear and non-linear kernels | 00:00.396 | 0.0 |
| Blind source separation using FastICA | 00:00.396 | 0.0 |
| Linear and Quadratic Discriminant Analysis with covariance ellipsoid | 00:00.396 | 0.0 |
| Outlier detection on a real data set | 00:00.383 | 0.0 |
| Precision-Recall | 00:00.377 | 0.0 |
| Hierarchical clustering: structured vs unstructured ward | 00:00.370 | 0.0 |
| Hashing feature transformation using Totally Random Trees | 00:00.350 | 0.0 |
| Sparse coding with a precomputed dictionary | 00:00.349 | 0.0 |
| Plot randomly generated classification dataset | 00:00.345 | 0.0 |
| Plot Ridge coefficients as a function of the regularization | 00:00.341 | 0.0 |
| Demo of affinity propagation clustering algorithm | 00:00.338 | 0.0 |
| A demo of the Spectral Co-Clustering algorithm | 00:00.334 | 0.0 |
| Probability calibration of classifiers | 00:00.333 | 0.0 |
| Plot class probabilities calculated by the VotingClassifier | 00:00.323 | 0.0 |
| K-means Clustering | 00:00.320 | 0.0 |
| SGD: Penalties | 00:00.309 | 0.0 |
| Label Propagation digits: Demonstrating performance | 00:00.306 | 0.0 |
| SVM-Anova: SVM with univariate feature selection | 00:00.300 | 0.0 |
| Incremental PCA | 00:00.293 | 0.0 |
| Robust covariance estimation and Mahalanobis distances relevance | 00:00.286 | 0.0 |
| Target Encoder’s Internal Cross fitting | 00:00.279 | 0.0 |
| Curve Fitting with Bayesian Ridge Regression | 00:00.279 | 0.0 |
| Joint feature selection with multi-task Lasso | 00:00.273 | 0.0 |
| Ordinary Least Squares and Ridge Regression Variance | 00:00.272 | 0.0 |
| Compare cross decomposition methods | 00:00.241 | 0.0 |
| Multi-output Decision Tree Regression | 00:00.241 | 0.0 |
| Comparison of LDA and PCA 2D projection of Iris dataset | 00:00.232 | 0.0 |
| Nearest Neighbors regression | 00:00.226 | 0.0 |
| Gaussian processes on discrete data structures | 00:00.222 | 0.0 |
| Comparison of F-test and mutual information | 00:00.222 | 0.0 |
| Univariate Feature Selection | 00:00.213 | 0.0 |
| Gaussian Mixture Model Ellipsoids | 00:00.209 | 0.0 |
| Using KBinsDiscretizer to discretize continuous features | 00:00.206 | 0.0 |
| Orthogonal Matching Pursuit | 00:00.203 | 0.0 |
| GMM covariances | 00:00.198 | 0.0 |
| Sparsity Example: Fitting only features 1 and 2 | 00:00.194 | 0.0 |
| Plot different SVM classifiers in the iris dataset | 00:00.193 | 0.0 |
| Plot multinomial and One-vs-Rest Logistic Regression | 00:00.192 | 0.0 |
| The Iris Dataset | 00:00.191 | 0.0 |
| Underfitting vs. Overfitting | 00:00.190 | 0.0 |
| Plot the support vectors in LinearSVC | 00:00.183 | 0.0 |
| Plotting Cross-Validated Predictions | 00:00.180 | 0.0 |
| Detection error tradeoff (DET) curve | 00:00.180 | 0.0 |
| Receiver Operating Characteristic (ROC) with cross validation | 00:00.177 | 0.0 |
| Multilabel classification | 00:00.176 | 0.0 |
| Confusion matrix | 00:00.173 | 0.0 |
| Multi-dimensional scaling | 00:00.171 | 0.0 |
| Comparison of the K-Means and MiniBatchKMeans clustering algorithms | 00:00.170 | 0.0 |
| Demo of DBSCAN clustering algorithm | 00:00.165 | 0.0 |
| SVM: Separating hyperplane for unbalanced classes | 00:00.160 | 0.0 |
| Density Estimation for a Gaussian mixture | 00:00.160 | 0.0 |
| Nearest Centroid Classification | 00:00.156 | 0.0 |
| Feature agglomeration | 00:00.153 | 0.0 |
| Neighborhood Components Analysis Illustration | 00:00.150 | 0.0 |
| Iso-probability lines for Gaussian Processes classification (GPC) | 00:00.146 | 0.0 |
| ROC Curve with Visualization API | 00:00.146 | 0.0 |
| Label Propagation learning a complex structure | 00:00.144 | 0.0 |
| One-class SVM with non-linear kernel (RBF) | 00:00.138 | 0.0 |
| Isotonic Regression | 00:00.137 | 0.0 |
| Introducing the set_output API | 00:00.135 | 0.0 |
| Logistic function | 00:00.134 | 0.0 |
| Plot Hierarchical Clustering Dendrogram | 00:00.128 | 0.0 |
| Plot randomly generated multilabel dataset | 00:00.124 | 0.0 |
| SGD: convex loss functions | 00:00.122 | 0.0 |
| Regularization path of L1- Logistic Regression | 00:00.121 | 0.0 |
| Plot multi-class SGD on the iris dataset | 00:00.114 | 0.0 |
| Lasso model selection via information criteria | 00:00.111 | 0.0 |
| HuberRegressor vs Ridge on dataset with strong outliers | 00:00.108 | 0.0 |
| PCA example with Iris Data-set | 00:00.107 | 0.0 |
| Robust linear model estimation using RANSAC | 00:00.101 | 0.0 |
| SVM with custom kernel | 00:00.093 | 0.0 |
| Displaying Pipelines | 00:00.091 | 0.0 |
| Decision Tree Regression | 00:00.091 | 0.0 |
| SGD: Maximum margin separating hyperplane | 00:00.085 | 0.0 |
| Understanding the decision tree structure | 00:00.084 | 0.0 |
| Lasso path using LARS | 00:00.082 | 0.0 |
| Outlier detection with Local Outlier Factor (LOF) | 00:00.079 | 0.0 |
| SGD: Weighted samples | 00:00.079 | 0.0 |
| Digits Classification Exercise | 00:00.067 | 0.0 |
| SVM: Maximum margin separating hyperplane | 00:00.066 | 0.0 |
| Non-negative least squares | 00:00.064 | 0.0 |
| SVM Margins Example | 00:00.063 | 0.0 |
| An example of K-Means++ initialization | 00:00.061 | 0.0 |
| The Digit Dataset | 00:00.056 | 0.0 |
| Logistic Regression 3-class Classifier | 00:00.046 | 0.0 |
| Metadata Routing | 00:00.040 | 0.0 |
| Linear Regression Example | 00:00.036 | 0.0 |
| Displaying estimators and complex pipelines | 00:00.026 | 0.0 |
| Pipeline ANOVA SVM | 00:00.015 | 0.0 |
| Release Highlights for scikit-learn 1.0 | 00:00.015 | 0.0 |
| Wikipedia principal eigenvector | 00:00.000 | 0.0 |
| `__sklearn_is_fitted__` as Developer API | 00:00.000 | 0.0 |
| Approximate nearest neighbors in TSNE | 00:00.000 | 0.0 |