# Computation times
19:36.220 total execution time for 282 files from all galleries:
Example | Time | Mem (MB)
---|---|---
Comparing Random Forests and Histogram Gradient Boosting models | 00:58.358 | 0.0
Evaluation of outlier detection estimators | 00:46.449 | 0.0
Selecting dimensionality reduction with Pipeline and GridSearchCV | 00:44.280 | 0.0
Model-based and sequential feature selection | 00:31.115 | 0.0
Post-hoc tuning the cut-off point of decision function | 00:30.988 | 0.0
Sample pipeline for text feature extraction and evaluation | 00:30.580 | 0.0
Plotting Learning Curves and Checking Models’ Scalability | 00:28.600 | 0.0
Image denoising using dictionary learning | 00:26.395 | 0.0
Post-tuning the decision threshold for cost-sensitive learning | 00:26.243 | 0.0
Combine predictors using stacking | 00:24.967 | 0.0
Comparing Target Encoder with Other Encoders | 00:22.986 | 0.0
Overview of multiclass training meta-estimators | 00:22.598 | 0.0
Early stopping of Stochastic Gradient Descent | 00:21.775 | 0.0
Partial Dependence and Individual Conditional Expectation Plots | 00:21.266 | 0.0
Features in Histogram Gradient Boosting Trees | 00:19.701 | 0.0
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap… | 00:19.357 | 0.0
Scalable learning with polynomial kernel approximation | 00:18.938 | 0.0
Poisson regression and non-normal loss | 00:18.646 | 0.0
Prediction Latency | 00:16.940 | 0.0
Scaling the regularization parameter for SVCs | 00:16.784 | 0.0
Swiss Roll And Swiss-Hole Reduction | 00:16.744 | 0.0
Comparison of Manifold Learning methods | 00:16.234 | 0.0
Test with permutations the significance of a classification score | 00:13.678 | 0.0
Demo of HDBSCAN clustering algorithm | 00:13.639 | 0.0
Release Highlights for scikit-learn 0.24 | 00:12.894 | 0.0
Common pitfalls in the interpretation of coefficients of linear models | 00:12.364 | 0.0
Time-related feature engineering | 00:12.039 | 0.0
The Johnson-Lindenstrauss bound for embedding with random projections | 00:11.163 | 0.0
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation | 00:10.490 | 0.0
Custom refit strategy of a grid search with cross-validation | 00:09.787 | 0.0
Imputing missing values before building an estimator | 00:09.595 | 0.0
Compressive sensing: tomography reconstruction with L1 prior (Lasso) | 00:09.542 | 0.0
Prediction Intervals for Gradient Boosting Regression | 00:09.207 | 0.0
Gradient Boosting Out-of-Bag estimates | 00:09.097 | 0.0
Comparison of kernel ridge regression and SVR | 00:08.866 | 0.0
Lagged features for time series forecasting | 00:08.615 | 0.0
Gradient Boosting regularization | 00:08.286 | 0.0
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV | 00:08.283 | 0.0
Out-of-core classification of text documents | 00:08.120 | 0.0
Compare the effect of different scalers on data with outliers | 00:08.032 | 0.0
Comparing various online solvers | 00:07.976 | 0.0
Visualizing the stock market structure | 00:07.933 | 0.0
Faces dataset decompositions | 00:07.801 | 0.0
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification | 00:07.741 | 0.0
Image denoising using kernel PCA | 00:07.722 | 0.0
MNIST classification using multinomial logistic + L1 | 00:07.662 | 0.0
Visualization of MLP weights on MNIST | 00:07.445 | 0.0
Comparison between grid search and successive halving | 00:07.435 | 0.0
Manifold Learning methods on a severed sphere | 00:07.406 | 0.0
Tweedie regression on insurance claims | 00:07.239 | 0.0
Clustering text documents using k-means | 00:07.026 | 0.0
Nested versus non-nested cross-validation | 00:06.560 | 0.0
Semi-supervised Classification on a Text Dataset | 00:06.549 | 0.0
Classification of text documents using sparse features | 00:06.410 | 0.0
Plot the decision surfaces of ensembles of trees on the iris dataset | 00:06.222 | 0.0
Comparing different clustering algorithms on toy datasets | 00:06.193 | 0.0
Imputing missing values with variants of IterativeImputer | 00:06.111 | 0.0
Species distribution modeling | 00:06.089 | 0.0
Faces recognition example using eigenfaces and SVMs | 00:06.078 | 0.0
Biclustering documents with the Spectral Co-clustering algorithm | 00:05.779 | 0.0
Segmenting the picture of greek coins in regions | 00:05.575 | 0.0
Ability of Gaussian process regression (GPR) to estimate data noise-level | 00:05.508 | 0.0
Multiclass sparse logistic regression on 20newgroups | 00:05.332 | 0.0
Effect of model regularization on training and test error | 00:05.283 | 0.0
Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture | 00:05.281 | 0.0
SVM Exercise | 00:05.280 | 0.0
Effect of varying threshold for self-training | 00:05.249 | 0.0
Successive Halving Iterations | 00:05.215 | 0.0
FeatureHasher and DictVectorizer Comparison | 00:04.933 | 0.0
RBF SVM parameters | 00:04.913 | 0.0
Release Highlights for scikit-learn 1.2 | 00:04.907 | 0.0
Model Complexity Influence | 00:04.733 | 0.0
Kernel Density Estimation | 00:04.662 | 0.0
Comparing randomized search and grid search for hyperparameter estimation | 00:04.526 | 0.0
Comparison of kernel ridge and Gaussian process regression | 00:04.475 | 0.0
Multi-class AdaBoosted Decision Trees | 00:04.389 | 0.0
Forecasting of CO2 level on Mona Loa dataset using Gaussian process regression (GPR) | 00:04.166 | 0.0
Permutation Importance with Multicollinear or Correlated Features | 00:04.142 | 0.0
Permutation Importance vs Random Forest Feature Importance (MDI) | 00:04.033 | 0.0
Compare BIRCH and MiniBatchKMeans | 00:03.861 | 0.0
Early stopping in Gradient Boosting | 00:03.702 | 0.0
OOB Errors for Random Forests | 00:03.700 | 0.0
Kernel Density Estimate of Species Distributions | 00:03.699 | 0.0
Categorical Feature Support in Gradient Boosting | 00:03.584 | 0.0
Feature discretization | 00:03.487 | 0.0
Model selection with Probabilistic PCA and Factor Analysis (FA) | 00:03.231 | 0.0
Comparing anomaly detection algorithms for outlier detection on toy datasets | 00:03.214 | 0.0
Compare Stochastic learning strategies for MLPClassifier | 00:03.111 | 0.0
t-SNE: The effect of various perplexity values on the shape | 00:03.068 | 0.0
Restricted Boltzmann Machine features for digit classification | 00:03.055 | 0.0
Principal Component Analysis (PCA) on Iris Dataset | 00:03.044 | 0.0
Gaussian process classification (GPC) on iris dataset | 00:02.985 | 0.0
Robust vs Empirical covariance estimate | 00:02.942 | 0.0
Recursive feature elimination | 00:02.864 | 0.0
Feature transformations with ensembles of trees | 00:02.817 | 0.0
Comparison of Calibration of Classifiers | 00:02.740 | 0.0
Column Transformer with Heterogeneous Data Sources | 00:02.632 | 0.0
Advanced Plotting With Partial Dependence | 00:02.472 | 0.0
Multilabel classification using a classifier chain | 00:02.426 | 0.0
Ledoit-Wolf vs OAS estimation | 00:02.411 | 0.0
Failure of Machine Learning to infer causal effects | 00:02.406 | 0.0
Online learning of a dictionary of parts of faces | 00:02.266 | 0.0
Probability Calibration curves | 00:02.168 | 0.0
Dimensionality Reduction with Neighborhood Components Analysis | 00:02.159 | 0.0
Release Highlights for scikit-learn 1.4 | 00:02.158 | 0.0
Classifier comparison | 00:02.150 | 0.0
Probabilistic predictions with Gaussian process classification (GPC) | 00:02.101 | 0.0
Vector Quantization Example | 00:02.076 | 0.0
Varying regularization in Multi-layer Perceptron | 00:01.993 | 0.0
Map data to a normal distribution | 00:01.990 | 0.0
Class Likelihood Ratios to measure classification performance | 00:01.985 | 0.0
Inductive Clustering | 00:01.934 | 0.0
Agglomerative clustering with and without structure | 00:01.852 | 0.0
Comparing different hierarchical linkage methods on toy datasets | 00:01.821 | 0.0
Statistical comparison of models using grid search | 00:01.731 | 0.0
Importance of Feature Scaling | 00:01.720 | 0.0
Robust linear estimator fitting | 00:01.713 | 0.0
Face completion with a multi-output estimators | 00:01.647 | 0.0
Explicit feature map approximation for RBF kernels | 00:01.600 | 0.0
Demo of OPTICS clustering algorithm | 00:01.560 | 0.0
Effect of transforming the targets in regression model | 00:01.548 | 0.0
Caching nearest neighbors | 00:01.510 | 0.0
Probability Calibration for 3-class classification | 00:01.509 | 0.0
Illustration of prior and posterior Gaussian process for different kernels | 00:01.507 | 0.0
Gradient Boosting regression | 00:01.459 | 0.0
Various Agglomerative Clustering on a 2D embedding of digits | 00:01.436 | 0.0
Plot classification probability | 00:01.360 | 0.0
Visualizing cross-validation behavior in scikit-learn | 00:01.319 | 0.0
Column Transformer with Mixed Types | 00:01.304 | 0.0
Release Highlights for scikit-learn 1.3 | 00:01.295 | 0.0
Plot classification boundaries with different SVM Kernels | 00:01.262 | 0.0
Empirical evaluation of the impact of k-means initialization | 00:01.255 | 0.0
Release Highlights for scikit-learn 0.22 | 00:01.226 | 0.0
Balance model complexity and cross-validated score | 00:01.210 | 0.0
Gaussian Mixture Model Selection | 00:01.173 | 0.0
Lasso on dense and sparse data | 00:01.164 | 0.0
Pipelining: chaining a PCA and a logistic regression | 00:01.159 | 0.0
Demonstration of k-means assumptions | 00:01.127 | 0.0
Single estimator versus bagging: bias-variance decomposition | 00:01.111 | 0.0
Selecting the number of clusters with silhouette analysis on KMeans clustering | 00:01.058 | 0.0
Feature importances with a forest of trees | 00:01.057 | 0.0
Agglomerative clustering with different metrics | 00:01.035 | 0.0
Plot individual and voting regression predictions | 00:00.966 | 0.0
Adjustment for chance in clustering performance evaluation | 00:00.964 | 0.0
Release Highlights for scikit-learn 1.1 | 00:00.957 | 0.0
Comparing Nearest Neighbors with and without Neighborhood Components Analysis | 00:00.951 | 0.0
Bisecting K-Means and Regular K-Means Performance Comparison | 00:00.937 | 0.0
SVM Tie Breaking Example | 00:00.903 | 0.0
Lasso model selection: AIC-BIC / cross-validation | 00:00.880 | 0.0
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset | 00:00.878 | 0.0
Plot the decision surface of decision trees trained on the iris dataset | 00:00.850 | 0.0
Release Highlights for scikit-learn 1.5 | 00:00.802 | 0.0
Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples | 00:00.788 | 0.0
Lasso, Lasso-LARS, and Elastic Net paths | 00:00.783 | 0.0
A demo of K-Means clustering on the handwritten digits data | 00:00.750 | 0.0
Novelty detection with Local Outlier Factor (LOF) | 00:00.727 | 0.0
Multiclass Receiver Operating Characteristic (ROC) | 00:00.667 | 0.0
Two-class AdaBoost | 00:00.655 | 0.0
Ridge coefficients as a function of the L2 Regularization | 00:00.653 | 0.0
Plot the decision boundaries of a VotingClassifier | 00:00.635 | 0.0
Demonstrating the different strategies of KBinsDiscretizer | 00:00.633 | 0.0
Decision Boundaries of Multinomial and One-vs-Rest Logistic Regression | 00:00.612 | 0.0
Simple 1D Kernel Density Estimation | 00:00.609 | 0.0
Comparing Linear Bayesian Regressors | 00:00.607 | 0.0
Kernel PCA | 00:00.599 | 0.0
Nearest Neighbors Classification | 00:00.584 | 0.0
Comparing random forests and the multi-output meta estimator | 00:00.557 | 0.0
GMM Initialization Methods | 00:00.541 | 0.0
Release Highlights for scikit-learn 0.23 | 00:00.538 | 0.0
Cross-validation on diabetes Dataset Exercise | 00:00.536 | 0.0
Theil-Sen Regression | 00:00.532 | 0.0
Principal Component Regression vs Partial Least Squares Regression | 00:00.528 | 0.0
Monotonic Constraints | 00:00.518 | 0.0
A demo of the Spectral Biclustering algorithm | 00:00.516 | 0.0
Quantile regression | 00:00.503 | 0.0
Decision Tree Regression with AdaBoost | 00:00.502 | 0.0
Label Propagation digits active learning | 00:00.491 | 0.0
Concatenating multiple feature extraction methods | 00:00.485 | 0.0
IsolationForest example | 00:00.474 | 0.0
SVM: Weighted samples | 00:00.474 | 0.0
Sparse inverse covariance estimation | 00:00.470 | 0.0
Gaussian Processes regression: basic introductory example | 00:00.463 | 0.0
Spectral clustering for image segmentation | 00:00.456 | 0.0
L1 Penalty and Sparsity in Logistic Regression | 00:00.455 | 0.0
Feature agglomeration vs. univariate selection | 00:00.453 | 0.0
Linear and Quadratic Discriminant Analysis with covariance ellipsoid | 00:00.447 | 0.0
Recursive feature elimination with cross-validation | 00:00.442 | 0.0
Illustration of Gaussian process classification (GPC) on the XOR dataset | 00:00.441 | 0.0
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood | 00:00.434 | 0.0
Post pruning decision trees with cost complexity pruning | 00:00.432 | 0.0
Recognizing hand-written digits | 00:00.420 | 0.0
Factor Analysis (with rotation) to visualize patterns | 00:00.420 | 0.0
Polynomial and Spline interpolation | 00:00.408 | 0.0
Gaussian Mixture Model Sine Curve | 00:00.408 | 0.0
L1-based models for Sparse Signals | 00:00.405 | 0.0
A demo of the mean-shift clustering algorithm | 00:00.401 | 0.0
Support Vector Regression (SVR) using linear and non-linear kernels | 00:00.397 | 0.0
Precision-Recall | 00:00.383 | 0.0
One-Class SVM versus One-Class SVM using Stochastic Gradient Descent | 00:00.378 | 0.0
FastICA on 2D point clouds | 00:00.376 | 0.0
Hashing feature transformation using Totally Random Trees | 00:00.359 | 0.0
Outlier detection on a real data set | 00:00.359 | 0.0
Blind source separation using FastICA | 00:00.353 | 0.0
Hierarchical clustering: structured vs unstructured ward | 00:00.341 | 0.0
Plot Ridge coefficients as a function of the regularization | 00:00.338 | 0.0
Visualizations with Display Objects | 00:00.336 | 0.0
A demo of structured Ward hierarchical clustering on an image of coins | 00:00.331 | 0.0
Probability calibration of classifiers | 00:00.330 | 0.0
Plot class probabilities calculated by the VotingClassifier | 00:00.327 | 0.0
Demo of affinity propagation clustering algorithm | 00:00.323 | 0.0
Label Propagation digits: Demonstrating performance | 00:00.322 | 0.0
A demo of the Spectral Co-Clustering algorithm | 00:00.321 | 0.0
SVM-Anova: SVM with univariate feature selection | 00:00.318 | 0.0
Decision Tree Regression | 00:00.307 | 0.0
Curve Fitting with Bayesian Ridge Regression | 00:00.292 | 0.0
Robust covariance estimation and Mahalanobis distances relevance | 00:00.291 | 0.0
Target Encoder’s Internal Cross fitting | 00:00.278 | 0.0
Sparse coding with a precomputed dictionary | 00:00.267 | 0.0
Nearest Neighbors regression | 00:00.265 | 0.0
SGD: Penalties | 00:00.260 | 0.0
Ordinary Least Squares and Ridge Regression Variance | 00:00.259 | 0.0
Joint feature selection with multi-task Lasso | 00:00.234 | 0.0
Incremental PCA | 00:00.227 | 0.0
Underfitting vs. Overfitting | 00:00.216 | 0.0
Comparison of F-test and mutual information | 00:00.213 | 0.0
Using KBinsDiscretizer to discretize continuous features | 00:00.212 | 0.0
Gaussian processes on discrete data structures | 00:00.209 | 0.0
Detection error tradeoff (DET) curve | 00:00.200 | 0.0
Compare cross decomposition methods | 00:00.198 | 0.0
Plot different SVM classifiers in the iris dataset | 00:00.195 | 0.0
Gaussian Mixture Model Ellipsoids | 00:00.188 | 0.0
Comparison of LDA and PCA 2D projection of Iris dataset | 00:00.188 | 0.0
Orthogonal Matching Pursuit | 00:00.187 | 0.0
GMM covariances | 00:00.186 | 0.0
Receiver Operating Characteristic (ROC) with cross validation | 00:00.183 | 0.0
Plotting Cross-Validated Predictions | 00:00.183 | 0.0
Plot the support vectors in LinearSVC | 00:00.180 | 0.0
Multi-dimensional scaling | 00:00.179 | 0.0
Univariate Feature Selection | 00:00.177 | 0.0
Multilabel classification | 00:00.177 | 0.0
Comparison of the K-Means and MiniBatchKMeans clustering algorithms | 00:00.174 | 0.0
Nearest Centroid Classification | 00:00.172 | 0.0
Confusion matrix | 00:00.171 | 0.0
Demo of DBSCAN clustering algorithm | 00:00.163 | 0.0
Neighborhood Components Analysis Illustration | 00:00.162 | 0.0
SVM: Separating hyperplane for unbalanced classes | 00:00.159 | 0.0
Label Propagation learning a complex structure | 00:00.148 | 0.0
ROC Curve with Visualization API | 00:00.147 | 0.0
Ordinary Least Squares Example | 00:00.146 | 0.0
Introducing the set_output API | 00:00.137 | 0.0
Isotonic Regression | 00:00.136 | 0.0
Plot randomly generated multilabel dataset | 00:00.133 | 0.0
Feature agglomeration | 00:00.131 | 0.0
One-class SVM with non-linear kernel (RBF) | 00:00.131 | 0.0
Iso-probability lines for Gaussian Processes classification (GPC) | 00:00.131 | 0.0
Density Estimation for a Gaussian mixture | 00:00.115 | 0.0
Logistic function | 00:00.108 | 0.0
Plot multi-class SGD on the iris dataset | 00:00.108 | 0.0
Regularization path of L1- Logistic Regression | 00:00.105 | 0.0
HuberRegressor vs Ridge on dataset with strong outliers | 00:00.099 | 0.0
Displaying Pipelines | 00:00.098 | 0.0
Plot Hierarchical Clustering Dendrogram | 00:00.096 | 0.0
Lasso model selection via information criteria | 00:00.092 | 0.0
SGD: convex loss functions | 00:00.092 | 0.0
SVM with custom kernel | 00:00.089 | 0.0
Robust linear model estimation using RANSAC | 00:00.089 | 0.0
Understanding the decision tree structure | 00:00.084 | 0.0
Outlier detection with Local Outlier Factor (LOF) | 00:00.081 | 0.0
SGD: Weighted samples | 00:00.071 | 0.0
SGD: Maximum margin separating hyperplane | 00:00.068 | 0.0
Digits Classification Exercise | 00:00.067 | 0.0
SVM Margins Example | 00:00.065 | 0.0
SVM: Maximum margin separating hyperplane | 00:00.065 | 0.0
Non-negative least squares | 00:00.063 | 0.0
An example of K-Means++ initialization | 00:00.061 | 0.0
Metadata Routing | 00:00.040 | 0.0
Displaying estimators and complex pipelines | 00:00.027 | 0.0
Release Highlights for scikit-learn 1.0 | 00:00.017 | 0.0
Pipeline ANOVA SVM | 00:00.012 | 0.0
Wikipedia principal eigenvector | 00:00.000 | 0.0
`__sklearn_is_fitted__` as Developer API | 00:00.000 | 0.0
Approximate nearest neighbors in TSNE | 00:00.000 | 0.0