Examples¶
Release Highlights¶
These examples illustrate the main features of the releases of scikit-learn.
Release Highlights for scikit-learn 1.3
Release Highlights for scikit-learn 1.2
Release Highlights for scikit-learn 1.1
Release Highlights for scikit-learn 1.0
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.22
Biclustering¶
Examples concerning biclustering techniques.
A demo of the Spectral Biclustering algorithm
A demo of the Spectral Co-Clustering algorithm
Biclustering documents with the Spectral Co-clustering algorithm
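Not one of the gallery entries, just a minimal sketch of the spectral co-clustering call these examples are built around, here run on synthetic planted biclusters:
    from sklearn.cluster import SpectralCoclustering
    from sklearn.datasets import make_biclusters

    # Synthetic 300 x 300 matrix with 5 planted biclusters.
    data, rows, cols = make_biclusters(shape=(300, 300), n_clusters=5, noise=5, random_state=0)

    model = SpectralCoclustering(n_clusters=5, random_state=0).fit(data)
    print(model.row_labels_[:10], model.column_labels_[:10])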
Calibration¶
Examples illustrating the calibration of predicted probabilities of classifiers.
Comparison of Calibration of Classifiers
Probability Calibration curves
Probability Calibration for 3-class classification
Probability calibration of classifiers
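As a rough sketch of what these calibration examples start from (not taken from any of them), wrapping an uncalibrated margin classifier in CalibratedClassifierCV yields calibrated probabilities:
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # LinearSVC has no predict_proba; sigmoid calibration adds one.
    clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5).fit(X_train, y_train)
    print(clf.predict_proba(X_test[:5]))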
Classification¶
General examples about classification algorithms.
Linear and Quadratic Discriminant Analysis with covariance ellipsoid
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification
Plot classification probability
Recognizing hand-written digits
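A minimal sketch of the discriminant-analysis classifiers featured above, fit on the iris data (illustrative only):
    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import (
        LinearDiscriminantAnalysis,
        QuadraticDiscriminantAnalysis,
    )

    X, y = load_iris(return_X_y=True)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    qda = QuadraticDiscriminantAnalysis().fit(X, y)
    print(lda.score(X, y), qda.score(X, y))  # training accuracy of each model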
Clustering¶
Examples concerning the sklearn.cluster module.
A demo of K-Means clustering on the handwritten digits data
A demo of structured Ward hierarchical clustering on an image of coins
A demo of the mean-shift clustering algorithm
Adjustment for chance in clustering performance evaluation
Agglomerative clustering with and without structure
Agglomerative clustering with different metrics
An example of K-Means++ initialization
Bisecting K-Means and Regular K-Means Performance Comparison
Color Quantization using K-Means
Compare BIRCH and MiniBatchKMeans
Comparing different clustering algorithms on toy datasets
Comparing different hierarchical linkage methods on toy datasets
Comparison of the K-Means and MiniBatchKMeans clustering algorithms
Demo of DBSCAN clustering algorithm
Demo of HDBSCAN clustering algorithm
Demo of OPTICS clustering algorithm
Demo of affinity propagation clustering algorithm
Demonstration of k-means assumptions
Empirical evaluation of the impact of k-means initialization
Feature agglomeration vs. univariate selection
Hierarchical clustering: structured vs unstructured ward
Online learning of a dictionary of parts of faces
Plot Hierarchical Clustering Dendrogram
Segmenting the picture of greek coins in regions
Selecting the number of clusters with silhouette analysis on KMeans clustering
Spectral clustering for image segmentation
Various Agglomerative Clustering on a 2D embedding of digits
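For orientation, a minimal sklearn.cluster sketch (synthetic blobs rather than any of the datasets above): fit K-Means and score the partition with a silhouette coefficient:
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    km = KMeans(n_clusters=4, n_init=10, random_state=0)
    labels = km.fit_predict(X)
    print(silhouette_score(X, labels))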
Covariance estimation¶
Examples concerning the sklearn.covariance module.
Robust covariance estimation and Mahalanobis distances relevance
Robust vs Empirical covariance estimate
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
Sparse inverse covariance estimation
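A minimal sketch, on random Gaussian data, of the empirical vs. shrunk estimators compared in these examples:
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, LedoitWolf

    rng = np.random.RandomState(0)
    X = rng.normal(size=(60, 20))  # few samples relative to the dimension

    emp = EmpiricalCovariance().fit(X)
    lw = LedoitWolf().fit(X)
    # Shrinkage yields a better-conditioned covariance estimate.
    print(lw.shrinkage_, np.linalg.cond(emp.covariance_), np.linalg.cond(lw.covariance_))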
Cross decomposition¶
Examples concerning the sklearn.cross_decomposition module.
Compare cross decomposition methods
Principal Component Regression vs Partial Least Squares Regression
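A minimal PLS sketch on synthetic data (not drawn from the examples above):
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.RandomState(0)
    X = rng.normal(size=(100, 10))
    Y = X[:, :2] @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(100, 3))

    pls = PLSRegression(n_components=2).fit(X, Y)
    print(pls.predict(X).shape, pls.x_weights_.shape)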
Dataset examples¶
Examples concerning the sklearn.datasets module.
Plot randomly generated classification dataset
Plot randomly generated multilabel dataset
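The two plots above are driven by generators like these; a minimal sketch:
    from sklearn.datasets import make_classification, make_multilabel_classification

    X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
    Xm, Ym = make_multilabel_classification(n_samples=200, n_classes=5, random_state=0)
    print(X.shape, y.shape, Ym.shape)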
Decision Trees¶
Examples concerning the sklearn.tree module.
Multi-output Decision Tree Regression
Plot the decision surface of decision trees trained on the iris dataset
Post pruning decision trees with cost complexity pruning
Understanding the decision tree structure
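A minimal sklearn.tree sketch on iris, using cost-complexity pruning and the textual export to inspect the learned splits (illustrative only):
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=0).fit(X, y)
    print(export_text(tree))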
Decomposition¶
Examples concerning the sklearn.decomposition module.
Blind source separation using FastICA
Comparison of LDA and PCA 2D projection of Iris dataset
Factor Analysis (with rotation) to visualize patterns
Image denoising using dictionary learning
Model selection with Probabilistic PCA and Factor Analysis (FA)
PCA example with Iris Data-set
Principal components analysis (PCA)
Sparse coding with a precomputed dictionary
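A minimal PCA sketch on the iris features, the kind of projection several of the examples above visualize:
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)
    pca = PCA(n_components=2).fit(X)
    print(pca.explained_variance_ratio_, pca.transform(X).shape)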
Developing Estimators¶
Examples concerning the development of custom estimators.
__sklearn_is_fitted__ as Developer API
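A minimal sketch of the hook that example covers: a hypothetical estimator that reports its fitted state through __sklearn_is_fitted__ so that check_is_fitted works on it:
    from sklearn.base import BaseEstimator, ClassifierMixin
    from sklearn.utils.validation import check_is_fitted

    class TemplateClassifier(ClassifierMixin, BaseEstimator):
        """Hypothetical estimator illustrating the __sklearn_is_fitted__ hook."""

        def fit(self, X, y):
            self._is_fitted = True  # custom flag instead of a trailing-underscore attribute
            return self

        def predict(self, X):
            check_is_fitted(self)  # consults __sklearn_is_fitted__ below
            return [0] * len(X)

        def __sklearn_is_fitted__(self):
            return getattr(self, "_is_fitted", False)

    print(TemplateClassifier().fit([[0], [1]], [0, 1]).predict([[2]]))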
Ensemble methods¶
Examples concerning the sklearn.ensemble module.
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Comparing Random Forests and Histogram Gradient Boosting models
Comparing random forests and the multi-output meta estimator
Decision Tree Regression with AdaBoost
Early stopping of Gradient Boosting
Feature importances with a forest of trees
Feature transformations with ensembles of trees
Gradient Boosting Out-of-Bag estimates
Gradient Boosting regularization
Hashing feature transformation using Totally Random Trees
Multi-class AdaBoosted Decision Trees
Pixel importances with a parallel forest of trees
Plot class probabilities calculated by the VotingClassifier
Plot individual and voting regression predictions
Plot the decision boundaries of a VotingClassifier
Plot the decision surfaces of ensembles of trees on the iris dataset
Prediction Intervals for Gradient Boosting Regression
Single estimator versus bagging: bias-variance decomposition
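For orientation, a minimal sklearn.ensemble sketch comparing a random forest with histogram gradient boosting by cross-validation on synthetic data:
    from sklearn.datasets import make_classification
    from sklearn.ensemble import HistGradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, random_state=0)
    for model in (RandomForestClassifier(random_state=0),
                  HistGradientBoostingClassifier(random_state=0)):
        print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())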
Examples based on real world datasets¶
Applications to real-world problems with medium-sized datasets or an interactive user interface.
Compressive sensing: tomography reconstruction with L1 prior (Lasso)
Faces recognition example using eigenfaces and SVMs
Image denoising using kernel PCA
Out-of-core classification of text documents
Outlier detection on a real data set
Time-related feature engineering
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
Visualizing the stock market structure
Wikipedia principal eigenvector
Feature Selection¶
Examples concerning the sklearn.feature_selection module.
Comparison of F-test and mutual information
Model-based and sequential feature selection
Recursive feature elimination with cross-validation
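A minimal sketch of cross-validated recursive feature elimination on synthetic data with a handful of informative features:
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFECV
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
    selector = RFECV(LogisticRegression(max_iter=1000), cv=5).fit(X, y)
    print(selector.n_features_, selector.support_)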
Gaussian Mixture Models¶
Examples concerning the sklearn.mixture module.
Concentration Prior Type Analysis of Variational Bayesian Gaussian Mixture
Density Estimation for a Gaussian mixture
Gaussian Mixture Model Ellipsoids
Gaussian Mixture Model Selection
Gaussian Mixture Model Sine Curve
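A minimal sklearn.mixture sketch: fit Gaussian mixtures of increasing size to two synthetic blobs and pick the component count by BIC, as the model-selection example does in more detail:
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-3, 1, size=(200, 2)), rng.normal(3, 1, size=(200, 2))])

    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 5)}
    print(min(bics, key=bics.get), bics)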
Gaussian Process for Machine Learning¶
Examples concerning the sklearn.gaussian_process module.
Ability of Gaussian process regression (GPR) to estimate data noise-level
Comparison of kernel ridge and Gaussian process regression
Forecasting of CO2 level on Mauna Loa dataset using Gaussian process regression (GPR)
Gaussian Processes regression: basic introductory example
Gaussian process classification (GPC) on iris dataset
Gaussian processes on discrete data structures
Illustration of Gaussian process classification (GPC) on the XOR dataset
Illustration of prior and posterior Gaussian process for different kernels
Iso-probability lines for Gaussian Processes classification (GPC)
Probabilistic predictions with Gaussian process classification (GPC)
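A minimal GPR sketch on noisy synthetic data, using an RBF plus white-noise kernel of the kind these examples tune:
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 10, size=(50, 1))
    y = np.sin(X).ravel() + 0.2 * rng.normal(size=50)

    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)
    mean, std = gpr.predict([[5.0]], return_std=True)
    print(mean, std, gpr.kernel_)  # kernel_ holds the fitted hyperparameters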
Generalized Linear Models¶
Examples concerning the sklearn.linear_model module.
Comparing Linear Bayesian Regressors
Comparing various online solvers
Curve Fitting with Bayesian Ridge Regression
Early stopping of Stochastic Gradient Descent
Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples
HuberRegressor vs Ridge on dataset with strong outliers
Joint feature selection with multi-task Lasso
L1 Penalty and Sparsity in Logistic Regression
L1-based models for Sparse Signals
Lasso model selection via information criteria
Lasso model selection: AIC-BIC / cross-validation
Lasso on dense and sparse data
Logistic Regression 3-class Classifier
MNIST classification using multinomial logistic + L1
Multiclass sparse logistic regression on 20newsgroups
One-Class SVM versus One-Class SVM using Stochastic Gradient Descent
Ordinary Least Squares and Ridge Regression Variance
Plot Ridge coefficients as a function of the regularization
Plot multi-class SGD on the iris dataset
Plot multinomial and One-vs-Rest Logistic Regression
Poisson regression and non-normal loss
Polynomial and Spline interpolation
Regularization path of L1-Logistic Regression
Ridge coefficients as a function of the L2 Regularization
Robust linear estimator fitting
Robust linear model estimation using RANSAC
SGD: Maximum margin separating hyperplane
Sparsity Example: Fitting only features 1 and 2
Tweedie regression on insurance claims
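As a small taste of sklearn.linear_model (synthetic data, not any example above): a cross-validated Lasso recovers a sparse coefficient vector:
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV

    X, y = make_regression(n_samples=200, n_features=50, n_informative=5, noise=10, random_state=0)
    lasso = LassoCV(cv=5).fit(X, y)
    print(lasso.alpha_, np.sum(lasso.coef_ != 0))  # chosen penalty and nonzero coefficients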
Inspection¶
Examples related to the sklearn.inspection module.
Common pitfalls in the interpretation of coefficients of linear models
Failure of Machine Learning to infer causal effects
Partial Dependence and Individual Conditional Expectation Plots
Permutation Importance vs Random Forest Feature Importance (MDI)
Permutation Importance with Multicollinear or Correlated Features
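A minimal permutation-importance sketch on the diabetes data, the core call behind several of the examples above:
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    print(result.importances_mean)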
Kernel Approximation¶
Examples concerning the sklearn.kernel_approximation module.
Scalable learning with polynomial kernel approximation
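The listed example uses a polynomial kernel approximation; as a generic sketch of the same pattern (an explicit approximate kernel map feeding a linear model), here with the Nystroem RBF approximation instead:
    from sklearn.datasets import make_classification
    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=1000, random_state=0)
    model = make_pipeline(Nystroem(kernel="rbf", n_components=100, random_state=0),
                          SGDClassifier(random_state=0))
    print(model.fit(X, y).score(X, y))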
Manifold learning¶
Examples concerning the sklearn.manifold module.
Comparison of Manifold Learning methods
Manifold Learning methods on a severed sphere
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
Swiss Roll And Swiss-Hole Reduction
t-SNE: The effect of various perplexity values on the shape
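A minimal t-SNE sketch on the digits data, the sort of 2D embedding most of these examples plot:
    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X, _ = load_digits(return_X_y=True)
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
    print(emb.shape)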
Miscellaneous¶
Miscellaneous and introductory examples for scikit-learn.
Advanced Plotting With Partial Dependence
Comparing anomaly detection algorithms for outlier detection on toy datasets
Comparison of kernel ridge regression and SVR
Displaying estimators and complex pipelines
Evaluation of outlier detection estimators
Explicit feature map approximation for RBF kernels
Face completion with multi-output estimators
Introducing the set_output API
ROC Curve with Visualization API
The Johnson-Lindenstrauss bound for embedding with random projections
Visualizations with Display Objects
Missing Value Imputation¶
Examples concerning the sklearn.impute module.
Imputing missing values before building an estimator
Imputing missing values with variants of IterativeImputer
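A minimal sklearn.impute sketch on a tiny array with missing entries; note that IterativeImputer still needs the explicit experimental import:
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer, SimpleImputer

    X = np.array([[1.0, 2.0], [3.0, np.nan], [np.nan, 6.0], [8.0, 8.0]])
    print(SimpleImputer(strategy="mean").fit_transform(X))
    print(IterativeImputer(random_state=0).fit_transform(X))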
Model Selection¶
Examples related to the sklearn.model_selection module.
Balance model complexity and cross-validated score
Class Likelihood Ratios to measure classification performance
Comparing randomized search and grid search for hyperparameter estimation
Comparison between grid search and successive halving
Custom refit strategy of a grid search with cross-validation
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV
Detection error tradeoff (DET) curve
Multiclass Receiver Operating Characteristic (ROC)
Nested versus non-nested cross-validation
Plotting Cross-Validated Predictions
Plotting Learning Curves and Checking Models’ Scalability
Receiver Operating Characteristic (ROC) with cross validation
Sample pipeline for text feature extraction and evaluation
Statistical comparison of models using grid search
Test with permutations the significance of a classification score
Visualizing cross-validation behavior in scikit-learn
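A minimal grid-search sketch on the digits data, the basic pattern most of these examples elaborate on:
    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001]}
    search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
    print(search.best_params_, search.best_score_)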
Multioutput methods¶
Examples concerning the sklearn.multioutput module.
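A minimal sklearn.multioutput sketch (synthetic data): wrap a single-output regressor so it fits one model per target:
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.multioutput import MultiOutputRegressor

    X, Y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)
    model = MultiOutputRegressor(Ridge()).fit(X, Y)
    print(model.predict(X[:2]).shape)  # one prediction per target column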
Nearest Neighbors¶
Examples concerning the sklearn.neighbors module.
Approximate nearest neighbors in TSNE
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
Dimensionality Reduction with Neighborhood Components Analysis
Kernel Density Estimate of Species Distributions
Nearest Centroid Classification
Nearest Neighbors Classification
Neighborhood Components Analysis Illustration
Novelty detection with Local Outlier Factor (LOF)
Outlier detection with Local Outlier Factor (LOF)
Simple 1D Kernel Density Estimation
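A minimal k-nearest-neighbors classification sketch on iris, purely for orientation:
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(knn.score(X_test, y_test))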
Neural Networks¶
Examples concerning the sklearn.neural_network module.
Compare Stochastic learning strategies for MLPClassifier
Restricted Boltzmann Machine features for digit classification
Varying regularization in Multi-layer Perceptron
Visualization of MLP weights on MNIST
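A minimal MLP sketch on the digits data, scaling the inputs first as the MLP examples recommend:
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    mlp = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(64,), alpha=1e-3,
                                      max_iter=300, random_state=0))
    print(mlp.fit(X_train, y_train).score(X_test, y_test))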
Pipelines and composite estimators¶
Examples of how to compose transformers and pipelines from other estimators. See the User Guide.
Column Transformer with Heterogeneous Data Sources
Column Transformer with Mixed Types
Concatenating multiple feature extraction methods
Effect of transforming the targets in regression model
Pipelining: chaining a PCA and a logistic regression
Selecting dimensionality reduction with Pipeline and GridSearchCV
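The basic composition these examples build on, in one minimal sketch: chain a scaler and a classifier, then cross-validate the whole pipeline:
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print(cross_val_score(pipe, X, y, cv=5).mean())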
Preprocessing¶
Examples concerning the sklearn.preprocessing module.
Compare the effect of different scalers on data with outliers
Comparing Target Encoder with Other Encoders
Demonstrating the different strategies of KBinsDiscretizer
Map data to a normal distribution
Target Encoder’s Internal Cross fitting
Using KBinsDiscretizer to discretize continuous features
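A minimal sketch of two of the transformers above applied to a skewed synthetic feature:
    import numpy as np
    from sklearn.preprocessing import KBinsDiscretizer, StandardScaler

    rng = np.random.RandomState(0)
    X = rng.exponential(size=(100, 1))  # skewed feature with occasional large values

    print(StandardScaler().fit_transform(X)[:3].ravel())
    print(KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile").fit_transform(X)[:3].ravel())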
Semi Supervised Classification¶
Examples concerning the sklearn.semi_supervised module.
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset
Effect of varying threshold for self-training
Label Propagation digits active learning
Label Propagation digits: Demonstrating performance
Label Propagation learning a complex structure
Semi-supervised Classification on a Text Dataset
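A minimal self-training sketch on iris with most labels hidden (marked -1, the convention used by sklearn.semi_supervised):
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    rng = np.random.RandomState(0)
    y_partial = y.copy()
    y_partial[rng.rand(len(y)) < 0.7] = -1  # hide roughly 70% of the labels

    clf = SelfTrainingClassifier(SVC(probability=True, random_state=0)).fit(X, y_partial)
    print((clf.predict(X) == y).mean())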
Support Vector Machines¶
Examples concerning the sklearn.svm module.
One-class SVM with non-linear kernel (RBF)
Plot classification boundaries with different SVM Kernels
Plot different SVM classifiers in the iris dataset
Plot the support vectors in LinearSVC
SVM-Anova: SVM with univariate feature selection
SVM: Maximum margin separating hyperplane
SVM: Separating hyperplane for unbalanced classes
Scaling the regularization parameter for SVCs
Support Vector Regression (SVR) using linear and non-linear kernels
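A minimal SVC sketch on two synthetic blobs; the support vectors it exposes are what several of the plots above draw:
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=100, centers=2, random_state=0)
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    print(clf.support_vectors_.shape)  # points defining the maximum-margin hyperplane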
Tutorial exercises¶
Exercises for the tutorials.
Cross-validation on Digits Dataset Exercise
Cross-validation on diabetes Dataset Exercise
Digits Classification Exercise
Working with text documents¶
Examples concerning the sklearn.feature_extraction.text module.
Classification of text documents using sparse features
Clustering text documents using k-means
FeatureHasher and DictVectorizer Comparison
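The examples above mostly work on the 20 newsgroups corpus; as a self-contained sketch of the same vectorize-then-model pattern, here on a toy corpus instead:
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["the cat sat on the mat", "dogs chase cats",
            "stock prices fell sharply", "markets rallied on earnings"]
    labels = [0, 0, 1, 1]  # 0 = animal text, 1 = finance text

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(docs, labels)
    print(clf.predict(["the dog sat quietly", "prices rallied"]))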