sklearn.pipeline.make_pipeline

sklearn.pipeline.make_pipeline(*steps, memory=None, verbose=False)
Construct a Pipeline from the given estimators.

This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowercase of their types automatically.

Parameters:
*steps : list of Estimator objects
    List of the scikit-learn estimators that are chained together.

memory : str or object with the joblib.Memory interface, default=None
    Used to cache the fitted transformers of the pipeline. The last step will never be cached, even if it is a transformer. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming. A caching sketch follows the Examples section below.

verbose : bool, default=False
    If True, the time elapsed while fitting each step will be printed as it is completed.
Returns:

p : Pipeline
    Returns a scikit-learn Pipeline object.
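Because make_pipeline is only a naming shorthand, the returned object is an ordinary Pipeline whose step names are the lowercased class names. A minimal sketch of the equivalence (the estimators chosen here are illustrative):

>>> from sklearn.pipeline import Pipeline, make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> short = make_pipeline(StandardScaler(), SVC())
>>> explicit = Pipeline(steps=[('standardscaler', StandardScaler()), ('svc', SVC())])
>>> isinstance(short, Pipeline)
True
>>> [name for name, _ in short.steps] == [name for name, _ in explicit.steps]
True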
See also
Pipeline
    Class for creating a pipeline of transforms with a final estimator.
Examples
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.pipeline import make_pipeline
>>> make_pipeline(StandardScaler(), GaussianNB(priors=None))
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('gaussiannb', GaussianNB())])
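As a sketch of the memory parameter described above, the snippet below caches the fitted transformer in a temporary directory and then inspects the fitted scaler through named_steps, since the transformer instance passed to the pipeline is cloned once caching is enabled (the temporary directory and the estimators are illustrative choices, not requirements):

>>> import tempfile
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> X, y = make_classification(random_state=0)
>>> cachedir = tempfile.mkdtemp()  # any writable directory path works
>>> pipe = make_pipeline(StandardScaler(), LogisticRegression(), memory=cachedir)
>>> _ = pipe.fit(X, y)  # the fitted StandardScaler is cached in cachedir
>>> pipe.named_steps['standardscaler'].mean_.shape  # inspect via named_steps, not the original instance
(20,)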
Examples using sklearn.pipeline.make_pipeline
Release Highlights for scikit-learn 1.2
Release Highlights for scikit-learn 1.1
Release Highlights for scikit-learn 1.0
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.22
A demo of K-Means clustering on the handwritten digits data
Principal Component Regression vs Partial Least Squares Regression
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Feature transformations with ensembles of trees
Time-related feature engineering
Model-based and sequential feature selection
Comparing Linear Bayesian Regressors
Lasso model selection via information criteria
Lasso model selection: AIC-BIC / cross-validation
One-Class SVM versus One-Class SVM using Stochastic Gradient Descent
Poisson regression and non-normal loss
Polynomial and Spline interpolation
Robust linear estimator fitting
Tweedie regression on insurance claims
Common pitfalls in the interpretation of coefficients of linear models
Partial Dependence and Individual Conditional Expectation Plots
Scalable learning with polynomial kernel approximation
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
Advanced Plotting With Partial Dependence
Comparing anomaly detection algorithms for outlier detection on toy datasets
Displaying estimators and complex pipelines
Evaluation of outlier detection estimators
Introducing the set_output API
Visualizations with Display Objects
Imputing missing values before building an estimator
Imputing missing values with variants of IterativeImputer
Detection error tradeoff (DET) curve
Approximate nearest neighbors in TSNE
Dimensionality Reduction with Neighborhood Components Analysis
Varying regularization in Multi-layer Perceptron
Comparing Target Encoder with Other Encoders
Target Encoder’s Internal Cross fitting
Clustering text documents using k-means