.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/ensemble/plot_feature_transformation.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_ensemble_plot_feature_transformation.py:

================================================
Feature transformations with ensembles of trees
================================================

Transform your features into a higher-dimensional, sparse space, then train a
linear model on these features.

First fit an ensemble of trees (totally random trees, a random forest, or
gradient boosted trees) on the training set. Then each leaf of each tree in the
ensemble is assigned a fixed arbitrary feature index in a new feature space.
These leaf indices are then encoded in a one-hot fashion.

Each sample goes through the decisions of each tree of the ensemble and ends up
in one leaf per tree. The sample is encoded by setting the feature values for
these leaves to 1 and the other feature values to 0.

The resulting transformer has then learned a supervised, sparse,
high-dimensional categorical embedding of the data.

.. GENERATED FROM PYTHON SOURCE LINES 22-28

.. code-block:: Python

    # Author: Tim Head
    #
    # License: BSD 3 clause

.. GENERATED FROM PYTHON SOURCE LINES 29-38

First, we will create a large dataset and split it into three sets:

- a set to train the ensemble methods which are later used as a feature
  engineering transformer;
- a set to train the linear model;
- a set to test the linear model.

It is important to split the data in this way to avoid overfitting through
data leakage.

.. GENERATED FROM PYTHON SOURCE LINES 38-51

.. code-block:: Python

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=80_000, random_state=10)

    X_full_train, X_test, y_full_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=10
    )
    X_train_ensemble, X_train_linear, y_train_ensemble, y_train_linear = train_test_split(
        X_full_train, y_full_train, test_size=0.5, random_state=10
    )

.. GENERATED FROM PYTHON SOURCE LINES 52-54

For each of the ensemble methods, we will use 10 estimators and a maximum
depth of 3 levels.

.. GENERATED FROM PYTHON SOURCE LINES 54-58

.. code-block:: Python

    n_estimators = 10
    max_depth = 3

.. GENERATED FROM PYTHON SOURCE LINES 59-61

We start by training the random forest and the gradient boosting model on the
dedicated training set.

.. GENERATED FROM PYTHON SOURCE LINES 61-74

.. code-block:: Python

    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

    random_forest = RandomForestClassifier(
        n_estimators=n_estimators, max_depth=max_depth, random_state=10
    )
    random_forest.fit(X_train_ensemble, y_train_ensemble)

    gradient_boosting = GradientBoostingClassifier(
        n_estimators=n_estimators, max_depth=max_depth, random_state=10
    )
    _ = gradient_boosting.fit(X_train_ensemble, y_train_ensemble)

.. GENERATED FROM PYTHON SOURCE LINES 75-82

Notice that :class:`~sklearn.ensemble.HistGradientBoostingClassifier` is much
faster than :class:`~sklearn.ensemble.GradientBoostingClassifier` starting
with intermediate datasets (`n_samples >= 10_000`), which is not the case for
the present example.
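To get a feel for the speed difference, a minimal timing sketch along the
following lines could be used. This is an illustration we add here, not part
of the generated example; both classes accept `max_depth` and `random_state`,
and each is fitted with its default number of boosting iterations.

.. code-block:: Python

    # Illustrative timing comparison (our addition): fit both gradient
    # boosting implementations on the ensemble training set and print the
    # measured wall-clock time for each.
    from time import perf_counter

    from sklearn.ensemble import HistGradientBoostingClassifier

    for Model in (GradientBoostingClassifier, HistGradientBoostingClassifier):
        model = Model(max_depth=max_depth, random_state=10)
        tic = perf_counter()
        model.fit(X_train_ensemble, y_train_ensemble)
        print(f"{Model.__name__}: {perf_counter() - tic:.2f} s")
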
The :class:`~sklearn.ensemble.RandomTreesEmbedding` is an unsupervised method
and thus does not need to be trained separately beforehand.

.. GENERATED FROM PYTHON SOURCE LINES 82-89

.. code-block:: Python

    from sklearn.ensemble import RandomTreesEmbedding

    random_tree_embedding = RandomTreesEmbedding(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0
    )

.. GENERATED FROM PYTHON SOURCE LINES 90-95

Now, we will create three pipelines that use the above embedding as a
preprocessing stage.

The random trees embedding can be directly pipelined with the logistic
regression because it is a standard scikit-learn transformer.

.. GENERATED FROM PYTHON SOURCE LINES 95-102

.. code-block:: Python

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rt_model = make_pipeline(random_tree_embedding, LogisticRegression(max_iter=1000))
    rt_model.fit(X_train_linear, y_train_linear)
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Pipeline(steps=[('randomtreesembedding',
                     RandomTreesEmbedding(max_depth=3, n_estimators=10,
                                          random_state=0)),
                    ('logisticregression', LogisticRegression(max_iter=1000))])

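To make the "sparse, high-dimensional" claim tangible, a quick sketch (our
addition, not part of the generated example) inspects the output of the fitted
embedding step:

.. code-block:: Python

    # Slice off the final LogisticRegression to keep only the transformer part.
    embedding = rt_model[:-1].transform(X_train_linear)
    print(embedding.shape)   # (20000, n_leaves_total), at most 2**max_depth leaves/tree
    print(embedding[0].nnz)  # exactly one active leaf per tree, i.e. 10
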
.. GENERATED FROM PYTHON SOURCE LINES 103-107

Then, we can pipeline the random forest or the gradient boosting with a
logistic regression. However, the feature transformation happens by calling
the method `apply`, whereas scikit-learn pipelines expect a call to
`transform`. Therefore, we wrap the call to `apply` within a
`FunctionTransformer`.

.. GENERATED FROM PYTHON SOURCE LINES 107-125

.. code-block:: Python

    from sklearn.preprocessing import FunctionTransformer, OneHotEncoder


    def rf_apply(X, model):
        return model.apply(X)


    rf_leaves_yielder = FunctionTransformer(rf_apply, kw_args={"model": random_forest})

    rf_model = make_pipeline(
        rf_leaves_yielder,
        OneHotEncoder(handle_unknown="ignore"),
        LogisticRegression(max_iter=1000),
    )
    rf_model.fit(X_train_linear, y_train_linear)
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Pipeline(steps=[('functiontransformer',
                     FunctionTransformer(func=<function rf_apply at 0x7f39f04cd0d0>,
                                         kw_args={'model': RandomForestClassifier(max_depth=3,
                                                                                  n_estimators=10,
                                                                                  random_state=10)})),
                    ('onehotencoder', OneHotEncoder(handle_unknown='ignore')),
                    ('logisticregression', LogisticRegression(max_iter=1000))])

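The wrapped `apply` call returns one leaf index per tree for each sample. A
quick sketch (our addition, not part of the generated example) confirms the
shape:

.. code-block:: Python

    # Leaf indices for the first five samples: one column per tree in the forest.
    leaves = random_forest.apply(X_train_linear[:5])
    print(leaves.shape)  # (5, 10) == (n_samples, n_estimators)
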
.. GENERATED FROM PYTHON SOURCE LINES 126-141

.. code-block:: Python

    def gbdt_apply(X, model):
        # `apply` returns leaves with shape (n_samples, n_estimators, n_classes);
        # for binary classification the last axis has size 1, so we drop it.
        return model.apply(X)[:, :, 0]


    gbdt_leaves_yielder = FunctionTransformer(
        gbdt_apply, kw_args={"model": gradient_boosting}
    )

    gbdt_model = make_pipeline(
        gbdt_leaves_yielder,
        OneHotEncoder(handle_unknown="ignore"),
        LogisticRegression(max_iter=1000),
    )
    gbdt_model.fit(X_train_linear, y_train_linear)
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Pipeline(steps=[('functiontransformer',
                     FunctionTransformer(func=<function gbdt_apply at 0x7f39f091c670>,
                                         kw_args={'model': GradientBoostingClassifier(n_estimators=10,
                                                                                      random_state=10)})),
                    ('onehotencoder', OneHotEncoder(handle_unknown='ignore')),
                    ('logisticregression', LogisticRegression(max_iter=1000))])

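The `[:, :, 0]` slicing in `gbdt_apply` can be verified with a quick sketch
(our addition, not part of the generated example): for binary classification,
gradient boosting builds one tree per boosting stage, hence a trailing axis of
size 1.

.. code-block:: Python

    raw_leaves = gradient_boosting.apply(X_train_linear[:5])
    print(raw_leaves.shape)  # (5, 10, 1) == (n_samples, n_estimators, 1)
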
.. GENERATED FROM PYTHON SOURCE LINES 142-143

We can finally show the different ROC curves for all the models.

.. GENERATED FROM PYTHON SOURCE LINES 143-165

.. code-block:: Python

    import matplotlib.pyplot as plt

    from sklearn.metrics import RocCurveDisplay

    _, ax = plt.subplots()

    models = [
        ("RT embedding -> LR", rt_model),
        ("RF", random_forest),
        ("RF embedding -> LR", rf_model),
        ("GBDT", gradient_boosting),
        ("GBDT embedding -> LR", gbdt_model),
    ]

    model_displays = {}
    for name, pipeline in models:
        model_displays[name] = RocCurveDisplay.from_estimator(
            pipeline, X_test, y_test, ax=ax, name=name
        )
    _ = ax.set_title("ROC curve")

.. image-sg:: /auto_examples/ensemble/images/sphx_glr_plot_feature_transformation_001.png
   :alt: ROC curve
   :srcset: /auto_examples/ensemble/images/sphx_glr_plot_feature_transformation_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 166-173

.. code-block:: Python

    _, ax = plt.subplots()
    for name, pipeline in models:
        model_displays[name].plot(ax=ax)

    ax.set_xlim(0, 0.2)
    ax.set_ylim(0.8, 1)
    _ = ax.set_title("ROC curve (zoomed in at top left)")

.. image-sg:: /auto_examples/ensemble/images/sphx_glr_plot_feature_transformation_002.png
   :alt: ROC curve (zoomed in at top left)
   :srcset: /auto_examples/ensemble/images/sphx_glr_plot_feature_transformation_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 2.755 seconds)
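As a supplementary numeric summary (our addition, not part of the generated
example), the ROC AUC of each model on the test set can be printed directly:

.. code-block:: Python

    from sklearn.metrics import roc_auc_score

    for name, model in models:
        proba = model.predict_proba(X_test)[:, 1]
        print(f"{name}: ROC AUC = {roc_auc_score(y_test, proba):.4f}")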