sklearn.feature_selection.SequentialFeatureSelector

class sklearn.feature_selection.SequentialFeatureSelector(estimator, *, n_features_to_select='warn', tol=None, direction='forward', scoring=None, cv=5, n_jobs=None)[source]

Transformer that performs Sequential Feature Selection.

This Sequential Feature Selector adds (forward selection) or removes (backward selection) features to form a feature subset in a greedy fashion. At each stage, this estimator chooses the best feature to add or remove based on the cross-validation score of an estimator. In the case of unsupervised learning, this Sequential Feature Selector looks only at the features (X), not the desired outputs (y).

Read more in the User Guide.

New in version 0.24.

Parameters:
estimator : estimator instance

An unfitted estimator.

n_features_to_select : "auto", int or float, default='warn'

If "auto", the behaviour depends on the tol parameter:

  • if tol is not None, then features are selected until the score improvement does not exceed tol.

  • otherwise, half of the features are selected.

If integer, the parameter is the absolute number of features to select. If float between 0 and 1, it is the fraction of features to select.

New in version 1.1: added the "auto" option.

Deprecated since version 1.1: The default changed from None to "warn" in 1.1 and will become "auto" in 1.3. None and "warn" will be removed in 1.3. To keep the same behaviour as None, set n_features_to_select="auto" and tol=None.

tol : float, default=None

If the score is not incremented by at least tol between two consecutive feature additions or removals, stop adding or removing. tol is enabled only when n_features_to_select is "auto".

New in version 1.1.
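A minimal sketch of the "auto" stopping rule, reusing the iris/KNN setup from the example below; the tol value is arbitrary and for illustration only:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> # With "auto", selection stops once adding another feature improves the
>>> # cross-validated score by less than tol.
>>> sfs = SequentialFeatureSelector(
...     KNeighborsClassifier(n_neighbors=3),
...     n_features_to_select="auto",
...     tol=0.01,
... )
>>> n_kept = sfs.fit(X, y).n_features_to_select_  # data-dependent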

direction : {'forward', 'backward'}, default='forward'

Whether to perform forward selection or backward selection.
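For instance, backward selection starts from the full feature set and greedily removes features. A small sketch, again with iris and KNN; the target of 2 features is arbitrary:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> # Backward selection: start with all 4 features, greedily drop one at a time
>>> sfs_backward = SequentialFeatureSelector(
...     KNeighborsClassifier(n_neighbors=3),
...     n_features_to_select=2,
...     direction="backward",
... )
>>> sfs_backward.fit(X, y).transform(X).shape
(150, 2)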

scoring : str, callable, list/tuple or dict, default=None

A single str (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set.

NOTE that when using custom scorers, each scorer should return a single value. Metric functions returning a list/array of values can be wrapped into multiple scorers that return one value each.

If None, the estimator’s score method is used.
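As an illustration, scoring can name a built-in metric or wrap a metric function into a single-valued scorer; the metrics chosen here are arbitrary examples:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.metrics import make_scorer, f1_score
>>> X, y = load_iris(return_X_y=True)
>>> # A named metric ...
>>> sfs = SequentialFeatureSelector(
...     KNeighborsClassifier(n_neighbors=3),
...     n_features_to_select=2,
...     scoring="balanced_accuracy",
... )
>>> # ... or a callable scorer that returns a single value per evaluation
>>> macro_f1 = make_scorer(f1_score, average="macro")
>>> sfs_custom = SequentialFeatureSelector(
...     KNeighborsClassifier(n_neighbors=3),
...     n_features_to_select=2,
...     scoring=macro_f1,
... )
>>> sfs_custom.fit(X, y).transform(X).shape
(150, 2)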

cv : int, cross-validation generator or an iterable, default=5

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default 5-fold cross validation,

  • integer, to specify the number of folds in a (Stratified)KFold,

  • CV splitter,

  • An iterable yielding (train, test) splits as arrays of indices.

For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

Refer to the User Guide for the various cross-validation strategies that can be used here.

n_jobs : int, default=None

Number of jobs to run in parallel. When evaluating a new feature to add or remove, the cross-validation procedure is parallel over the folds. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
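A sketch combining an explicit splitter with parallel fold evaluation; the splitter settings and n_jobs value are illustrative, not recommendations:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.model_selection import StratifiedKFold
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> cv = StratifiedKFold(n_splits=3)
>>> sfs = SequentialFeatureSelector(
...     KNeighborsClassifier(n_neighbors=3),
...     n_features_to_select=2,
...     cv=cv,      # explicit splitter instead of the default 5-fold
...     n_jobs=-1,  # evaluate the folds for each candidate feature in parallel
... )
>>> sfs.fit(X, y).transform(X).shape
(150, 2)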

Attributes:
n_features_in_ : int

Number of features seen during fit. Only defined if the underlying estimator exposes such an attribute when fit.

New in version 0.24.

feature_names_in_ : ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

New in version 1.0.

n_features_to_select_ : int

The number of features that were selected.

support_ : ndarray of shape (n_features,), dtype=bool

The mask of selected features.

See also

GenericUnivariateSelect

Univariate feature selector with configurable strategy.

RFE

Recursive feature elimination based on importance weights.

RFECV

Recursive feature elimination based on importance weights, with automatic selection of the number of features.

SelectFromModel

Feature selection based on thresholds of importance weights.

Examples

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> knn = KNeighborsClassifier(n_neighbors=3)
>>> sfs = SequentialFeatureSelector(knn, n_features_to_select=3)
>>> sfs.fit(X, y)
SequentialFeatureSelector(estimator=KNeighborsClassifier(n_neighbors=3),
                          n_features_to_select=3)
>>> sfs.get_support()
array([ True, False,  True,  True])
>>> sfs.transform(X).shape
(150, 3)
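Continuing the session above, an illustrative sketch of using the selector as one step of a Pipeline, so that feature selection and the downstream classifier are fit together:

>>> from sklearn.pipeline import make_pipeline
>>> pipe = make_pipeline(
...     SequentialFeatureSelector(knn, n_features_to_select=3),
...     KNeighborsClassifier(n_neighbors=3),
... )
>>> _ = pipe.fit(X, y)
>>> pipe[-1].n_features_in_  # the final classifier sees only the 3 kept features
3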

Methods

fit(X[, y])

Learn the features to select from X.

fit_transform(X[, y])

Fit to data, then transform it.

get_feature_names_out([input_features])

Mask feature names according to selected features.

get_params([deep])

Get parameters for this estimator.

get_support([indices])

Get a mask, or integer index, of the features selected.

inverse_transform(X)

Reverse the transformation operation.

set_params(**params)

Set the parameters of this estimator.

transform(X)

Reduce X to the selected features.

fit(X, y=None)[source]

Learn the features to select from X.

Parameters:
X : array-like of shape (n_samples, n_features)

Training vectors, where n_samples is the number of samples and n_features is the number of predictors.

y : array-like of shape (n_samples,), default=None

Target values. This parameter may be ignored for unsupervised learning.

Returns:
self : object

Returns the instance itself.

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.
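For example (a minimal sketch), fit_transform is equivalent to calling fit followed by transform:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
...                                 n_features_to_select=2)
>>> X_new = sfs.fit_transform(X, y)  # same result as sfs.fit(X, y).transform(X)
>>> X_new.shape
(150, 2)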

get_feature_names_out(input_features=None)[source]

Mask feature names according to selected features.

Parameters:
input_features : array-like of str or None, default=None

Input features.

  • If input_features is None, then feature_names_in_ is used as feature names in. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_ - 1)"].

  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns:
feature_names_out : ndarray of str objects

Transformed feature names.
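A small sketch using the iris feature names; which names come back depends on the features selected, so only the count is shown:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True, as_frame=True)  # DataFrame input sets feature_names_in_
>>> sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
...                                 n_features_to_select=2).fit(X, y)
>>> names = sfs.get_feature_names_out()  # names of the 2 selected columns
>>> len(names)
2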

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_support(indices=False)[source]

Get a mask, or integer index, of the features selected.

Parameters:
indices : bool, default=False

If True, the return value will be an array of integers, rather than a boolean mask.

Returns:
support : array

An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
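For example, the same selection can be read back either as a boolean mask or as integer column indices (a minimal sketch):

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
...                                 n_features_to_select=2).fit(X, y)
>>> mask = sfs.get_support()             # boolean mask of length n_features
>>> idx = sfs.get_support(indices=True)  # integer indices of the kept columns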

inverse_transform(X)[source]

Reverse the transformation operation.

Parameters:
X : array of shape [n_samples, n_selected_features]

The input samples.

Returns:
X_r : array of shape [n_samples, n_original_features]

X with columns of zeros inserted where features would have been removed by transform.
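A sketch of the round trip: transform drops the unselected columns, and inverse_transform restores the original width, filling the dropped columns with zeros:

>>> from sklearn.feature_selection import SequentialFeatureSelector
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
...                                 n_features_to_select=2).fit(X, y)
>>> X_sel = sfs.transform(X)               # (150, 2): only the selected features
>>> X_back = sfs.inverse_transform(X_sel)  # zeros inserted for dropped columns
>>> X_back.shape
(150, 4)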

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

transform(X)[source]

Reduce X to the selected features.

Parameters:
X : array of shape [n_samples, n_features]

The input samples.

Returns:
X_r : array of shape [n_samples, n_selected_features]

The input samples with only the selected features.

Examples using sklearn.feature_selection.SequentialFeatureSelector

Release Highlights for scikit-learn 0.24

Model-based and sequential feature selection