sklearn.kernel_approximation.RBFSampler

class sklearn.kernel_approximation.RBFSampler(gamma=1.0, n_components=100, random_state=None)[source]

Approximates the feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform.

It implements a variant of Random Kitchen Sinks [1].

Read more in the User Guide.

Parameters
gamma : float, default=1.0

Parameter of the RBF kernel: exp(-gamma * ||x - y||^2).

n_components : int, default=100

Number of Monte Carlo samples per original feature. Equals the dimensionality of the computed feature space.

random_state : int, RandomState instance or None, default=None

Controls the sampling of the random weights and offsets. If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
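As a minimal sketch of what random_state controls (the data and variable names below are illustrative, not part of the API), fixing the seed makes the sampled feature map reproducible across separate estimator instances:

>>> import numpy as np
>>> from sklearn.kernel_approximation import RBFSampler
>>> X_demo = np.array([[0., 0.], [1., 1.]])
>>> Z1 = RBFSampler(random_state=7).fit_transform(X_demo)
>>> Z2 = RBFSampler(random_state=7).fit_transform(X_demo)
>>> np.allclose(Z1, Z2)  # same seed, same sampled projection
True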

Notes

See “Random Features for Large-Scale Kernel Machines” by A. Rahimi and B. Recht.

[1] A. Rahimi and B. Recht, “Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning”, NIPS 2008. (https://people.eecs.berkeley.edu/~brecht/papers/08.rah.rec.nips.pdf)
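The random Fourier feature construction from these papers means that inner products of the transformed features approximate the exact RBF kernel. A minimal sketch of this property (illustrative data; rbf_kernel from sklearn.metrics.pairwise is used only for comparison, and the error threshold is a rough, not exact, bound):

>>> import numpy as np
>>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> rng = np.random.RandomState(0)
>>> X_demo = rng.randn(30, 4)
>>> K_exact = rbf_kernel(X_demo, gamma=0.5)          # exact RBF kernel matrix
>>> Z = RBFSampler(gamma=0.5, n_components=2000,
...                random_state=0).fit_transform(X_demo)
>>> K_approx = Z @ Z.T                               # inner products in the approximate feature space
>>> float(np.abs(K_exact - K_approx).mean()) < 0.05  # Monte Carlo error shrinks with n_components
True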

Examples

>>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> rbf_feature = RBFSampler(gamma=1, random_state=1)
>>> X_features = rbf_feature.fit_transform(X)
>>> clf = SGDClassifier(max_iter=5, tol=1e-3)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=5)
>>> clf.score(X_features, y)
1.0
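As a follow-up sketch, the same approximation can be chained with the linear model in a Pipeline (make_pipeline is the standard scikit-learn helper; X and y are reused from the example above):

>>> from sklearn.pipeline import make_pipeline
>>> pipe = make_pipeline(RBFSampler(gamma=1, random_state=1),
...                      SGDClassifier(max_iter=5, tol=1e-3))
>>> pipe = pipe.fit(X, y)  # assignment keeps the doctest output minimal
>>> pipe.score(X, y)
1.0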

Methods

fit(self, X[, y])

Fit the model with X.

fit_transform(self, X[, y])

Fit to data, then transform it.

get_params(self[, deep])

Get parameters for this estimator.

set_params(self, \*\*params)

Set the parameters of this estimator.

transform(self, X)

Apply the approximate feature map to X.

__init__(self, gamma=1.0, n_components=100, random_state=None)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(self, X, y=None)[source]

Fit the model with X.

Samples the random projection according to n_features.

Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

Returns
self : object

Returns the transformer.
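A minimal sketch of what fit samples (the data is illustrative; the fitted attribute names random_weights_ and random_offset_ match recent scikit-learn releases and may differ in older versions):

>>> import numpy as np
>>> from sklearn.kernel_approximation import RBFSampler
>>> X_fit = np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])
>>> sampler = RBFSampler(n_components=100, random_state=0).fit(X_fit)
>>> sampler.random_weights_.shape   # (n_features, n_components)
(2, 100)
>>> sampler.random_offset_.shape    # (n_components,)
(100,)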

fit_transform(self, X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

**fit_params : dict

Additional fit parameters.

Returns
X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.
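A minimal sketch (illustrative data): with a fixed random_state, fit_transform gives the same result as calling fit followed by transform on the same data:

>>> import numpy as np
>>> from sklearn.kernel_approximation import RBFSampler
>>> X_demo = np.array([[0., 0.], [1., 1.]])
>>> Z_a = RBFSampler(random_state=3).fit_transform(X_demo)
>>> Z_b = RBFSampler(random_state=3).fit(X_demo).transform(X_demo)
>>> np.allclose(Z_a, Z_b)
True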

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : mapping of string to any

Parameter names mapped to their values.
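A minimal sketch of the returned mapping for this estimator (the exact dict shown assumes scikit-learn's usual alphabetical listing of constructor parameters):

>>> from sklearn.kernel_approximation import RBFSampler
>>> RBFSampler(gamma=0.5, n_components=20).get_params()
{'gamma': 0.5, 'n_components': 20, 'random_state': None}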

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params : dict

Estimator parameters.

Returns
self : object

Estimator instance.
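A minimal sketch of both forms (the step name 'rbfsampler' comes from make_pipeline's default lower-cased class naming, an assumption about the surrounding scikit-learn API rather than anything set here):

>>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.pipeline import make_pipeline
>>> sampler = RBFSampler().set_params(gamma=2.0, n_components=50)
>>> sampler.gamma, sampler.n_components
(2.0, 50)
>>> pipe = make_pipeline(RBFSampler(), SGDClassifier())
>>> pipe = pipe.set_params(rbfsampler__gamma=0.1)  # nested <component>__<parameter> form
>>> pipe.named_steps['rbfsampler'].gamma
0.1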

transform(self, X)[source]

Apply the approximate feature map to X.

Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

Returns
X_new : array-like, shape (n_samples, n_components)
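A minimal sketch (illustrative data): transform reuses the projection sampled during fit, so new data with the same number of features is always mapped to n_components columns:

>>> import numpy as np
>>> from sklearn.kernel_approximation import RBFSampler
>>> rng = np.random.RandomState(0)
>>> sampler = RBFSampler(n_components=30, random_state=0).fit(rng.randn(10, 4))
>>> sampler.transform(rng.randn(3, 4)).shape
(3, 30)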
