sklearn.feature_selection.r_regression(X, y, *, center=True, force_finite=True)

Compute Pearson’s r between each feature and the target.

Pearson’s r is also known as the Pearson correlation coefficient.

Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free-standing feature selection procedure.

The cross correlation between each regressor and the target is computed as:

E[(X[:, i] - mean(X[:, i])) * (y - mean(y))] / (std(X[:, i]) * std(y))

For more on usage see the User Guide.

Added in version 1.0.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The data matrix.

y : array-like of shape (n_samples,)
    The target vector.

center : bool, default=True
    Whether or not to center the data matrix X and the target vector y. By default, X and y will be centered.

force_finite : bool, default=True
    Whether or not to force the Pearson’s R correlation to be finite. In the particular case where some features in X or the target y are constant, the Pearson’s R correlation is not defined. When force_finite=False, a correlation of np.nan is returned to acknowledge this case. When force_finite=True, this value will be forced to a minimal correlation of 0.0.

    Added in version 1.1.

Returns:

correlation_coefficient : ndarray of shape (n_features,)
    Pearson’s R correlation coefficients of features.

See also

f_regression
    Univariate linear regression tests returning f-statistic and p-values.

mutual_info_regression
    Mutual information for a continuous target.

f_classif
    ANOVA F-value between label/feature for classification tasks.

chi2
    Chi-squared stats of non-negative features for classification tasks.


Examples

>>> from sklearn.datasets import make_regression
>>> from sklearn.feature_selection import r_regression
>>> X, y = make_regression(
...     n_samples=50, n_features=3, n_informative=1, noise=1e-4, random_state=42
... )
>>> r_regression(X, y)
array([-0.15...,  1.        , -0.22...])