sklearn.cross_decomposition.CCA¶
- class sklearn.cross_decomposition.CCA(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True)[source]¶
Canonical Correlation Analysis, also known as “Mode B” PLS.
Read more in the User Guide.
- Parameters:
- n_components : int, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
- scale : bool, default=True
Whether to scale X and Y.
- max_iter : int, default=500
The maximum number of iterations of the power method.
- tol : float, default=1e-06
The tolerance used as convergence criterion in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
- copy : bool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done in place, modifying both arrays.
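A minimal sketch of how these constructor parameters fit together, assuming a small toy dataset (the same one used in the Examples section further down); the iteration count shown is illustrative, not exact:
>>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [3., 5., 4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1, scale=True, max_iter=1000, tol=1e-09)
>>> cca = cca.fit(X, Y)   # copy=True (default) leaves X and Y untouched
>>> len(cca.n_iter_)      # one power-method iteration count per component
1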
- Attributes:
- x_weights_ : ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
- y_weights_ : ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
- x_loadings_ : ndarray of shape (n_features, n_components)
The loadings of X.
- y_loadings_ : ndarray of shape (n_targets, n_components)
The loadings of Y.
- x_rotations_ : ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
- y_rotations_ : ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
- coef_ : ndarray of shape (n_features, n_targets)
The coefficients of the linear model.
- intercept_ : ndarray of shape (n_targets,)
The intercepts of the linear model such that Y is approximated as Y = X @ coef_ + intercept_ (see the sketch after this list). New in version 1.1.
- n_iter_ : list of shape (n_components,)
Number of iterations of the power method, for each component.
- n_features_in_ : int
Number of features seen during fit.
- feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings. New in version 1.0.
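The sketch referred to above: a hedged illustration of the documented relation Y ≈ X @ coef_ + intercept_, reusing the toy data from the sketch under Parameters; the coef_ shape and this relation are as documented here and may differ in other scikit-learn versions.
>>> import numpy as np
>>> from sklearn.cross_decomposition import CCA
>>> X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [3., 5., 4.]])
>>> Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])
>>> cca = CCA(n_components=1).fit(X, Y)
>>> Y_approx = X @ cca.coef_ + cca.intercept_   # should mirror cca.predict(X)
>>> Y_approx.shape                              # (n_samples, n_targets)
(4, 2)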
See also
PLSCanonical : Partial Least Squares transformer and regressor.
PLSSVD : Partial Least Square SVD.
Examples
>>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [3., 5., 4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1)
>>> cca.fit(X, Y)
CCA(n_components=1)
>>> X_c, Y_c = cca.transform(X, Y)
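As a hedged follow-up to the example above (reusing X_c and Y_c from it; the exact value is data-dependent and not asserted here), the score columns returned by transform are the canonical variates, i.e. the projections of X and Y with maximal correlation:
>>> import numpy as np
>>> r = np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1]   # first canonical correlation
>>> bool(r > 0.9)   # expected to be high on this strongly related toy data
True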
Methods
fit(X, Y) : Fit model to data.
fit_transform(X[, y]) : Learn and apply the dimension reduction on the train data.
get_feature_names_out([input_features]) : Get output feature names for transformation.
get_params([deep]) : Get parameters for this estimator.
inverse_transform(X[, Y]) : Transform data back to its original space.
predict(X[, copy]) : Predict targets of given samples.
score(X, y[, sample_weight]) : Return the coefficient of determination of the prediction.
set_params(**params) : Set the parameters of this estimator.
transform(X[, Y, copy]) : Apply the dimension reduction.
- property coef_¶
The coefficients of the linear model.
- fit(X, Y)[source]¶
Fit model to data.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
- Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
- Returns:
- self : object
Fitted model.
- fit_transform(X, y=None)[source]¶
Learn and apply the dimension reduction on the train data.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
- y : array-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
- Returns:
- out : ndarray of shape (n_samples, n_components) or tuple of ndarray
Return x_scores if y is not given, (x_scores, y_scores) otherwise.
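A minimal sketch of the tuple return form, assuming the toy X and Y from the Examples section above (the shapes follow from 4 samples and n_components=1):
>>> x_scores, y_scores = CCA(n_components=1).fit_transform(X, Y)
>>> x_scores.shape, y_scores.shape
((4, 1), (4, 1))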
- get_feature_names_out(input_features=None)[source]¶
Get output feature names for transformation.
- Parameters:
- input_features : array-like of str or None, default=None
Only used to validate feature names with the names seen in fit.
- Returns:
- feature_names_out : ndarray of str objects
Transformed feature names.
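A short sketch, assuming the fitted cca from the Examples section above; the naming scheme shown (lower-cased class name plus component index) is an assumption that may vary across scikit-learn versions:
>>> cca.get_feature_names_out()
array(['cca0'], dtype=object)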
- get_params(deep=True)[source]¶
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- inverse_transform(X, Y=None)[source]¶
Transform data back to its original space.
- Parameters:
- X : array-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of PLS components.
- Y : array-like of shape (n_samples, n_components)
New target, where n_samples is the number of samples and n_components is the number of PLS components.
- Returns:
- X_reconstructed : ndarray of shape (n_samples, n_features)
Return the reconstructed X data.
- Y_reconstructed : ndarray of shape (n_samples, n_targets)
Return the reconstructed Y target. Only returned when Y is given.
Notes
This transformation will only be exact if n_components=n_features.
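A hedged round-trip sketch, reusing cca, X_c and Y_c from the Examples section above; with n_components=1 < n_features the reconstruction is only approximate, as noted:
>>> X_back = cca.inverse_transform(X_c)               # reconstruct X only
>>> X_back.shape
(4, 3)
>>> X_back, Y_back = cca.inverse_transform(X_c, Y_c)  # reconstruct both
>>> Y_back.shape
(4, 2)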
- predict(X, copy=True)[source]¶
Predict targets of given samples.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Samples.
- copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization.
- Returns:
- y_pred : ndarray of shape (n_samples,) or (n_samples, n_targets)
Returns predicted values.
Notes
This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high-dimensional space.
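A minimal usage sketch, assuming the fitted cca and the toy X from the Examples section above:
>>> y_pred = cca.predict(X)
>>> y_pred.shape   # one row per sample, one column per target
(4, 2)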
- score(X, y, sample_weight=None)[source]¶
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
\(R^2\) of self.predict(X) with respect to y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
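A hedged sketch, assuming the fitted cca and the toy X, Y from the Examples section above; the exact score is data-dependent and not asserted:
>>> r2 = cca.score(X, Y)    # uniform average of R^2 over the two targets
>>> bool(r2 <= 1.0)         # R^2 is bounded above by 1
True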
- set_params(**params)[source]¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
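A small sketch of the getter/setter pair; the parameter values here are chosen purely for illustration:
>>> from sklearn.cross_decomposition import CCA
>>> cca = CCA().set_params(n_components=1, max_iter=1000)
>>> cca.get_params()['max_iter']
1000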
- transform(X, Y=None, copy=True)[source]¶
Apply the dimension reduction.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Samples to transform.
- Y : array-like of shape (n_samples, n_targets), default=None
Target vectors.
- copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization.
- Returns:
- x_scores, y_scores : array-like or tuple of array-like
Return x_scores if Y is not given, (x_scores, y_scores) otherwise.
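A minimal sketch of the two return forms, assuming the fitted cca and the toy X, Y from the Examples section above:
>>> cca.transform(X).shape                      # only X: the X scores
(4, 1)
>>> x_scores, y_scores = cca.transform(X, Y)    # both given: a tuple of scores
>>> y_scores.shape
(4, 1)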