class sklearn.decomposition.FactorAnalysis(n_components=None, tol=0.01, copy=True, max_iter=1000, noise_variance_init=None, svd_method='randomized', iterated_power=3, random_state=0)[source]

Factor Analysis (FA)

A simple linear generative model with Gaussian latent variables.

The observations are assumed to be caused by a linear transformation of lower-dimensional latent factors plus additive Gaussian noise. Without loss of generality, the factors are distributed according to a Gaussian with zero mean and unit covariance. The noise is also zero-mean and has an arbitrary diagonal covariance matrix.

If we restrict the model further by assuming that the Gaussian noise is isotropic (all diagonal entries are the same), we obtain probabilistic PCA (PPCA).

FactorAnalysis computes a maximum-likelihood estimate of the so-called loading matrix, the transformation from the latent variables to the observed ones, using expectation-maximization (EM).
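A minimal usage sketch (the synthetic data and parameter choices below are illustrative, not part of the reference):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(100, 10)            # 100 samples, 10 features

    fa = FactorAnalysis(n_components=3, random_state=0)
    fa.fit(X)                         # EM-based maximum-likelihood fit
    print(fa.components_.shape)       # (3, 10): the loading matrix
    print(fa.noise_variance_.shape)   # (10,): per-feature noise variance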


Parameters:

n_components : int | None

Dimensionality of the latent space, i.e. the number of components of X obtained after transform. If None, n_components is set to the number of features.

tol : float

Stopping tolerance for the EM algorithm.

copy : bool

Whether to make a copy of X. If False, the input X gets overwritten during fitting.

max_iter : int

Maximum number of iterations.

noise_variance_init : None | array, shape=(n_features,)

The initial guess of the noise variance for each feature. If None, it defaults to np.ones(n_features).

svd_method : {‘lapack’, ‘randomized’}

Which SVD method to use. If ‘lapack’ use standard SVD from scipy.linalg, if ‘randomized’ use fast randomized_svd function. Defaults to ‘randomized’. For most applications ‘randomized’ will be sufficiently precise while providing significant speed gains. Accuracy can also be improved by setting higher values for iterated_power. If this is not sufficient, for maximum precision you should choose ‘lapack’.

iterated_power : int, optional

Number of iterations for the power method; 3 by default. Only used if svd_method equals ‘randomized’.

random_state : int or RandomState

Pseudo-random number generator state used for random sampling. Only used if svd_method equals ‘randomized’.


Attributes:

components_ : array, [n_components, n_features]

Components with maximum variance.

loglike_ : list, [n_iterations]

The log likelihood at each iteration.

noise_variance_ : array, shape=(n_features,)

The estimated noise variance for each feature.

n_iter_ : int

Number of iterations run.

See also

PCA : Principal component analysis, also a latent linear variable model which, however, assumes equal noise variance for each feature. This extra assumption makes probabilistic PCA faster, as it can be computed in closed form.
FastICA : Independent component analysis, a latent variable model with non-Gaussian latent variables.



__init__(n_components=None, tol=0.01, copy=True, max_iter=1000, noise_variance_init=None, svd_method='randomized', iterated_power=3, random_state=0)[source]
fit(X, y=None)[source]

Fit the FactorAnalysis model to X using EM.


X : array-like, shape (n_samples, n_features)

Training data.


Returns: self

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.


X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.


X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.
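A short sketch of fit_transform on synthetic data (shapes are illustrative):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(50, 8)

    fa = FactorAnalysis(n_components=2, random_state=0)
    Z = fa.fit_transform(X)   # fit the model, then project onto the latent factors
    print(Z.shape)            # (50, 2): n_samples x n_components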


get_covariance()[source]

Compute data covariance with the FactorAnalysis model.

cov = components_.T * components_ + diag(noise_variance)


cov : array, shape (n_features, n_features)

Estimated covariance of data.
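As a sanity check, the returned matrix can be reassembled from the fitted attributes; the data below is a synthetic stand-in:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(200, 5)

    fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
    cov = fa.get_covariance()

    # Same matrix built directly from the formula above
    manual = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
    print(np.allclose(cov, manual))   # True
    print(cov.shape)                  # (5, 5)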


get_params(deep=True)[source]

Get parameters for this estimator.


deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.


params : mapping of string to any

Parameter names mapped to their values.


get_precision()[source]

Compute data precision matrix with the FactorAnalysis model.


precision : array, shape (n_features, n_features)

Estimated precision of data.
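A quick sketch relating the precision to the covariance (up to numerical error, it is the inverse of get_covariance(); synthetic data for illustration):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(200, 4)

    fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
    prec = fa.get_precision()
    print(np.allclose(prec, np.linalg.inv(fa.get_covariance())))  # True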

score(X, y=None)[source]

Compute the average log-likelihood of the samples.


X : array, shape (n_samples, n_features)

The data.


ll : float

Average log-likelihood of the samples under the current model.


score_samples(X)[source]

Compute the log-likelihood of each sample.


X : array, shape (n_samples, n_features)

The data.


ll : array, shape (n_samples,)

Log-likelihood of each sample under the current model.
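A sketch showing how score relates to score_samples on synthetic data (score is the mean of the per-sample log-likelihoods):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(100, 6)

    fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
    per_sample = fa.score_samples(X)            # one log-likelihood per sample
    avg = fa.score(X)                           # their average
    print(per_sample.shape)                     # (100,)
    print(np.isclose(avg, per_sample.mean()))   # True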


set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns: self
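A brief sketch of the nested <component>__<parameter> form, using a hypothetical pipeline for illustration:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis

    pipe = Pipeline([("scale", StandardScaler()), ("fa", FactorAnalysis())])

    # Nested parameters are addressed as <component>__<parameter>
    pipe.set_params(fa__n_components=4, fa__tol=1e-3)
    print(pipe.get_params()["fa__n_components"])   # 4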

transform(X)[source]

Apply dimensionality reduction to X using the model.

Compute the expected mean of the latent variables. See Barber, 21.2.33 (or Bishop, 12.66).


X : array-like, shape (n_samples, n_features)

Training data.


X_new : array-like, shape (n_samples, n_components)

The latent variables of X.
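A short sketch of transform on held-out data (synthetic arrays for illustration):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.RandomState(0)
    X_train = rng.randn(120, 7)
    X_test = rng.randn(30, 7)

    fa = FactorAnalysis(n_components=3, random_state=0).fit(X_train)
    Z = fa.transform(X_test)   # posterior mean of the latent factors per sample
    print(Z.shape)             # (30, 3): n_samples x n_components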