# sklearn.decomposition.non_negative_factorization

sklearn.decomposition.non_negative_factorization(X, W=None, H=None, n_components=None, *, init='warn', update_H=True, solver='cd', beta_loss='frobenius', tol=0.0001, max_iter=200, alpha='deprecated', alpha_W=0.0, alpha_H='same', l1_ratio=0.0, regularization='deprecated', random_state=None, verbose=0, shuffle=False)

Compute Non-negative Matrix Factorization (NMF).

Find two non-negative matrices (W, H) whose product approximates the non-negative matrix X. This factorization can be used, for example, for dimensionality reduction, source separation or topic extraction.

The objective function is:

$$\begin{aligned}
& 0.5 * ||X - WH||_{loss}^2 \\
& + alpha\_W * l1\_ratio * n\_features * ||vec(W)||_1 \\
& + alpha\_H * l1\_ratio * n\_samples * ||vec(H)||_1 \\
& + 0.5 * alpha\_W * (1 - l1\_ratio) * n\_features * ||W||_{Fro}^2 \\
& + 0.5 * alpha\_H * (1 - l1\_ratio) * n\_samples * ||H||_{Fro}^2
\end{aligned}$$

Where:

$$||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2$$ (Frobenius norm)

$$||vec(A)||_1 = \sum_{i,j} abs(A_{ij})$$ (Elementwise L1 norm)

The generic norm $$||X - WH||_{loss}^2$$ may represent the Frobenius norm or another supported beta-divergence loss. The choice between options is controlled by the beta_loss parameter.

The regularization terms are scaled by n_features for W and by n_samples for H, so that their impact stays balanced with respect to one another and to the data fit term, and remains as independent as possible of the size of the training set (n_samples).

The objective function is minimized with an alternating minimization of W and H. If H is given and update_H=False, it solves for W only.
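For illustration, a minimal sketch of both modes (the input matrix here is arbitrary, chosen only to be non-negative):

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

rng = np.random.RandomState(0)
X = np.abs(rng.randn(6, 4))  # any non-negative matrix works

# Alternating minimization of both W and H.
W, H, n_iter = non_negative_factorization(
    X, n_components=2, init='random', random_state=0)

# Hold the learned H fixed and solve for W only, e.g. to project
# data onto an existing set of components.
W_new, _, _ = non_negative_factorization(
    X, H=H, n_components=2, init='custom', update_H=False)
```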

Parameters
X : array-like of shape (n_samples, n_features)

Constant matrix.

W : array-like of shape (n_samples, n_components), default=None

If init=’custom’, it is used as an initial guess for the solution.

H : array-like of shape (n_components, n_features), default=None

If init=’custom’, it is used as an initial guess for the solution. If update_H=False, it is used as a constant, to solve for W only.

n_components : int, default=None

Number of components. If n_components is not set, all features are kept.

init : {‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None

Method used to initialize the procedure.

Valid options:

• None: ‘nndsvd’ if n_components < n_features, otherwise ‘random’.

• ‘random’: non-negative random matrices, scaled with: sqrt(X.mean() / n_components)

• ‘nndsvd’: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)

• ‘nndsvda’: NNDSVD with zeros filled with the average of X (better when sparsity is not desired)

• ‘nndsvdar’: NNDSVD with zeros filled with small random values (generally a faster, less accurate alternative to NNDSVDa, for when sparsity is not desired)

• ‘custom’: use custom matrices W and H if update_H=True. If update_H=False, then only custom matrix H is used.

Changed in version 0.23: The default value of init changed from ‘random’ to None.
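As a quick way to compare these options, one can run each initialization on the same matrix (arbitrary, illustrative data below) and inspect the reconstruction error:

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

X = np.abs(np.random.RandomState(0).randn(20, 10))  # illustrative data

for init in ('random', 'nndsvd', 'nndsvda', 'nndsvdar'):
    W, H, n_iter = non_negative_factorization(
        X, n_components=3, init=init, random_state=0)
    err = np.linalg.norm(X - W @ H)  # Frobenius reconstruction error
    print(f"{init}: error {err:.4f} after {n_iter} iterations")
```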

update_H : bool, default=True

If True, both W and H will be estimated from initial guesses. If False, only W will be estimated.

solver : {‘cd’, ‘mu’}, default=’cd’

Numerical solver to use:

• ‘cd’ is a Coordinate Descent solver that uses Fast Hierarchical Alternating Least Squares (Fast HALS).

• ‘mu’ is a Multiplicative Update solver.

New in version 0.17: Coordinate Descent solver.

New in version 0.19: Multiplicative Update solver.

beta_loss : float or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’

Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in the ‘mu’ solver.

New in version 0.19.
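For example, a sketch of a Kullback-Leibler fit on synthetic count data (the data and shapes here are assumptions for illustration); note that non-Frobenius losses require solver=’mu’:

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

rng = np.random.RandomState(0)
X = rng.poisson(5, size=(10, 8)).astype(float)  # non-negative count data

# beta_loss values other than 'frobenius' are only supported by the
# 'mu' (Multiplicative Update) solver. Zeros in X are fine for the
# Kullback-Leibler loss; only beta_loss <= 0 forbids them.
W, H, n_iter = non_negative_factorization(
    X, n_components=3, init='random', solver='mu',
    beta_loss='kullback-leibler', random_state=0)
```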

tol : float, default=1e-4

Tolerance of the stopping condition.

max_iter : int, default=200

Maximum number of iterations before timing out.

alpha : float, default=0.0

Constant that multiplies the regularization terms. Set it to zero to have no regularization. When using alpha instead of alpha_W and alpha_H, the regularization terms are not scaled by the n_features (resp. n_samples) factors for W (resp. H).

Deprecated since version 1.0: The alpha parameter is deprecated in 1.0 and will be removed in 1.2. Use alpha_W and alpha_H instead.
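A sketch of the intended migration (the value 0.1 is arbitrary); note that it is not a numerically exact replacement, since alpha_W and alpha_H are additionally scaled by n_features and n_samples:

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

X = np.abs(np.random.RandomState(0).randn(6, 4))  # illustrative data

# Deprecated style (removed in 1.2):
#   non_negative_factorization(X, n_components=2, init='random',
#                              alpha=0.1, random_state=0)
# Replacement: separate per-factor penalties. These are additionally
# scaled by n_features (for W) and n_samples (for H), so the result is
# not numerically identical for the same constant.
W, H, _ = non_negative_factorization(
    X, n_components=2, init='random',
    alpha_W=0.1, alpha_H=0.1, random_state=0)
```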

alpha_W : float, default=0.0

Constant that multiplies the regularization terms of W. Set it to zero (default) to have no regularization on W.

New in version 1.0.

alpha_H : float or “same”, default=”same”

Constant that multiplies the regularization terms of H. Set it to zero to have no regularization on H. If “same” (default), it takes the same value as alpha_W.

New in version 1.0.

l1_ratio : float, default=0.0

The regularization mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1_ratio = 1 it is an elementwise L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
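A sketch of the effect on sparsity, with arbitrary data and an alpha_W value chosen purely for illustration; a pure L1 penalty tends to drive more entries of W to exactly zero:

```python
import numpy as np
from sklearn.decomposition import non_negative_factorization

X = np.abs(np.random.RandomState(0).randn(30, 20))  # illustrative data

# Compare a pure L2 penalty (l1_ratio=0) with a pure L1 penalty (l1_ratio=1).
for l1_ratio in (0.0, 1.0):
    W, H, _ = non_negative_factorization(
        X, n_components=5, init='random', alpha_W=0.5,
        l1_ratio=l1_ratio, random_state=0)
    print(f"l1_ratio={l1_ratio}: {np.mean(W == 0):.0%} of W entries are zero")
```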

regularization : {‘both’, ‘components’, ‘transformation’}, default=None

Select whether the regularization affects the components (H), the transformation (W), both or none of them.

Deprecated since version 1.0: The regularization parameter is deprecated in 1.0 and will be removed in 1.2. Use alpha_W and alpha_H instead.

random_state : int, RandomState instance or None, default=None

Used for NMF initialisation (when init == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See Glossary.

verbose : int, default=0

The verbosity level.

shuffle : bool, default=False

If True, randomize the order of coordinates in the CD solver.

Returns
W : ndarray of shape (n_samples, n_components)

Solution to the non-negative least squares problem.

H : ndarray of shape (n_components, n_features)

Solution to the non-negative least squares problem.

n_iter : int

Actual number of iterations.

References

Cichocki, Andrzej, and Anh-Huy Phan. “Fast local algorithms for large scale nonnegative matrix and tensor factorizations.” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 92(3): 708-721, 2009.

Fevotte, Cedric, and Jerome Idier. “Algorithms for nonnegative matrix factorization with the beta-divergence.” Neural Computation, 23(9), 2011.

Examples

>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import non_negative_factorization
>>> W, H, n_iter = non_negative_factorization(X, n_components=2,
... init='random', random_state=0)
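
As a follow-up (not part of the library’s own example), the returned factors can be multiplied back together to gauge the quality of the approximation:

>>> err = np.linalg.norm(X - W @ H)  # Frobenius reconstruction error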