sklearn.decomposition.dict_learning_online

sklearn.decomposition.dict_learning_online(X, n_components=2, *, alpha=1, n_iter='deprecated', max_iter=None, return_code=True, dict_init=None, callback=None, batch_size='warn', verbose=False, shuffle=True, n_jobs=None, method='lars', iter_offset='deprecated', random_state=None, return_inner_stats='deprecated', inner_stats='deprecated', return_n_iter='deprecated', positive_dict=False, positive_code=False, method_max_iter=1000, tol=0.001, max_no_improvement=10)

Solve a dictionary learning matrix factorization problem online.

Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

(U^*, V^*) = argmin_{(U, V)} 0.5 * || X - U V ||_Fro^2 + alpha * || U ||_1,1
             with || V_k ||_2 = 1 for all 0 <= k < n_components


where V is the dictionary and U is the sparse code. ||.||_Fro stands for the Frobenius norm and ||.||_1,1 stands for the entry-wise matrix norm, i.e. the sum of the absolute values of all the entries in the matrix. This is accomplished by repeatedly iterating over mini-batches obtained by slicing the input data.

Read more in the User Guide.

Parameters:
X : ndarray of shape (n_samples, n_features)

Data matrix.

n_components : int or None, default=2

Number of dictionary atoms to extract. If None, then n_components is set to n_features.

alpha : float, default=1

Sparsity controlling parameter.

n_iter : int, default=100

Number of mini-batch iterations to perform.

Deprecated since version 1.1: n_iter is deprecated in 1.1 and will be removed in 1.4. Use max_iter instead.

max_iter : int, default=None

Maximum number of iterations over the complete dataset before stopping, independent of any early-stopping heuristics. If max_iter is not None, n_iter is ignored.

New in version 1.1.

return_code : bool, default=True

Whether to also return the code U or just the dictionary V.

dict_init : ndarray of shape (n_components, n_features), default=None

Initial values for the dictionary for warm restart scenarios. If None, the initial values for the dictionary are created with an SVD of the data via randomized_svd.

callback : callable, default=None

A callable that gets invoked at the end of each iteration.

batch_size : int, default=3

The number of samples to take in each batch.

Changed in version 1.3: The default value of batch_size will change from 3 to 256.

verbose : bool, default=False

Controls the verbosity of the procedure.

shuffle : bool, default=True

Whether to shuffle the data before splitting it in batches.

n_jobs : int, default=None

Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

method : {'lars', 'cd'}, default='lars'
• 'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path);

• 'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso).

Lars will be faster if the estimated components are sparse.

iter_offset : int, default=0

Number of previous iterations completed on the dictionary used for initialization.

Deprecated since version 1.1: iter_offset serves internal purpose only and will be removed in 1.3.

random_state : int, RandomState instance or None, default=None

Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.

return_inner_stats : bool, default=False

Return the inner statistics A (dictionary covariance) and B (data approximation). Useful to restart the algorithm in an online setting. If return_inner_stats is True, return_code is ignored.

Deprecated since version 1.1: return_inner_stats serves internal purpose only and will be removed in 1.3.

inner_stats : tuple of (A, B) ndarrays, default=None

Inner sufficient statistics that are kept by the algorithm. Passing them at initialization is useful in online settings, to avoid losing the history of the evolution. A (n_components, n_components) is the dictionary covariance matrix. B (n_features, n_components) is the data approximation matrix.

Deprecated since version 1.1: inner_stats serves internal purpose only and will be removed in 1.3.

return_n_iter : bool, default=False

Whether or not to return the number of iterations.

Deprecated since version 1.1: return_n_iter will be removed in 1.3 and n_iter will always be returned.

positive_dict : bool, default=False

Whether to enforce positivity when finding the dictionary.

New in version 0.20.

positive_code : bool, default=False

Whether to enforce positivity when finding the code.

New in version 0.20.

method_max_iter : int, default=1000

Maximum number of iterations to perform when solving the lasso problem.

New in version 0.22.

tol : float, default=1e-3

Controls early stopping based on the norm of the difference in the dictionary between two steps. Used only if max_iter is not None.

To disable early stopping based on changes in the dictionary, set tol to 0.0.

New in version 1.1.

max_no_improvement : int, default=10

Controls early stopping based on the number of consecutive mini-batches that do not yield an improvement on the smoothed cost function. Used only if max_iter is not None.

To disable convergence detection based on cost function, set max_no_improvement to None.

New in version 1.1.

Returns:
code : ndarray of shape (n_samples, n_components)

The sparse code (only returned if return_code=True).

dictionary : ndarray of shape (n_components, n_features)

The solution to the dictionary learning problem.

n_iter : int

Number of iterations run. Returned only if return_n_iter is set to True.

See Also:

dict_learning

Solve a dictionary learning matrix factorization problem.

DictionaryLearning

Find a dictionary that sparsely encodes data.

MiniBatchDictionaryLearning

A faster, less accurate version of the dictionary learning algorithm.

SparsePCA

Sparse Principal Components Analysis.

MiniBatchSparsePCA

Mini-batch Sparse Principal Components Analysis.