# sklearn.utils.parallel.Parallel

class sklearn.utils.parallel.Parallel(n_jobs=None, backend=None, verbose=0, timeout=None, pre_dispatch='2 * n_jobs', batch_size='auto', temp_folder=None, max_nbytes='1M', mmap_mode='r', prefer=None, require=None)[source]

Tweak of joblib.Parallel that propagates the scikit-learn configuration.

This subclass of joblib.Parallel ensures that the active configuration (thread-local) of scikit-learn is propagated to the parallel workers for the duration of the execution of the parallel tasks.

The API does not change, and you can refer to the joblib.Parallel documentation for more details.

New in version 1.3.

Methods

| Method | Description |
| --- | --- |
| `__call__(iterable)` | Dispatch the tasks and return the results. |
| `dispatch_next()` | Dispatch more data for parallel processing. |
| `dispatch_one_batch(iterator)` | Prefetch the tasks for the next batch and dispatch them. |
| `format(obj[, indent])` | Return the formatted representation of the object. |
| `print_progress()` | Display the progress of the parallel execution only a fraction of the time, controlled by `self.verbose`. |

The `debug`, `retrieve`, and `warn` methods are inherited from `joblib.Parallel`; see its documentation.
dispatch_next()[source]

Dispatch more data for parallel processing.

This method is meant to be called concurrently by the multiprocessing callback. We rely on the thread-safety of dispatch_one_batch to protect against concurrent consumption of the unprotected iterator.

dispatch_one_batch(iterator)[source]

Prefetch the tasks for the next batch and dispatch them.

The effective size of the batch is computed here. If there are no more jobs to dispatch, return False, else return True.

The iterator consumption and dispatching are protected by the same lock, so calling this function should be thread-safe.

format(obj, indent=0)[source]

Return the formatted representation of the object.

print_progress()[source]

Display the progress of the parallel execution only a fraction of the time, controlled by `self.verbose`.