learning_curve(estimator, X, y, *, groups=None, train_sizes=array([0.1, 0.325, 0.55, 0.775, 1.0]), cv=None, scoring=None, exploit_incremental_learning=False, n_jobs=None, pre_dispatch='all', verbose=0, shuffle=False, random_state=None, error_score=nan, return_times=False)
Determines cross-validated training and test scores for different training set sizes.
A cross-validation generator splits the whole dataset k times into training and test data. Subsets of the training set with varying sizes will be used to train the estimator, and a score for each training subset size and the test set will be computed. Afterwards, the scores will be averaged over all k runs for each training subset size.
Read more in the User Guide.
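A minimal sketch of typical usage (the iris dataset and DecisionTreeClassifier here are illustrative choices, not part of this API):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Five subset sizes with 5-fold CV: scores come back as
# (n_ticks, n_cv_folds) arrays, one row per training-set size.
train_sizes, train_scores, test_scores = learning_curve(
    DecisionTreeClassifier(random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)
print(train_scores.shape)  # (5, 5)
print(test_scores.shape)   # (5, 5)
```

Averaging each row (e.g. `test_scores.mean(axis=1)`) gives the points typically plotted as a learning curve.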
- estimator : object type that implements the “fit” and “predict” methods
An object of that type which is cloned for each validation.
- X : array-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
Target relative to X for classification or regression; None for unsupervised learning.
- groups : array-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).
- train_sizes : array-like of shape (n_ticks,), default=np.linspace(0.1, 1.0, 5)
Relative or absolute numbers of training examples that will be used to generate the learning curve. If the dtype is float, it is regarded as a fraction of the maximum size of the training set (that is determined by the selected validation method), i.e. it has to be within (0, 1]. Otherwise it is interpreted as absolute sizes of the training sets. Note that for classification the number of samples usually has to be big enough to contain at least one sample from each class.
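The fractional and absolute forms are interchangeable; a small sketch on iris (150 samples, so 120 training samples under 5-fold CV — the dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
est = DecisionTreeClassifier(random_state=0)

# Fractions of the maximum training-set size (120 samples with cv=5)...
sizes_rel, _, _ = learning_curve(est, X, y, train_sizes=[0.25, 0.5, 1.0], cv=5)
# ...versus absolute numbers of training samples.
sizes_abs, _, _ = learning_curve(est, X, y, train_sizes=[30, 60, 120], cv=5)

print(sizes_rel)  # [ 30  60 120]
print(sizes_abs)  # [ 30  60 120]
```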
- cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
None, to use the default 5-fold cross validation,
int, to specify the number of folds in a (Stratified)KFold,
CV splitter,
An iterable yielding (train, test) splits as arrays of indices.
Refer to the User Guide for the various cross-validation strategies that can be used here.
Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
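The three most common cv inputs side by side; the number of CV folds shows up as the second dimension of the score arrays (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
est = DecisionTreeClassifier(random_state=0)

_, scores_default, _ = learning_curve(est, X, y, cv=None)  # default 5-fold
_, scores_int, _ = learning_curve(est, X, y, cv=3)         # 3 folds
splitter = KFold(n_splits=4, shuffle=True, random_state=0)
_, scores_split, _ = learning_curve(est, X, y, cv=splitter)

# Columns = number of CV folds.
print(scores_default.shape[1], scores_int.shape[1], scores_split.shape[1])  # 5 3 4
```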
- scoring : str or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
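Both forms in one sketch: a metric name string versus a callable with the `scorer(estimator, X, y)` signature. With a deterministic estimator both should produce identical accuracy scores here (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
est = DecisionTreeClassifier(random_state=0)

# By name: any metric key from the model evaluation documentation.
_, _, by_name = learning_curve(est, X, y, cv=5, scoring="accuracy")

# As a callable with signature scorer(estimator, X, y).
def scorer(estimator, X, y):
    return accuracy_score(y, estimator.predict(X))

_, _, by_callable = learning_curve(est, X, y, cv=5, scoring=scorer)
print((by_name == by_callable).all())  # True
```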
- exploit_incremental_learning : bool, default=False
If the estimator supports incremental learning, this will be used to speed up fitting for different training set sizes.
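This requires an estimator exposing partial_fit; instead of refitting from scratch at every training-set size, the model is grown incrementally. A sketch with SGDClassifier, which supports incremental learning (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)

# SGDClassifier implements partial_fit, so the incremental path is usable;
# the returned arrays have the same shape as in the default (refitting) mode.
sizes, train_scores, test_scores = learning_curve(
    SGDClassifier(random_state=0),
    X, y,
    cv=5,
    exploit_incremental_learning=True,
)
print(train_scores.shape)  # (5, 5)
```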
- n_jobs : int, default=None
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets.
None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- pre_dispatch : int or str, default=’all’
Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like ‘2*n_jobs’.
- verbose : int, default=0
Controls the verbosity: the higher, the more messages.
- shuffle : bool, default=False
Whether to shuffle training data before taking prefixes of it based on ``train_sizes``.
- random_state : int or RandomState instance, default=None
Used when ``shuffle`` is True. Pass an int for reproducible output across multiple function calls. See Glossary.
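Reproducibility in practice: fixing random_state makes repeated calls with shuffle=True return identical results (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
est = DecisionTreeClassifier(random_state=0)

# Same seed -> identical shuffled prefixes -> identical score arrays.
r1 = learning_curve(est, X, y, cv=5, shuffle=True, random_state=0)
r2 = learning_curve(est, X, y, cv=5, shuffle=True, random_state=0)
print(all((a == b).all() for a, b in zip(r1, r2)))  # True
```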
- error_score : ‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, a FitFailedWarning is raised.
New in version 0.20.
- return_times : bool, default=False
Whether to return the fit and score times.
Returns
- train_sizes_abs : array of shape (n_unique_ticks,)
Numbers of training examples that have been used to generate the learning curve. Note that the number of ticks might be less than n_ticks because duplicate entries will be removed.
- train_scores : array of shape (n_ticks, n_cv_folds)
Scores on training sets.
- test_scores : array of shape (n_ticks, n_cv_folds)
Scores on test set.
- fit_times : array of shape (n_ticks, n_cv_folds)
Times spent for fitting in seconds. Only present if ``return_times`` is True.
- score_times : array of shape (n_ticks, n_cv_folds)
Times spent for scoring in seconds. Only present if ``return_times`` is True.
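With return_times=True the call unpacks into five arrays instead of three; the timing arrays share the scores' (n_ticks, n_cv_folds) shape (dataset and estimator are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Two extra arrays of per-fold timings are appended to the return value.
sizes, train_scores, test_scores, fit_times, score_times = learning_curve(
    DecisionTreeClassifier(random_state=0),
    X, y,
    cv=5,
    return_times=True,
)
print(fit_times.shape == train_scores.shape)  # True
print((score_times >= 0).all())               # True
```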