check_estimator
- sklearn.utils.estimator_checks.check_estimator(estimator=None, generate_only=False, *, legacy: bool = True, expected_failed_checks: dict[str, str] | None = None, on_skip: Literal['warn'] | None = 'warn', on_fail: Literal['raise', 'warn'] | None = 'raise', callback: Callable | None = None)
Check if estimator adheres to scikit-learn conventions.
This function will run an extensive test suite for input validation, shapes, etc., making sure that the estimator complies with scikit-learn conventions as detailed in Rolling your own estimator. Additional tests for classifiers, regressors, clustering or transformers will be run if the estimator class inherits from the corresponding mixin from sklearn.base.
scikit-learn also provides a pytest-specific decorator, parametrize_with_checks, making it easier to test multiple estimators (a usage sketch follows the list below).
Checks are categorised into the following groups:
- API checks: a set of checks to ensure API compatibility with scikit-learn. Refer to https://scikit-learn.org/dev/developers/develop.html for the requirements that scikit-learn estimators must satisfy.
- legacy: a set of checks which will gradually be grouped into the other categories.
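As a minimal sketch of that pytest-based workflow, the parametrize_with_checks decorator can be applied to a test function so that pytest collects one test per (estimator, check) pair; the estimators below are chosen only for illustration:

    # test_my_estimators.py -- run with `pytest test_my_estimators.py`
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.utils.estimator_checks import parametrize_with_checks

    @parametrize_with_checks([LogisticRegression(), DecisionTreeRegressor()])
    def test_sklearn_compatible_estimator(estimator, check):
        # Each check runs as its own pytest test case.
        check(estimator)

With this layout, a failing check shows up as an individual pytest failure instead of aborting the whole run.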
- Parameters:
- estimator : estimator object
Estimator instance to check.
- generate_only : bool, default=False
When False, checks are evaluated when check_estimator is called. When True, check_estimator returns a generator that yields (estimator, check) tuples. The check is run by calling check(estimator).
Added in version 0.22.
Deprecated since version 1.6: generate_only will be removed in 1.8. Use estimator_checks_generator instead.
- legacy : bool, default=True
Whether to include legacy checks. Over time we remove checks from this category and move them into their specific category.
Added in version 1.6.
- expected_failed_checks : dict, default=None
A dictionary of the form:

    {
        "check_name": "this check is expected to fail because ...",
    }

where "check_name" is the name of the check and the value is the reason why that check is expected to fail. See the sketch following this parameter list for an example.
Added in version 1.6.
- on_skip : "warn", None, default="warn"
This parameter controls what happens when a check is skipped.
- "warn": A SkipTestWarning is logged and running tests continue.
- None: No warning is logged and running tests continue.
Added in version 1.6.
- on_fail : {"raise", "warn"}, None, default="raise"
This parameter controls what happens when a check fails.
- "raise": The exception raised by the first failing check is raised and running tests are aborted. This does not include tests that are expected to fail.
- "warn": An EstimatorCheckFailedWarning is logged and running tests continue.
- None: No exception is raised and no warning is logged.
Note that if on_fail != "raise", no exception is raised, even if the checks fail. You'd need to inspect the return value of check_estimator to check if any checks failed.
Added in version 1.6.
- callback : callable, or None, default=None
This callback will be called with the estimator and the check name, the exception (if any), the status of the check (xfail, failed, skipped, passed), and the reason for the expected failure if the check is expected to fail. The callable's signature needs to be:

    def callback(
        estimator,
        check_name: str,
        exception: Exception,
        status: Literal["xfail", "failed", "skipped", "passed"],
        expected_to_fail: bool,
        expected_to_fail_reason: str,
    )

callback cannot be provided together with on_fail="raise".
Added in version 1.6.
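As a sketch of how expected_failed_checks, on_fail and callback can work together, the snippet below passes a placeholder entry and a small logging callback; the check name, the failure reason and the callback body are hypothetical and only illustrate the shapes of the arguments, not documented behaviour of LogisticRegression:

    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.estimator_checks import check_estimator

    def log_check(estimator, check_name, exception, status,
                  expected_to_fail, expected_to_fail_reason):
        # Hypothetical callback: print a one-line summary for every check.
        print(f"{type(estimator).__name__}: {check_name} -> {status}")

    results = check_estimator(
        LogisticRegression(),
        # Placeholder: in real use, map the name of a check that is known to
        # fail for your estimator to the reason it is expected to fail.
        expected_failed_checks={"check_name_here": "known limitation of this estimator"},
        on_fail=None,        # do not raise; outcomes are returned in `results`
        callback=log_check,  # a callback requires on_fail != "raise"
    )

Because on_fail is not "raise", check_estimator returns normally and results holds the per-check dictionaries described under Returns below.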
- Returns:
- test_results : list
List of dictionaries with the results of the failing tests, of the form:

    {
        "estimator": estimator,
        "check_name": check_name,
        "exception": exception,
        "status": status (one of "xfail", "failed", "skipped", "passed"),
        "expected_to_fail": expected_to_fail,
        "expected_to_fail_reason": expected_to_fail_reason,
    }
- estimator_checks_generator : generator
Generator that yields (estimator, check) tuples. Returned when generate_only=True.
Deprecated since version 1.6: generate_only will be removed in 1.8. Use the estimator_checks_generator function instead (a sketch is given at the end of the Examples section).
- Raises:
- Exception
If on_fail="raise", the exception raised by the first failing check is raised and running tests are aborted.
Note that if on_fail != "raise", no exception is raised, even if the checks fail. You'd need to inspect the return value of check_estimator to check if any checks failed, as sketched below.
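For instance, a minimal sketch of inspecting the return value when on_fail is not "raise" (LogisticRegression is used only because it is expected to pass the checks):

    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.estimator_checks import check_estimator

    results = check_estimator(LogisticRegression(), on_fail="warn")
    # Collect the names of checks whose status is "failed"; this list should
    # be empty when every check passes.
    failed = [r["check_name"] for r in results if r["status"] == "failed"]
    print(failed)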
See also
parametrize_with_checks
Pytest-specific decorator for parametrizing estimator checks.
estimator_checks_generator
Generator that yields (estimator, check) tuples.
Examples
>>> from sklearn.utils.estimator_checks import check_estimator
>>> from sklearn.linear_model import LogisticRegression
>>> check_estimator(LogisticRegression())
[...]
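Since generate_only is deprecated, here is a minimal sketch of iterating over the checks yourself with the estimator_checks_generator function mentioned above; it assumes the function can be called with just the estimator instance, that it yields (estimator, check) tuples as described under Returns, and that inapplicable checks raise SkipTest:

    from unittest import SkipTest

    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.estimator_checks import estimator_checks_generator

    for estimator, check in estimator_checks_generator(LogisticRegression()):
        try:
            check(estimator)
        except SkipTest as exc:
            # Some checks skip themselves when they do not apply.
            print(f"skipped: {exc}")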
Gallery examples
Release Highlights for scikit-learn 1.6