sklearn.metrics.coverage_error

sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None)

Coverage error measure.

Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample.

Ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values.

Note: Our implementation’s score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels.

Read more in the User Guide.
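For intuition, the per-sample coverage is the number of labels whose score is at least the lowest score among that sample's true labels; averaging these counts over samples gives the reported error. The following is a minimal NumPy sketch of that computation (an illustration of the definition, not the library's exact implementation; the array values are made up):

>>> import numpy as np
>>> y_true = np.array([[1, 0, 0], [0, 1, 1]])
>>> y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
>>> # Lowest score among each sample's true labels (inf if there are
>>> # none, so a sample with zero true labels contributes 0).
>>> min_relevant = np.where(y_true, y_score, np.inf).min(axis=1, keepdims=True)
>>> # Coverage per sample: how many labels score at least that threshold.
>>> # Counting with >= gives tied values their maximal rank.
>>> per_sample = (y_score >= min_relevant).sum(axis=1)
>>> per_sample
array([2, 3])
>>> float(per_sample.mean())
2.5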

Parameters:
y_true : array-like of shape (n_samples, n_labels)

True binary labels in binary indicator format.

y_score : array-like of shape (n_samples, n_labels)

Target scores, which can be probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by decision_function on some classifiers).

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
coverage_error : float

The coverage error.

References

[1]

Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

Examples

>>> from sklearn.metrics import coverage_error
>>> y_true = [[1, 0, 0], [0, 1, 1]]
>>> y_score = [[1, 0, 0], [0, 1, 1]]
>>> coverage_error(y_true, y_score)
1.5
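The tie-breaking rule and the sample_weight parameter can be seen directly (illustrative values):

>>> # All three scores are tied with the true label's score of 0.5, so
>>> # the true label is given the maximal rank among the ties: coverage 3.
>>> coverage_error([[1, 0, 0]], [[0.5, 0.5, 0.5]])
3.0
>>> # With sample_weight, the second sample (coverage 2) is ignored and
>>> # only the first sample (coverage 1) counts.
>>> coverage_error([[1, 0, 0], [0, 1, 1]], [[1, 0, 0], [0, 1, 1]],
...                sample_weight=[1, 0])
1.0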