sklearn.metrics.label_ranking_average_precision_score
- sklearn.metrics.label_ranking_average_precision_score(y_true, y_score, *, sample_weight=None)[source]
Compute ranking-based average precision.
Label ranking average precision (LRAP) is the average, over each ground truth label assigned to each sample, of the ratio of true to total labels ranked at least as high (i.e., with a score greater than or equal to that label's score).
This metric is used in multilabel ranking problems, where the goal is to give a better rank to the labels associated with each sample.
The obtained score is always strictly greater than 0 and the best value is 1.
Read more in the User Guide.
- Parameters:
- y_true : {array-like, sparse matrix} of shape (n_samples, n_labels)
True binary labels in binary indicator format.
- y_score : array-like of shape (n_samples, n_labels)
Target scores, which can be probability estimates of the positive class, confidence values, or non-thresholded measures of decisions (as returned by decision_function on some classifiers).
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
New in version 0.20.
- Returns:
- score : float
Ranking-based average precision score.
Examples
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_average_precision_score(y_true, y_score)
0.416...
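To make the ranking arithmetic concrete, the following is a minimal sketch that recomputes the score by hand on the same data and compares it against the library function. lrap_sketch is a hypothetical helper written for illustration and is not part of scikit-learn; it assumes dense binary indicator input, ignores sample_weight, and treats rows with no relevant labels (or all labels relevant) as scoring 1, mirroring the degenerate-case convention.

import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

def lrap_sketch(y_true, y_score):
    """Illustrative LRAP computation for dense inputs (no sample_weight)."""
    per_sample = []
    for truth, scores in zip(y_true, y_score):
        relevant = np.flatnonzero(truth)
        if relevant.size in (0, truth.size):
            # Degenerate rows (no relevant labels, or all relevant) count as 1.
            per_sample.append(1.0)
            continue
        precisions = []
        for j in relevant:
            # Rank of label j: number of labels scored at least as high as it.
            rank = np.sum(scores >= scores[j])
            # Number of those at-least-as-high labels that are true labels.
            hits = np.sum(scores[relevant] >= scores[j])
            precisions.append(hits / rank)
        per_sample.append(np.mean(precisions))
    return float(np.mean(per_sample))

y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])

print(lrap_sketch(y_true, y_score))                            # 0.4166...
print(label_ranking_average_precision_score(y_true, y_score))  # 0.4166...

# sample_weight weights the per-sample averages; zeroing out the second
# sample leaves only the first sample's per-sample score of 0.5.
print(label_ranking_average_precision_score(
    y_true, y_score, sample_weight=[1.0, 0.0]))                # 0.5

In the first sample, the single true label (score 0.75) is outranked only by the label scored 1, giving 1/2; in the second, the true label (score 0.1) ranks last among three, giving 1/3; their unweighted mean is 5/12 = 0.416..., matching the example above.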