- sklearn.metrics.recall_score(y_true, y_pred, labels=None, pos_label=1, average='weighted', sample_weight=None)
Compute the recall.
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
The best value is 1 and the worst value is 0.
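The ratio above can be checked by hand. A minimal sketch, assuming scikit-learn is installed; the toy labels are made up for illustration:

```python
# Compute recall from raw tp/fn counts and compare with
# sklearn.metrics.recall_score on the same binary labels.
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

# true positives: truly positive and predicted positive
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
# false negatives: truly positive but predicted negative
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

manual = tp / (tp + fn)          # 3 / (3 + 1) = 0.75
assert manual == recall_score(y_true, y_pred)
```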
Parameters
y_true : array-like or label indicator matrix
Ground truth (correct) target values.
y_pred : array-like or label indicator matrix
Estimated targets as returned by a classifier.
labels : array
Integer array of labels.
pos_label : str or int, 1 by default
If average is not None and the classification target is binary, only this class’s scores will be returned.
average : string, [None, ‘micro’, ‘macro’, ‘samples’, ‘weighted’ (default)]
If None, the scores for each class are returned. Otherwise, unless pos_label is given in binary classification, this determines the type of averaging performed on the data:
‘micro’:
Calculate metrics globally by counting the total true positives, false negatives and false positives.
‘macro’:
Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
‘weighted’:
Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
‘samples’:
Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
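The averaging options above diverge once the classes are imbalanced. A minimal sketch, assuming scikit-learn is installed; the toy labels (class 0 dominating) are made up for illustration:

```python
# Contrast the `average` options on an imbalanced multiclass problem.
from sklearn.metrics import recall_score

y_true = [0, 0, 0, 0, 1, 2]   # four samples of class 0, one each of 1 and 2
y_pred = [0, 0, 0, 0, 0, 2]   # the single class-1 sample is missed

# average=None: one recall per class -> [1.0, 0.0, 1.0]
per_class = recall_score(y_true, y_pred, average=None)

# 'macro': unweighted mean of per-class recalls -> (1 + 0 + 1) / 3
macro = recall_score(y_true, y_pred, average='macro')

# 'weighted': mean weighted by support -> (4*1 + 1*0 + 1*1) / 6
weighted = recall_score(y_true, y_pred, average='weighted')

# 'micro': global tp / (tp + fn) over all classes -> 5 / 6
micro = recall_score(y_true, y_pred, average='micro')
```

Note how 'macro' penalizes the missed minority class much more heavily than 'weighted' or 'micro' do.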
sample_weight : array-like of shape = [n_samples], optional
Sample weights.
Returns
recall : float (if average is not None) or array of float, shape = [n_unique_labels]
Recall of the positive class in binary classification or weighted average of the recall of each class for the multiclass task.
Examples
>>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([ 1.,  0.,  0.])
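For binary targets, pos_label selects which class is treated as "positive" when a single score is returned. A minimal sketch, assuming scikit-learn is installed; the toy labels are made up for illustration:

```python
# pos_label selects the class whose recall is reported for binary targets.
from sklearn.metrics import recall_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# recall of class 1: 2 of the 3 true positives are recovered -> 2/3
r1 = recall_score(y_true, y_pred, pos_label=1)

# recall of class 0: both true negatives are recovered -> 1.0
r0 = recall_score(y_true, y_pred, pos_label=0)
```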