balanced_accuracy_score

sklearn.metrics.balanced_accuracy_score(y_true, y_pred, *, sample_weight=None, adjusted=False)

Compute the balanced accuracy.

The balanced accuracy is used in binary and multiclass classification problems to deal with imbalanced datasets. It is defined as the average of the recall obtained on each class.

The best value is 1 and the worst value is 0 when adjusted=False.
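For intuition (not part of the original reference): because the score is the mean of the per-class recalls, it coincides with recall_score using average='macro'. A minimal sketch; the exact output representation may vary across scikit-learn versions:

>>> from sklearn.metrics import balanced_accuracy_score, recall_score
>>> y_true = [0, 0, 0, 1, 1, 2]
>>> y_pred = [0, 0, 1, 1, 1, 0]
>>> balanced_accuracy_score(y_true, y_pred)  # mean of per-class recalls
0.555...
>>> recall_score(y_true, y_pred, average='macro')  # same quantity
0.555...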

Read more in the User Guide.

Added in version 0.20.

Parameters:
y_true : array-like of shape (n_samples,)

Ground truth (correct) target values.

y_pred : array-like of shape (n_samples,)

Estimated targets as returned by a classifier.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

adjusted : bool, default=False

When True, the result is adjusted for chance, so that random performance would score 0, while keeping perfect performance at a score of 1 (see the sketch below).
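As an illustration of adjusted (not part of the original docstring): a degenerate classifier that always predicts the majority class scores 1/n_classes without adjustment, and 0 once adjusted for chance. A minimal sketch; exact output formatting may vary across scikit-learn versions:

>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 1, 0, 0, 1, 0]
>>> y_pred = [0, 0, 0, 0, 0, 0]  # always predicts the majority class
>>> balanced_accuracy_score(y_true, y_pred)
0.5
>>> balanced_accuracy_score(y_true, y_pred, adjusted=True)
0.0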

Returns:
balanced_accuracy : float

Balanced accuracy score.

See also

average_precision_score

Compute average precision (AP) from prediction scores.

precision_score

Compute the precision score.

recall_score

Compute the recall score.

roc_auc_score

Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.

Notes

Some literature promotes alternative definitions of balanced accuracy. Our definition is equivalent to accuracy_score with class-balanced sample weights, and shares desirable properties with the binary case. See the User Guide.
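That equivalence can be checked directly. A hedged sketch (not part of the original notes), weighting each sample by the inverse frequency of its true class so that every class contributes equally:

>>> import numpy as np
>>> from sklearn.metrics import accuracy_score, balanced_accuracy_score
>>> y_true = np.array([0, 1, 0, 0, 1, 0])
>>> y_pred = np.array([0, 1, 0, 0, 0, 1])
>>> # class-balanced weights: each sample weighted by 1 / (count of its true class)
>>> _, inverse, counts = np.unique(y_true, return_inverse=True, return_counts=True)
>>> weights = 1.0 / counts[inverse]
>>> accuracy_score(y_true, y_pred, sample_weight=weights)
0.625
>>> balanced_accuracy_score(y_true, y_pred)
0.625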

References

[1]

Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-24.

Examples

>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 1, 0, 0, 1, 0]
>>> y_pred = [0, 1, 0, 0, 0, 1]
>>> balanced_accuracy_score(y_true, y_pred)
np.float64(0.625)
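Beyond the single-call form above, the metric is also exposed through the scoring string "balanced_accuracy" (added in the same 0.20 release). A brief, hedged usage sketch on a synthetic imbalanced dataset; the estimator and dataset settings here are illustrative assumptions, not part of the original reference:

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import cross_val_score
>>> X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)
>>> scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
...                          scoring="balanced_accuracy", cv=5)
>>> scores.shape
(5,)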