{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Probability Calibration curves\n\n\nWhen performing classification one often wants to predict not only the class\nlabel, but also the associated probability. This probability gives some\nkind of confidence on the prediction. This example demonstrates how to display\nhow well calibrated the predicted probabilities are and how to calibrate an\nuncalibrated classifier.\n\nThe experiment is performed on an artificial dataset for binary classification\nwith 100,000 samples (1,000 of them are used for model fitting) with 20\nfeatures. Of the 20 features, only 2 are informative and 10 are redundant. The\nfirst figure shows the estimated probabilities obtained with logistic\nregression, Gaussian naive Bayes, and Gaussian naive Bayes with both isotonic\ncalibration and sigmoid calibration. The calibration performance is evaluated\nwith Brier score, reported in the legend (the smaller the better). One can\nobserve here that logistic regression is well calibrated while raw Gaussian\nnaive Bayes performs very badly. This is because of the redundant features\nwhich violate the assumption of feature-independence and result in an overly\nconfident classifier, which is indicated by the typical transposed-sigmoid\ncurve.\n\nCalibration of the probabilities of Gaussian naive Bayes with isotonic\nregression can fix this issue as can be seen from the nearly diagonal\ncalibration curve. Sigmoid calibration also improves the brier score slightly,\nalbeit not as strongly as the non-parametric isotonic regression. This can be\nattributed to the fact that we have plenty of calibration data such that the\ngreater flexibility of the non-parametric model can be exploited.\n\nThe second figure shows the calibration curve of a linear support-vector\nclassifier (LinearSVC). LinearSVC shows the opposite behavior as Gaussian\nnaive Bayes: the calibration curve has a sigmoid curve, which is typical for\nan under-confident classifier. 
In the case of LinearSVC, this is caused by the\nmargin property of the hinge loss, which lets the model focus on hard samples\nthat are close to the decision boundary (the support vectors).\n\nBoth kinds of calibration can fix this issue and yield nearly identical\nresults. This shows that sigmoid calibration can deal with situations where\nthe calibration curve of the base classifier is sigmoid (e.g., for LinearSVC)\nbut not where it is transposed-sigmoid (e.g., Gaussian naive Bayes).\n"
]
},
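{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Brier score used to evaluate calibration below is simply the mean squared difference between the predicted probability and the actual binary outcome (smaller is better). A minimal sketch with made-up labels and probabilities (illustrative values only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom sklearn.metrics import brier_score_loss\n\n# Hypothetical labels and predicted probabilities (for illustration only)\ny_true = np.array([0, 1, 1, 0])\ny_prob = np.array([0.1, 0.9, 0.8, 0.3])\n\n# The Brier score is the mean squared error between the predicted\n# probabilities and the binary outcomes\nmanual = np.mean((y_prob - y_true) ** 2)\nprint(manual, brier_score_loss(y_true, y_prob))"
]
},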
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(__doc__)\n\n# Author: Alexandre Gramfort \n# Jan Hendrik Metzen \n# License: BSD Style.\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn import datasets\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import LinearSVC\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import (brier_score_loss, precision_score, recall_score,\n f1_score)\nfrom sklearn.calibration import CalibratedClassifierCV, calibration_curve\nfrom sklearn.model_selection import train_test_split\n\n\n# Create dataset of classification task with many redundant and few\n# informative features\nX, y = datasets.make_classification(n_samples=100000, n_features=20,\n n_informative=2, n_redundant=10,\n random_state=42)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.99,\n random_state=42)\n\n\ndef plot_calibration_curve(est, name, fig_index):\n \"\"\"Plot calibration curve for est w/o and with calibration. \"\"\"\n # Calibrated with isotonic calibration\n isotonic = CalibratedClassifierCV(est, cv=2, method='isotonic')\n\n # Calibrated with sigmoid calibration\n sigmoid = CalibratedClassifierCV(est, cv=2, method='sigmoid')\n\n # Logistic regression with no calibration as baseline\n lr = LogisticRegression(C=1.)\n\n fig = plt.figure(fig_index, figsize=(10, 10))\n ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\n ax2 = plt.subplot2grid((3, 1), (2, 0))\n\n ax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\n for clf, name in [(lr, 'Logistic'),\n (est, name),\n (isotonic, name + ' + Isotonic'),\n (sigmoid, name + ' + Sigmoid')]:\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n if hasattr(clf, \"predict_proba\"):\n prob_pos = clf.predict_proba(X_test)[:, 1]\n else: # use decision function\n prob_pos = clf.decision_function(X_test)\n prob_pos = \\\n (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())\n\n clf_score = brier_score_loss(y_test, prob_pos, pos_label=y.max())\n print(\"%s:\" % name)\n 
print(\"\\tBrier: %1.3f\" % (clf_score))\n print(\"\\tPrecision: %1.3f\" % precision_score(y_test, y_pred))\n print(\"\\tRecall: %1.3f\" % recall_score(y_test, y_pred))\n print(\"\\tF1: %1.3f\\n\" % f1_score(y_test, y_pred))\n\n fraction_of_positives, mean_predicted_value = \\\n calibration_curve(y_test, prob_pos, n_bins=10)\n\n ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\",\n label=\"%s (%1.3f)\" % (name, clf_score))\n\n ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,\n histtype=\"step\", lw=2)\n\n ax1.set_ylabel(\"Fraction of positives\")\n ax1.set_ylim([-0.05, 1.05])\n ax1.legend(loc=\"lower right\")\n ax1.set_title('Calibration plots (reliability curve)')\n\n ax2.set_xlabel(\"Mean predicted value\")\n ax2.set_ylabel(\"Count\")\n ax2.legend(loc=\"upper center\", ncol=2)\n\n plt.tight_layout()\n\n# Plot calibration curve for Gaussian Naive Bayes\nplot_calibration_curve(GaussianNB(), \"Naive Bayes\", 1)\n\n# Plot calibration curve for Linear SVC\nplot_calibration_curve(LinearSVC(max_iter=10000), \"SVC\", 2)\n\nplt.show()"
]
}
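,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, `calibration_curve` partitions the predicted probabilities into equal-width bins and, for each bin, returns the fraction of positive samples alongside the mean predicted probability. A minimal sketch on made-up values (illustrative only, using `n_bins=2`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom sklearn.calibration import calibration_curve\n\n# Hypothetical labels and predicted probabilities (for illustration only)\ny_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])\ny_prob = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])\n\n# Two equal-width bins: [0, 0.5) and [0.5, 1]\nfrac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=2)\nprint(frac_pos)   # fraction of positives per bin\nprint(mean_pred)  # mean predicted probability per bin"
]
}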
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 0
}