.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/calibration/plot_calibration_multiclass.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_calibration_plot_calibration_multiclass.py>`
        to download the full example code or to run this example in your browser via JupyterLite or Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_calibration_plot_calibration_multiclass.py:


==================================================
Probability Calibration for 3-class classification
==================================================

This example illustrates how sigmoid :ref:`calibration <calibration>` changes
predicted probabilities for a 3-class classification problem. Illustrated is
the standard 2-simplex, where the three corners correspond to the three
classes. Arrows point from the probability vectors predicted by an uncalibrated
classifier to the probability vectors predicted by the same classifier after
sigmoid calibration on a hold-out validation set. Colors indicate the true
class of an instance (red: class 1, green: class 2, blue: class 3).
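
Since the three class probabilities sum to one, a probability vector
:math:`(p_1, p_2, p_3)` has only two degrees of freedom, which is what makes
the planar simplex plot possible: each vector is drawn at the point
:math:`(x, y) = (p_1, p_2)`, with :math:`p_3` implicit,

.. math::

   \Delta^2 = \{(p_1, p_2, p_3) \,:\, p_i \ge 0,\; p_1 + p_2 + p_3 = 1\},
   \qquad p_3 = 1 - p_1 - p_2.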

.. GENERATED FROM PYTHON SOURCE LINES 17-29

Data
----
Below, we generate a classification dataset with 2000 samples, 2 features
and 3 target classes. We then split the data as follows:

* train: 600 samples (for training the classifier)
* valid: 400 samples (for calibrating predicted probabilities)
* test: 1000 samples (for evaluation)

Note that we also create `X_train_valid` and `y_train_valid`, which consist
of the train and valid subsets concatenated. These are used when we want to
train the classifier without calibrating the predicted probabilities.

.. GENERATED FROM PYTHON SOURCE LINES 29-47

.. code-block:: default


    # Author: Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
    # License: BSD Style.

    import numpy as np

    from sklearn.datasets import make_blobs

    np.random.seed(0)

    X, y = make_blobs(
        n_samples=2000, n_features=2, centers=3, random_state=42, cluster_std=5.0
    )
    X_train, y_train = X[:600], y[:600]
    X_valid, y_valid = X[600:1000], y[600:1000]
    X_train_valid, y_train_valid = X[:1000], y[:1000]
    X_test, y_test = X[1000:], y[1000:]








.. GENERATED FROM PYTHON SOURCE LINES 48-54

Fitting and calibration
-----------------------

First, we will train a :class:`~sklearn.ensemble.RandomForestClassifier`
with 25 base estimators (trees) on the concatenated train and validation
data (1000 samples). This is the uncalibrated classifier.

.. GENERATED FROM PYTHON SOURCE LINES 54-60

.. code-block:: default


    from sklearn.ensemble import RandomForestClassifier

    clf = RandomForestClassifier(n_estimators=25)
    clf.fit(X_train_valid, y_train_valid)






.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    RandomForestClassifier(n_estimators=25)

.. GENERATED FROM PYTHON SOURCE LINES 61-65

To train the calibrated classifier, we start with the same
:class:`~sklearn.ensemble.RandomForestClassifier` but train it using only
the train data subset (600 samples), then calibrate with `method='sigmoid'`
using the valid data subset (400 samples). This is a 2-stage process.
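
Internally, for a multiclass problem,
:class:`~sklearn.calibration.CalibratedClassifierCV` fits one calibrator per
class in a one-vs-rest fashion and then renormalizes the calibrated
probabilities (the renormalization step is shown explicitly in the last code
block of this example). As a rough sketch, with `method='sigmoid'` each
per-class calibrator learns scalars :math:`a_c` and :math:`b_c` on the
hold-out set and maps the uncalibrated probability :math:`f_c(x)` of class
:math:`c` to

.. math::

   p_{\mathrm{cal}}(c \mid x) \propto \frac{1}{1 + \exp\bigl(a_c f_c(x) + b_c\bigr)}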

.. GENERATED FROM PYTHON SOURCE LINES 65-73

.. code-block:: default


    from sklearn.calibration import CalibratedClassifierCV

    clf = RandomForestClassifier(n_estimators=25)
    clf.fit(X_train, y_train)
    cal_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
    cal_clf.fit(X_valid, y_valid)






.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    CalibratedClassifierCV(cv='prefit',
                           estimator=RandomForestClassifier(n_estimators=25))

.. GENERATED FROM PYTHON SOURCE LINES 74-78

Compare probabilities
---------------------
Below we plot a 2-simplex with arrows showing the change in predicted
probabilities of the test samples.

.. GENERATED FROM PYTHON SOURCE LINES 78-184

.. code-block:: default


    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 10))
    colors = ["r", "g", "b"]

    clf_probs = clf.predict_proba(X_test)
    cal_clf_probs = cal_clf.predict_proba(X_test)
    # Plot arrows
    for i in range(clf_probs.shape[0]):
        plt.arrow(
            clf_probs[i, 0],
            clf_probs[i, 1],
            cal_clf_probs[i, 0] - clf_probs[i, 0],
            cal_clf_probs[i, 1] - clf_probs[i, 1],
            color=colors[y_test[i]],
            head_width=1e-2,
        )

    # Plot perfect predictions, at each vertex
    plt.plot([1.0], [0.0], "ro", ms=20, label="Class 1")
    plt.plot([0.0], [1.0], "go", ms=20, label="Class 2")
    plt.plot([0.0], [0.0], "bo", ms=20, label="Class 3")

    # Plot boundaries of unit simplex
    plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], "k", label="Simplex")

    # Annotate 6 points around the simplex, and the midpoint inside the simplex
    plt.annotate(
        r"($\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$)",
        xy=(1.0 / 3, 1.0 / 3),
        xytext=(1.0 / 3, 0.23),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.plot([1.0 / 3], [1.0 / 3], "ko", ms=5)
    plt.annotate(
        r"($\frac{1}{2}$, $0$, $\frac{1}{2}$)",
        xy=(0.5, 0.0),
        xytext=(0.5, 0.1),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.annotate(
        r"($0$, $\frac{1}{2}$, $\frac{1}{2}$)",
        xy=(0.0, 0.5),
        xytext=(0.1, 0.5),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.annotate(
        r"($\frac{1}{2}$, $\frac{1}{2}$, $0$)",
        xy=(0.5, 0.5),
        xytext=(0.6, 0.6),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.annotate(
        r"($0$, $0$, $1$)",
        xy=(0, 0),
        xytext=(0.1, 0.1),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.annotate(
        r"($1$, $0$, $0$)",
        xy=(1, 0),
        xytext=(1, 0.1),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    plt.annotate(
        r"($0$, $1$, $0$)",
        xy=(0, 1),
        xytext=(0.1, 1),
        xycoords="data",
        arrowprops=dict(facecolor="black", shrink=0.05),
        horizontalalignment="center",
        verticalalignment="center",
    )
    # Disable the default grid and draw a custom triangular grid over the simplex
    plt.grid(False)
    for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
        plt.plot([0, x], [x, 0], "k", alpha=0.2)
        plt.plot([0, 0 + (1 - x) / 2], [x, x + (1 - x) / 2], "k", alpha=0.2)
        plt.plot([x, x + (1 - x) / 2], [0, 0 + (1 - x) / 2], "k", alpha=0.2)

    plt.title("Change of predicted probabilities on test samples after sigmoid calibration")
    plt.xlabel("Probability class 1")
    plt.ylabel("Probability class 2")
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)
    _ = plt.legend(loc="best")




.. image-sg:: /auto_examples/calibration/images/sphx_glr_plot_calibration_multiclass_001.png
   :alt: Change of predicted probabilities on test samples after sigmoid calibration
   :srcset: /auto_examples/calibration/images/sphx_glr_plot_calibration_multiclass_001.png
   :class: sphx-glr-single-img





.. GENERATED FROM PYTHON SOURCE LINES 185-211

In the figure above, each vertex of the simplex represents
a perfectly predicted class (e.g., 1, 0, 0). The midpoint
inside the simplex represents predicting the three classes with equal
probability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the
uncalibrated probabilities and ends, with the arrowhead, at the calibrated
probabilities. The color of the arrow indicates the true class of that test
sample.

The uncalibrated classifier is overly confident in its predictions and
incurs a large :ref:`log loss <log_loss>`. The calibrated classifier incurs
a lower :ref:`log loss <log_loss>` due to two factors. First, notice in the
figure above that the arrows generally point away from the edges of the
simplex, where the predicted probability of one class is 0. Second, a large
proportion of the arrows point towards the true class; e.g., green arrows
(samples whose true class is 'green') generally point towards the green
vertex. This results in fewer over-confident predicted probabilities of 0
and, at the same time, an increase in the predicted probability of the
correct class. Thus, the calibrated classifier produces more accurate
probabilities, which incur a lower :ref:`log loss <log_loss>`.
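
As a quick, informal check of the first factor, we can count how often a
class is assigned a near-zero probability before and after calibration (a
sketch reusing the `clf_probs` and `cal_clf_probs` arrays computed above;
the `1e-3` threshold is an arbitrary choice):

.. code-block:: default

    # Fraction of all predicted class probabilities that are (near) zero,
    # i.e. maximally over-confident against a class; threshold is arbitrary.
    threshold = 1e-3
    print(f"near-zero probabilities, uncalibrated: {(clf_probs < threshold).mean():.1%}")
    print(f"near-zero probabilities, calibrated:   {(cal_clf_probs < threshold).mean():.1%}")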

We can show this objectively by comparing the :ref:`log loss <log_loss>` of
the uncalibrated and calibrated classifiers on the predictions of the 1000
test samples. Note that an alternative would have been to increase the number
of base estimators (trees) of the
:class:`~sklearn.ensemble.RandomForestClassifier`, which would have resulted
in a similar decrease in :ref:`log loss <log_loss>`.

.. GENERATED FROM PYTHON SOURCE LINES 211-221

.. code-block:: default


    from sklearn.metrics import log_loss

    score = log_loss(y_test, clf_probs)
    cal_score = log_loss(y_test, cal_clf_probs)

    print("Log-loss of")
    print(f" * uncalibrated classifier: {score:.3f}")
    print(f" * calibrated classifier: {cal_score:.3f}")





.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Log-loss of
     * uncalibrated classifier: 1.327
     * calibrated classifier: 0.549
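
As mentioned above, increasing the number of trees is an alternative way to
reduce the :ref:`log loss <log_loss>`. A minimal sketch (the choices of
`n_estimators=250` and `random_state=0` are arbitrary, and the resulting
score will vary):

.. code-block:: default

    # Hypothetical comparison: a larger, uncalibrated forest trained on the
    # same train + valid data, scored with the same metric.
    big_clf = RandomForestClassifier(n_estimators=250, random_state=0)
    big_clf.fit(X_train_valid, y_train_valid)
    print(f" * larger uncalibrated forest: {log_loss(y_test, big_clf.predict_proba(X_test)):.3f}")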




.. GENERATED FROM PYTHON SOURCE LINES 222-226

Finally, we generate a grid of possible uncalibrated probabilities over the
2-simplex, compute the corresponding calibrated probabilities, and plot
arrows for each. The arrows are colored according to the class with the
highest uncalibrated probability. This illustrates the learned calibration
map:

.. GENERATED FROM PYTHON SOURCE LINES 226-276

.. code-block:: default


    plt.figure(figsize=(10, 10))
    # Generate grid of probability values
    p1d = np.linspace(0, 1, 20)
    p0, p1 = np.meshgrid(p1d, p1d)
    p2 = 1 - p0 - p1
    p = np.c_[p0.ravel(), p1.ravel(), p2.ravel()]
    p = p[p[:, 2] >= 0]

    # Use the three class-wise calibrators to compute calibrated probabilities
    calibrated_classifier = cal_clf.calibrated_classifiers_[0]
    prediction = np.vstack(
        [
            calibrator.predict(this_p)
            for calibrator, this_p in zip(calibrated_classifier.calibrators, p.T)
        ]
    ).T

    # Re-normalize the calibrated predictions to make sure they stay inside the
    # simplex. This same renormalization step is performed internally by the
    # predict method of CalibratedClassifierCV on multiclass problems.
    prediction /= prediction.sum(axis=1)[:, None]

    # Plot changes in predicted probabilities induced by the calibrators
    for i in range(prediction.shape[0]):
        plt.arrow(
            p[i, 0],
            p[i, 1],
            prediction[i, 0] - p[i, 0],
            prediction[i, 1] - p[i, 1],
            head_width=1e-2,
            color=colors[np.argmax(p[i])],
        )

    # Plot the boundaries of the unit simplex
    plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], "k", label="Simplex")

    plt.grid(False)
    for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
        plt.plot([0, x], [x, 0], "k", alpha=0.2)
        plt.plot([0, 0 + (1 - x) / 2], [x, x + (1 - x) / 2], "k", alpha=0.2)
        plt.plot([x, x + (1 - x) / 2], [0, 0 + (1 - x) / 2], "k", alpha=0.2)

    plt.title("Learned sigmoid calibration map")
    plt.xlabel("Probability class 1")
    plt.ylabel("Probability class 2")
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)

    plt.show()



.. image-sg:: /auto_examples/calibration/images/sphx_glr_plot_calibration_multiclass_002.png
   :alt: Learned sigmoid calibration map
   :srcset: /auto_examples/calibration/images/sphx_glr_plot_calibration_multiclass_002.png
   :class: sphx-glr-single-img






.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 1.487 seconds)


.. _sphx_glr_download_auto_examples_calibration_plot_calibration_multiclass.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example


    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/scikit-learn/scikit-learn/1.3.X?urlpath=lab/tree/notebooks/auto_examples/calibration/plot_calibration_multiclass.ipynb
        :alt: Launch binder
        :width: 150 px



    .. container:: lite-badge

      .. image:: images/jupyterlite_badge_logo.svg
        :target: ../../lite/lab/?path=auto_examples/calibration/plot_calibration_multiclass.ipynb
        :alt: Launch JupyterLite
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_calibration_multiclass.py <plot_calibration_multiclass.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_calibration_multiclass.ipynb <plot_calibration_multiclass.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_