MNIST Classification#

In this notebook we show how to use Fortuna to obtain calibrated uncertainty estimates of predictions in an MNIST classification task, starting from scratch. The last section of this example shows how this could have been done starting directly from the outputs of a pre-trained model.

Download MNIST data from TensorFlow#

Let us first download the MNIST data from TensorFlow Datasets. Other sources would work just as well.

[1]:
import tensorflow as tf
import tensorflow_datasets as tfds


def download(split_range, shuffle=False):
    ds = tfds.load(
        name="MNIST",
        split=f"train[{split_range}]",
        as_supervised=True,
        shuffle_files=True,
    ).map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
    if shuffle:
        ds = ds.shuffle(10, reshuffle_each_iteration=True)
    return ds.batch(128).prefetch(1)


train_data_loader, val_data_loader, test_data_loader = (
    download(":80%", shuffle=True),
    download("80%:90%"),
    download("90%:"),
)
Downloading and preparing dataset 11.06 MiB (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB) to /home/docs/tensorflow_datasets/mnist/3.0.1...
Dl Completed...: 100%|██████████| 5/5 [00:00<00:00,  7.02 file/s]
Dataset mnist downloaded and prepared to /home/docs/tensorflow_datasets/mnist/3.0.1. Subsequent calls will reuse this data.
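
If you want to sanity-check what these loaders yield before moving on, you can inspect a single batch with plain TensorFlow dataset iteration; the shapes below follow from the preprocessing in the cell above (28x28 grayscale images scaled to [0, 1], batches of 128):

# Peek at one batch to confirm shapes and dtypes.
images, labels = next(iter(train_data_loader))
print(images.shape, images.dtype)  # expected: (128, 28, 28, 1) float32
print(labels.shape, labels.dtype)  # expected: (128,) int64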

Convert data to a compatible data loader#

Fortuna helps you convert data and data loaders into a data loader that Fortuna can digest.

[2]:
from fortuna.data import DataLoader

train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
val_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)
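
DataLoader also provides constructors for other data sources. For instance, if your data were NumPy arrays, a sketch along the following lines should work; the from_array_data constructor and its keyword arguments are assumed here from other Fortuna examples:

import numpy as np

# Hypothetical arrays standing in for your own dataset.
images = np.random.rand(1000, 28, 28, 1).astype("float32")
labels = np.random.randint(0, 10, size=1000)

array_data_loader = DataLoader.from_array_data(
    (images, labels), batch_size=128, shuffle=True, prefetch=True
)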

Build a probabilistic classifier#

Let us build a probabilistic classifier. This is an interface object containing several attributes that you can configure, namely model, prior, posterior_approximator and output_calibrator. In this example, we use a LeNet5 model, a Laplace posterior approximator acting on the last layer of the model, and the default temperature scaling output calibrator.

[3]:
from fortuna.prob_model import ProbClassifier, LaplacePosteriorApproximator
from fortuna.model import LeNet5

output_dim = 10
prob_model = ProbClassifier(
    model=LeNet5(output_dim=output_dim),
    posterior_approximator=LaplacePosteriorApproximator(),
)
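The same interface makes it easy to swap components without changing the rest of the workflow. As a sketch, assuming the ADVIPosteriorApproximator exported by fortuna.prob_model (as in other Fortuna examples), an alternative configuration could look like this:

from fortuna.prob_model import ADVIPosteriorApproximator

# Same LeNet5 backbone, but with a mean-field variational (ADVI) posterior approximator.
advi_prob_model = ProbClassifier(
    model=LeNet5(output_dim=output_dim),
    posterior_approximator=ADVIPosteriorApproximator(),
)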

Train the probabilistic model: posterior fitting and calibration#

We can now train the probabilistic model. This includes fitting the posterior distribution and calibrating the probabilistic model. Since the Laplace approximation starts from a Maximum-A-Posteriori (MAP) estimate, we configure the MAP run via the argument map_fit_config. In the fit configuration, freeze_fun marks only the parameters of the output subnetwork as trainable, so that the Laplace approximation acts on the last layer only.

[4]:
from fortuna.prob_model import (
    FitConfig,
    FitMonitor,
    FitOptimizer,
    CalibConfig,
    CalibMonitor,
)
from fortuna.metric.classification import accuracy

status = prob_model.train(
    train_data_loader=train_data_loader,
    val_data_loader=val_data_loader,
    calib_data_loader=val_data_loader,
    fit_config=FitConfig(
        optimizer=FitOptimizer(
            freeze_fun=lambda path, val: "trainable"
            if "output_subnet" in path
            else "frozen"
        )
    ),
    map_fit_config=FitConfig(
        monitor=FitMonitor(early_stopping_patience=2, metrics=(accuracy,)),
        optimizer=FitOptimizer(),
    ),
    calib_config=CalibConfig(monitor=CalibMonitor(early_stopping_patience=2)),
)
Epoch: 8 | loss: 41455.64453 | accuracy: 1.0:   7%|▋         | 7/100 [05:11<1:08:53, 44.44s/it]
Tuning prior log-var: 100%|██████████| 21/21 [00:41<00:00,  1.96s/it]
Epoch: 31 | loss: 138.72693:  30%|███       | 30/100 [00:15<00:35,  1.99it/s]

Estimate predictive statistics#

We can now compute some predictive statistics by invoking the predictive attribute of the probabilistic classifier and the method of interest. Most predictive statistics, e.g. mean or mode, require a loader of input data points. You can easily get this from the data loader by calling its method to_inputs_loader.

[5]:
test_log_probs = prob_model.predictive.log_prob(data_loader=test_data_loader)
test_inputs_loader = test_data_loader.to_inputs_loader()
test_means = prob_model.predictive.mean(inputs_loader=test_inputs_loader)
test_modes = prob_model.predictive.mode(
    inputs_loader=test_inputs_loader, means=test_means
)
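
A simple scalar summary of uncertainty can be derived directly from these statistics. For example, the entropy of the predictive mean, computed here with plain NumPy (an epsilon is added for numerical stability):

import numpy as np

# Entropy of the predictive mean: higher values indicate more uncertain predictions.
probs = np.array(test_means)
test_entropies = -np.sum(probs * np.log(probs + 1e-12), axis=1)
print(f"Mean predictive entropy: {test_entropies.mean()}")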

Compute metrics#

In classification, the predictive mode is a prediction for labels, while the predictive mean is a prediction for the probability of each label. As such, we can use these to compute several metrics, e.g. the accuracy, the Brier score, the expected calibration error (ECE), etc.

[6]:
from fortuna.metric.classification import (
    accuracy,
    expected_calibration_error,
    brier_score,
)

test_targets = test_data_loader.to_array_targets()
acc = accuracy(preds=test_modes, targets=test_targets)
brier = brier_score(probs=test_means, targets=test_targets)
ece = expected_calibration_error(
    preds=test_modes,
    probs=test_means,
    targets=test_targets,
    plot=True,
    plot_options=dict(figsize=(10, 2)),
)
print(f"Test accuracy: {acc}")
print(f"Brier score: {brier}")
print(f"ECE: {ece}")
Test accuracy: 0.9828333258628845
Brier score: 0.026212124153971672
ECE: 0.0021611435804516077
[Figure: calibration plot produced by expected_calibration_error with plot=True]

Conformal prediction sets#

Fortuna allows you to produce conformal prediction sets, i.e. sets of likely labels up to some coverage probability threshold. These can be computed starting from probability estimates obtained with or without Fortuna.

[7]:
from fortuna.conformal import AdaptivePredictionConformalClassifier

val_means = prob_model.predictive.mean(inputs_loader=val_data_loader.to_inputs_loader())
conformal_sets = AdaptivePredictionConformalClassifier().conformal_set(
    val_probs=val_means,
    test_probs=test_means,
    val_targets=val_data_loader.to_array_targets(),
    error=0.05,
)

We can check whether, on average, conformal sets for misclassified inputs are larger than for well classified ones, which would confirm the intuition that the model should be more uncertain when it is wrong.

[8]:
import numpy as np

avg_size = np.mean([len(s) for s in np.array(conformal_sets, dtype="object")])
avg_size_wellclassified = np.mean(
    [
        len(s)
        for s in np.array(conformal_sets, dtype="object")[test_modes == test_targets]
    ]
)
avg_size_misclassified = np.mean(
    [
        len(s)
        for s in np.array(conformal_sets, dtype="object")[test_modes != test_targets]
    ]
)
print(f"Average conformal set size: {avg_size}")
print(
    f"Average conformal set size over well classified input: {avg_size_wellclassified}"
)
print(f"Average conformal set size over misclassified input: {avg_size_misclassified}")
Average conformal set size: 9.9265
Average conformal set size over well classified input: 9.947091741563506
Average conformal set size over misclassified input: 8.74757281553398
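
We can also verify the marginal coverage empirically: with error=0.05, the true label should fall inside its conformal set for roughly at least 95% of test inputs. A quick check using the arrays computed above:

# Fraction of test points whose true label is contained in its conformal set.
coverage = np.mean(
    [test_targets[i] in conformal_sets[i] for i in range(len(test_targets))]
)
print(f"Empirical coverage: {coverage}")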

What if we have model outputs to start from?#

If you have already trained an MNIST model and obtained model outputs, you can still use Fortuna to calibrate them and estimate uncertainty. For educational purposes only, let us take the logarithm of the predictive mean estimated above as model outputs, and pretend these were generated with some other framework. Furthermore, we store arrays of validation and test target variables, and assume these were also given.

[9]:
import numpy as np

calib_outputs = np.log(val_means)
test_outputs = np.log(test_means)

calib_targets = val_data_loader.to_array_targets()
test_targets = test_data_loader.to_array_targets()

We now invoke a calibration classifier, with the default temperature scaling output calibrator, and calibrate the model outputs.

[10]:
from fortuna.output_calib_model import OutputCalibClassifier

calib_model = OutputCalibClassifier()
calib_status = calib_model.calibrate(
    calib_outputs=calib_outputs, calib_targets=calib_targets
)
Epoch: 100 | loss: 0.02538: 100%|██████████| 100/100 [00:00<00:00, 180.01it/s]

Similarly to above, we can now compute predictive statistics.

[11]:
test_log_probs = calib_model.predictive.log_prob(
    outputs=test_outputs, targets=test_targets
)
test_means = calib_model.predictive.mean(outputs=test_outputs)
test_modes = calib_model.predictive.mode(outputs=test_outputs)

Then one can compute metrics and conformal sets, exactly as done above.
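
As a sketch of that last step, reusing the metric functions and the conformal method already imported earlier in this notebook, applied to the calibrated statistics:

# Metrics on the calibrated predictions.
acc = accuracy(preds=test_modes, targets=test_targets)
ece = expected_calibration_error(preds=test_modes, probs=test_means, targets=test_targets)
print(f"Test accuracy: {acc}")
print(f"ECE: {ece}")

# Conformal sets from the calibrated probabilities, using the validation outputs for calibration.
calib_means = calib_model.predictive.mean(outputs=calib_outputs)
conformal_sets = AdaptivePredictionConformalClassifier().conformal_set(
    val_probs=calib_means,
    test_probs=test_means,
    val_targets=calib_targets,
    error=0.05,
)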