diff --git a/doc/usage/index.rst b/doc/usage/index.rst
index de0ab1f287ed4ec39cf11c97d8389867aaa83d52..9e5090b78231f444bf02a8e6ec32003f5607aa41 100644
--- a/doc/usage/index.rst
+++ b/doc/usage/index.rst
@@ -84,6 +84,7 @@ Commands
   experiment
   training
   evaluation
+  saliency
 
 
 .. include:: ../links.rst
diff --git a/doc/usage/saliency.rst b/doc/usage/saliency.rst
new file mode 100644
index 0000000000000000000000000000000000000000..8eb3723c0f162ac453a8f331ed33c638ab7f44cc
--- /dev/null
+++ b/doc/usage/saliency.rst
@@ -0,0 +1,110 @@
+.. Copyright © 2023 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.usage.saliency:
+
+==========
+ Saliency
+==========
+
+A saliency map highlights areas of interest within an image. In the context of TB detection, these are the locations in a chest X-ray image where signs of tuberculosis are present.
+
+This package provides scripts that can generate saliency maps and compute relevant metrics for interpretability purposes.
+
+Some of the scripts require the use of a database with human-annotated saliency information.
+
+Generation
+----------
+
+Saliency maps can be generated with the :ref:`saliency generate command <mednet.cli>`.
+They are represented as numpy arrays of the same size as the input images, with values in the range [0, 1], and are saved as ``.npy`` files.
+
+Several mapping algorithms are available; the one to use can be selected with the ``-s`` option.
+
+Examples
+========
+
+Generates saliency maps for all prediction dataloaders of a datamodule,
+using a pre-trained pasa model, and saves them as ``.npy`` files
+in the output directory:
+
+.. code:: sh
+
+   mednet saliency generate -vv pasa tbx11k-v1-healthy-vs-atb --weight=path/to/model-at-lowest-validation-loss.ckpt --output-folder=path/to/output
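+
+The exact file layout under the output folder depends on the datamodule. As a
+minimal sketch, assuming one ``.npy`` file per input image (the file name below
+is hypothetical), a generated map can be inspected directly with numpy:
+
+.. code:: python
+
+   import numpy as np
+
+   # Load one generated saliency map (hypothetical file name)
+   saliency = np.load("path/to/output/image-0001.npy")
+
+   # Values lie in [0, 1]; higher values mean more salient pixels
+   print(saliency.shape, saliency.min(), saliency.max())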
+
+Viewing
+-------
+
+To overlay saliency maps over the original images, use the :ref:`saliency view command <mednet.cli>`.
+Results are saved as PNG images in which brighter pixels correspond to areas with higher saliency.
+
+Examples
+========
+
+Generates visualizations in the form of heatmaps from existing saliency maps for a dataset configuration:
+
+.. code:: sh
+
+    # input-folder is the location of the saliency maps created with `mednet saliency generate`
+    mednet saliency view -vv pasa tbx11k-v1-healthy-vs-atb --input-folder=parent_folder/gradcam/ --output-folder=path/to/visualizations
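+
+For illustration only, a similar overlay can be produced by hand. This is a
+minimal sketch, not the package's implementation, and the file names are
+hypothetical:
+
+.. code:: python
+
+    import matplotlib.pyplot as plt
+    import numpy as np
+    from PIL import Image
+
+    image = np.asarray(Image.open("path/to/image-0001.png").convert("L"))
+    saliency = np.load("parent_folder/gradcam/image-0001.npy")
+
+    # Draw the X-ray in grayscale, then blend the saliency heatmap on top
+    plt.imshow(image, cmap="gray")
+    plt.imshow(saliency, cmap="jet", alpha=0.4)  # brighter = more salient
+    plt.axis("off")
+    plt.savefig("overlay-0001.png", bbox_inches="tight")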
+
+
+Interpretability
+----------------
+
+Given a target label, the interpretability step computes the proportional energy and average saliency focus in a datamodule.
+
+The proportional energy is the ratio between the sum of saliency values lying within the ground-truth bounding boxes and the total sum of saliency values over the whole map.
+The average saliency focus is the sum of the values of the saliency map over the ground-truth bounding boxes, normalized by the total area covered by all ground-truth bounding boxes.
+
+This requires a datamodule containing human-annotated bounding boxes.
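+
+As a minimal sketch of the two metrics (not the package's internal
+implementation), given a saliency map and a boolean mask marking the
+ground-truth bounding boxes:
+
+.. code:: python
+
+    import numpy as np
+
+    def proportional_energy(saliency: np.ndarray, mask: np.ndarray) -> float:
+        """Fraction of the total activation inside the ground-truth boxes."""
+        return float(saliency[mask].sum() / saliency.sum())
+
+    def average_saliency_focus(saliency: np.ndarray, mask: np.ndarray) -> float:
+        """Activation inside the boxes, normalized by their total area.
+
+        Assumes at least one annotated (True) pixel in the mask.
+        """
+        return float(saliency[mask].sum() / mask.sum())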
+
+Examples
+========
+
+Evaluate the generated saliency maps for their localization performance:
+
+.. code:: sh
+
+    mednet saliency interpretability -vv tbx11k-v1-healthy-vs-atb --input-folder=parent-folder/saliencies/ --output-json=path/to/interpretability-scores.json
+
+
+Completeness
+------------
+
+The saliency completeness script computes ROAD scores of saliency maps and saves them in a JSON file.
+
+The ROAD algorithm estimates the explainability (in the completeness sense) of saliency maps by substituting
+relevant pixels in the input image by a local average, re-running prediction on the altered image,
+and measuring changes in the output classification score when said perturbations are in place.
+By substituting most or least relevant pixels with surrounding averages, the ROAD algorithm estimates
+the importance of such elements in the produced saliency map.
+
+More information can be found in [ROAD-2022]_.
+
+This step re-runs predictions on perturbed inputs, and therefore requires the trained model weights used during saliency map generation (passed via ``--weight`` in the example below).
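+
+As an illustration of the perturbation idea only (ROAD proper uses noisy
+linear imputation; this sketch substitutes a plain local average instead, and
+assumes a 2-D grayscale image the same size as the saliency map):
+
+.. code:: python
+
+    import numpy as np
+    from scipy.ndimage import uniform_filter
+
+    def perturb_most_relevant(image, saliency, fraction=0.2):
+        """Replace the top `fraction` most salient pixels by a local average."""
+        threshold = np.quantile(saliency, 1.0 - fraction)
+        mask = saliency >= threshold
+        blurred = uniform_filter(image.astype(float), size=9)
+        perturbed = image.astype(float).copy()
+        perturbed[mask] = blurred[mask]
+        return perturbed
+
+    # The drop in classification score after removing the *most* relevant
+    # pixels, compared to removing the *least* relevant ones, indicates how
+    # faithful (complete) the saliency map is.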
+
+Examples
+========
+
+Calculates the ROAD scores for an existing dataset configuration and stores them in a JSON file:
+
+.. code:: sh
+
+    mednet saliency completeness -vv pasa tbx11k-v1-healthy-vs-atb --device="cuda:0" --weight=path/to/model-at-lowest-validation-loss.ckpt --output-json=path/to/completeness-scores.json
+
+
+Evaluation
+----------
+
+The saliency evaluation step generates tables and plots from the results of the interpretability and completeness steps.
+
+Examples
+========
+
+Tabulates and generates plots for two saliency map algorithms:
+
+.. code:: sh
+
+    mednet saliency evaluate -vv -e gradcam path/to/gradcam-completeness.json path/to/gradcam-interpretability.json -e gradcam++ path/to/gradcam++-completeness.json path/to/gradcam++-interpretability.json
+
+.. include:: ../links.rst
diff --git a/src/mednet/engine/saliency/completeness.py b/src/mednet/engine/saliency/completeness.py
index c711c7b3272ddeeda4e645c4d436a4a18376610d..d2b031b27a7d64839bc5c63a19dc6ac29fc04f88 100644
--- a/src/mednet/engine/saliency/completeness.py
+++ b/src/mednet/engine/saliency/completeness.py
@@ -213,7 +213,7 @@ def run(
     classification score when said perturbations are in place.  By substituting
     most or least relevant pixels with surrounding averages, the ROAD algorithm
     estimates the importance of such elements in the produced saliency map.  As
-    2023, this measurement technique is considered to be one of the
+    of 2023, this measurement technique is considered to be one of the
     state-of-the-art metrics of explainability.
 
     This function returns a dictionary containing most-relevant-first (remove a
diff --git a/src/mednet/scripts/saliency/view.py b/src/mednet/scripts/saliency/view.py
index a583637d5bd1e29e473087b50864fb3c997a71fd..56f31fad9123698cd4423bc35f6e1960dc4a0379 100644
--- a/src/mednet/scripts/saliency/view.py
+++ b/src/mednet/scripts/saliency/view.py
@@ -56,7 +56,7 @@ logger = setup(__name__.split(".")[0], format="%(levelname)s: %(message)s")
 @click.option(
     "--output-folder",
     "-o",
-    help="Path where to store the ROAD scores (created if does not exist)",
+    help="Path where to store the visualizations (created if does not exist)",
     required=True,
     type=click.Path(
         file_okay=False,