Commit c726eab3 authored by Daniel CARRON
[doc] Add documentation on saliency commands

parent 871b2ddc
Merge request !15: Update documentation
@@ -84,6 +84,7 @@ Commands
    experiment
    training
    evaluation
+   saliency

 .. include:: ../links.rst
.. Copyright © 2023 Idiap Research Institute <contact@idiap.ch>
..
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _mednet.usage.saliency:
==========
Saliency
==========
A saliency map highlights areas of interest within an image. In the context of TB detection, this would be the locations in a chest X-ray image where tuberculosis is present.
This package provides scripts that can generate saliency maps and compute relevant metrics for interpretability purposes.
Some of the scripts require the use of a database with human-annotated saliency information.
Generation
----------
Saliency maps can be generated with the :ref:`saliency generate command <mednet.cli>`.
They are represented as numpy arrays of the same size as the images, with values in the range [0, 1], and are saved as ``.npy`` files.
Several mapping algorithms are available to choose from; the algorithm can be selected with the ``-s`` option.
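Since the generated maps are plain ``.npy`` arrays, they can be inspected with numpy alone. The sketch below creates a synthetic stand-in for one generated map (the file name and image size are made up for illustration) and reloads it the way a downstream script would:

```python
import numpy as np

# Synthetic stand-in for a generated saliency map: a 2D array matching
# the input image size, with values normalized to [0, 1].
saliency = np.random.rand(512, 512).astype(np.float32)
np.save("sample-saliency.npy", saliency)  # hypothetical file name

# Maps written by `mednet saliency generate` are plain .npy arrays and
# can be reloaded the same way:
loaded = np.load("sample-saliency.npy")
assert loaded.shape == (512, 512)
assert loaded.min() >= 0.0 and loaded.max() <= 1.0
```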
Examples
========
Generates saliency maps for all prediction dataloaders on a datamodule,
using a pre-trained pasa model, and saves them as numpy-pickled
objects in the output directory:

.. code:: sh

   mednet saliency generate -vv pasa tbx11k-v1-healthy-vs-atb --weight=path/to/model-at-lowest-validation-loss.ckpt --output-folder=path/to/output
Viewing
-------
To overlay saliency maps over the original images, use the :ref:`saliency view command <mednet.cli>`.
Results are saved as PNG images in which brighter pixels correspond to areas with higher saliency.
Examples
========
Generates visualizations in the form of heatmaps from existing saliency maps for a dataset configuration:

.. code:: sh

   # --input-folder is the location of the saliency maps created with `mednet saliency generate`
   mednet saliency view -vv pasa tbx11k-v1-healthy-vs-atb --input-folder=parent_folder/gradcam/ --output-folder=path/to/visualizations
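The ``view`` command produces the overlays for you; for intuition, the sketch below shows one simple way such a blend can be computed with numpy. Function name, the red-channel colormap, and the ``alpha`` default are illustrative choices of ours, not the package's actual implementation:

```python
import numpy as np

def overlay_heatmap(image: np.ndarray, saliency: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Blend a grayscale image (values in [0, 1]) with a red heatmap
    built from a saliency map of the same shape."""
    heatmap = np.zeros(image.shape + (3,))
    heatmap[..., 0] = saliency  # red channel carries the saliency
    rgb = np.repeat(image[..., None], 3, axis=-1)
    return (1.0 - alpha) * rgb + alpha * heatmap

image = np.full((4, 4), 0.2)
saliency = np.zeros((4, 4))
saliency[1:3, 1:3] = 1.0  # a salient region in the middle
blend = overlay_heatmap(image, saliency)
# salient pixels come out brighter (in the red channel) than the rest
assert blend[1, 1, 0] > blend[0, 0, 0]
```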
Interpretability
----------------
Given a target label, the interpretability step computes the proportional energy and average saliency focus in a datamodule.
The proportional energy is defined as the quantity of activation that lies within the ground-truth boxes, compared to the total sum of the activations.
The average saliency focus is the sum of the values of the saliency map over the ground-truth bounding boxes, normalized by the total area covered by all ground-truth bounding boxes.
This requires a datamodule containing human-annotated bounding boxes.
Examples
========
Evaluate the generated saliency maps for their localization performance:
.. code:: sh

   mednet saliency interpretability -vv tbx11k-v1-healthy-vs-atb --input-folder=parent-folder/saliencies/ --output-json=path/to/interpretability-scores.json
Completeness
------------
The saliency completeness script computes ROAD scores of saliency maps and saves them in a JSON file.
The ROAD algorithm estimates the explainability (in the completeness sense) of saliency maps by substituting
relevant pixels in the input image by a local average, re-running prediction on the altered image,
and measuring changes in the output classification score when said perturbations are in place.
By substituting most or least relevant pixels with surrounding averages, the ROAD algorithm estimates
the importance of such elements in the produced saliency map.
More information can be found in [ROAD-2022]_.
This requires a datamodule containing human-annotated bounding boxes.
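The perturbation step at the heart of ROAD can be sketched as below. Note this is a deliberate simplification: ROAD proper imputes each removed pixel from its neighbours (noisy linear imputation), whereas this sketch substitutes the global image mean; the function name and fraction are ours for illustration:

```python
import numpy as np

def perturb_most_relevant(image: np.ndarray, saliency: np.ndarray,
                          fraction: float = 0.2) -> np.ndarray:
    """Replace the `fraction` most salient pixels by the image mean.

    ROAD proper imputes removed pixels from their neighbours (noisy
    linear imputation); the global mean is a simplification."""
    k = max(1, int(fraction * saliency.size))
    threshold = np.sort(saliency.ravel())[-k]
    perturbed = image.copy()
    perturbed[saliency >= threshold] = image.mean()
    return perturbed

image = np.arange(16, dtype=float).reshape(4, 4)
saliency = image / image.max()  # pretend high values are most salient
perturbed = perturb_most_relevant(image, saliency, fraction=0.25)
# the 4 most salient pixels (values 12..15) were replaced by the mean
assert np.all(perturbed[3] == image.mean())
```

The completeness score then measures how much the classifier's output drops between the original and perturbed inputs: a faithful saliency map should cause a large drop when its most relevant pixels are removed.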
Examples
========
Calculates the ROAD scores for an existing dataset configuration and stores them in a JSON file:

.. code:: sh

   mednet saliency completeness -vv pasa tbx11k-v1-healthy-vs-atb --device="cuda:0" --weight=path/to/model-at-lowest-validation-loss.ckpt --output-json=path/to/completeness-scores.json
Evaluation
----------
The saliency evaluation step generates tables and plots from the results of the interpretability and completeness steps.
Examples
========
Tabulates and generates plots for two saliency map algorithms:

.. code:: sh

   mednet saliency evaluate -vv -e gradcam path/to/gradcam-completeness.json path/to/gradcam-interpretability.json -e gradcam++ path/to/gradcam++-completeness.json path/to/gradcam++-interpretability.json
.. include:: ../links.rst
@@ -213,7 +213,7 @@ def run(
     classification score when said perturbations are in place. By substituting
     most or least relevant pixels with surrounding averages, the ROAD algorithm
     estimates the importance of such elements in the produced saliency map. As
-    2023, this measurement technique is considered to be one of the
+    of 2023, this measurement technique is considered to be one of the
     state-of-the-art metrics of explainability.

     This function returns a dictionary containing most-relevant-first (remove a
 ...
@@ -56,7 +56,7 @@ logger = setup(__name__.split(".")[0], format="%(levelname)s: %(message)s")
 @click.option(
     "--output-folder",
     "-o",
-    help="Path where to store the ROAD scores (created if does not exist)",
+    help="Path where to store the visualizations (created if does not exist)",
     required=True,
     type=click.Path(
         file_okay=False,
 ...