Commit 75cf19d0 authored by André Anjos
[doc] Improve documentation

parent 31c86afa
.. Copyright © 2023 Idiap Research Institute <contact@idiap.ch>
..
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _ptbench.usage.evaluation:
==========================
Inference and Evaluation
==========================
This guide explains how to run inference or a complete evaluation using
command-line tools. Inference produces the probability of TB presence for
input images, while evaluation analyzes such output against existing
annotations and produces performance figures.
Inference
---------
In inference (or prediction) mode, we input data and a trained model, and
output a CSV file containing the prediction output for every input image.
To run inference, use the sub-command :ref:`predict <ptbench.cli>` to run
prediction on an existing dataset:

.. code:: sh

   ptbench predict -vv <model> -w <path/to/model.pth> <dataset>

Replace ``<model>`` and ``<dataset>`` with the appropriate :ref:`configuration
files <ptbench.config>`. Replace ``<path/to/model.pth>`` with a path leading to
the pre-trained model.
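For instance, assuming the ``pasa`` model and the ``montgomery`` dataset
configurations, and a checkpoint stored at the illustrative path
``results/model_final.pth``, the call could look like this:

.. code:: sh

   ptbench predict -vv pasa -w results/model_final.pth montgomery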
.. tip::

   An option to generate grad-CAMs is available for the :py:mod:`DensenetRS
   <ptbench.configs.models_datasets.densenet_rs>` model. To activate it, use
   the ``--grad-cams`` argument.
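For example, a prediction run that also generates grad-CAMs with the
``densenet_rs`` model configuration could look like this (the checkpoint path
is, again, illustrative):

.. code:: sh

   ptbench predict -vv densenet_rs --grad-cams \
       -w results/model_final.pth montgomery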
.. tip::

   An option to generate a relevance analysis plot is available. To activate
   it, use the ``--relevance-analysis`` argument.
Evaluation
----------
In evaluation, we input a dataset and predictions to generate performance
summaries that help the analysis of a trained model. Evaluation is done using
the :ref:`evaluate command <ptbench.cli>` followed by the model and the
annotated dataset configuration, and the path to the pretrained weights via
the ``--weight`` argument.

Use ``ptbench evaluate --help`` for more information.

For example, to run evaluation on predictions from the Montgomery set, do the
following:

.. code:: sh

   ptbench evaluate -vv montgomery -p /predictions/folder -o /eval/results/folder
Comparing Systems
-----------------
To compare multiple systems together and generate combined plots and tables,
use the :ref:`compare command <ptbench.cli>`. Use ``--help`` for a quick
guide.
.. code:: sh

   ptbench compare -vv A A/metrics.csv B B/metrics.csv \
       --output-figure=plot.pdf --output-table=table.txt --threshold=0.5
.. include:: ../links.rst
.. Copyright © 2023 Idiap Research Institute <contact@idiap.ch>
..
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _ptbench.usage.predtojson:
========================================
Converting predictions to JSON dataset
========================================
This guide explains how to convert radiological sign predictions from a model
into a JSON dataset. It can be used to create new versions of TB datasets that
include the predicted radiological signs, so that a shallow model can be
trained on them. We input predictions (CSV files) and output a
``dataset.json`` file.
Use the sub-command :ref:`predtojson <ptbench.cli>` to create your JSON dataset
file:
.. code:: sh

   ptbench predtojson -vv train train/predictions.csv test test/predictions.csv \
       --output-folder=pred_to_json
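If your dataset also provides a validation split, and assuming the command
accepts additional split-name/CSV pairs in the same way, the call could be
extended as follows (split names and file locations are illustrative):

.. code:: sh

   ptbench predtojson -vv train train/predictions.csv \
       validation validation/predictions.csv \
       test test/predictions.csv --output-folder=pred_to_json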
.. include:: ../links.rst
.. Copyright © 2023 Idiap Research Institute <contact@idiap.ch>
..
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _ptbench.usage.training:
==========
Training
==========
Convolutional Neural Network (CNN)
----------------------------------
To train a new CNN, use the command-line interface (CLI) application ``ptbench
train``, available on your prompt. To use this CLI, you must define the input
dataset that will be used to train the CNN, as well as the type of model that
will be trained. You may issue ``ptbench train --help`` for a help message
containing more detailed instructions.
.. tip::

   We strongly advise training with a GPU (using ``--device="cuda:0"``).
   Depending on the available GPU memory you might have to adjust your batch
   size (``--batch-size``).
Examples
========
To train Pasa CNN on the Montgomery dataset:
.. code:: sh

   ptbench train -vv pasa montgomery --batch-size=4 --epochs=150
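The same run can be sent to a GPU and, if memory allows, use a larger batch
size (the values below are illustrative):

.. code:: sh

   ptbench train -vv pasa montgomery --batch-size=8 --epochs=150 \
       --device="cuda:0"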
To train DensenetRS CNN on the NIH CXR14 dataset:
.. code:: sh

   ptbench train -vv nih_cxr14 densenet_rs --batch-size=8 --epochs=10
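Training produces CSV log files that can be inspected during or after the run.
The ``train-analysis`` sub-command turns them into plots; the ``log.csv`` and
``constants.csv`` names below follow the command's built-in example, so adjust
them to the files produced by your run:

.. code:: sh

   ptbench train-analysis -vv log.csv constants.csv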
Logistic regressor or shallow network
-------------------------------------
To train a logistic regressor or a shallow network, use the command-line
interface (CLI) application ``ptbench train``, available on your prompt. To use
this CLI, you must define the input dataset that will be used to train the
model, as well as the type of model that will be trained.
You may issue ``ptbench train --help`` for a help message containing more
detailed instructions.
Examples
========
To train a logistic regressor using predictions from DensenetRS on the
Montgomery dataset:
.. code:: sh

   ptbench train -vv logistic_regression montgomery_rs --batch-size=4 --epochs=20
To train Signs_to_TB using predictions from DensenetRS on the Shenzhen
dataset:
.. code:: sh

   ptbench train -vv signs_to_tb shenzhen_rs --batch-size=4 --epochs=20
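Putting the pieces together, a complete shallow-model workflow chains the
commands from the previous guides. The sketch below uses illustrative file
locations and assumes a pre-trained ``densenet_rs`` checkpoint; how the
generated JSON dataset is wired into a dataset configuration such as
``montgomery_rs`` depends on your setup:

.. code:: sh

   # predict radiological signs with a pre-trained DensenetRS model
   ptbench predict -vv densenet_rs -w results/densenet_rs.pth montgomery

   # convert the per-split CSV predictions into a JSON dataset
   ptbench predtojson -vv train train/predictions.csv test test/predictions.csv \
       --output-folder=pred_to_json

   # train a shallow model on top of the predicted radiological signs
   ptbench train -vv logistic_regression montgomery_rs --batch-size=4 --epochs=20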
.. include:: ../links.rst
@@ -8,7 +8,7 @@

 [project]
 name = "ptbench"
-version = "0.0.1b0"
+version = "1.0.0b0"
 requires-python = ">=3.9"
 description = "Benchmarks for training and evaluating deep models for the detection of active Pulmonary Tuberculosis from Chest X-Ray imaging."
 dynamic = ["readme"]
@@ -56,12 +56,11 @@ qa = ["pre-commit"]
 doc = [
     "sphinx",
     "furo",
-    "auto-intersphinx",
     "sphinx-autodoc-typehints",
-    "sphinxcontrib-programoutput",
     "auto-intersphinx",
     "sphinx-copybutton",
     "sphinx-inline-tabs",
+    "sphinx-click",
 ]
 test = [
     "pytest",
......
@@ -5,11 +5,12 @@
 """Image transformations for our pipelines.

 Differences between methods here and those from
-:py:mod:`torchvision.transforms` is that these support multiple simultaneous
-image inputs, which are required to feed segmentation networks (e.g. image and
-labels or masks). We also take care of data augmentations, in which random
-flipping and rotation needs to be applied across all input images, but color
-jittering, for example, only on the input image.
+:py:mod:`torchvision.transforms` is that these support multiple
+simultaneous image inputs, which are required to feed segmentation
+networks (e.g. image and labels or masks). We also take care of data
+augmentations, in which random flipping and rotation needs to be applied
+across all input images, but color jittering, for example, only on the
+input image.
 """

 import random
@@ -55,10 +56,6 @@ class RemoveBlackBorders:

 class ElasticDeformation:
     """Elastic deformation of 2D image slightly adapted from [SIMARD-2003]_.

-    .. [SIMARD-2003] Simard, Steinkraus and Platt, "Best Practices for
-       Convolutional Neural Networks applied to Visual Document Analysis", in
-       Proc. of the International Conference on Document Analysis and
-       Recognition, 2003.
     Source: https://gist.github.com/oeway/2e3b989e0343f0884388ed7ed82eb3b0
     """
......
@@ -26,9 +26,11 @@ def eer_threshold(neg, pos) -> float:

     Parameters
     ----------

-    neg: Negative scores
+    neg : typing.Iterable[float]
+        Negative scores

-    pos: Positive scores
+    pos : typing.Iterable[float]
+        Positive scores

     Returns:
......
@@ -354,7 +354,7 @@ def checkpointer_process(
     Parameters
     ----------

-    checkpointer : :py:class:`bob.med.tb.utils.checkpointer.Checkpointer`
+    checkpointer : :py:class:`ptbench.utils.checkpointer.Checkpointer`
         checkpointer implementation

     checkpoint_period : int
@@ -517,7 +517,7 @@ def run(
     criterion : :py:class:`torch.nn.modules.loss._Loss`
         loss function

-    checkpointer : :py:class:`bob.med.tb.utils.checkpointer.Checkpointer`
+    checkpointer : :py:class:`ptbench.utils.checkpointer.Checkpointer`
         checkpointer implementation

     checkpoint_period : int
......
@@ -60,7 +60,7 @@ def _load(data):
     epilog="""Examples:

\b
-    1. Convert predictions of radiological signs to a JSON dataset file_
+    1. Convert predictions of radiological signs to a JSON dataset file:

     .. code:: sh
......
@@ -2,6 +2,8 @@
 #
 # SPDX-License-Identifier: GPL-3.0-or-later

+from __future__ import annotations
+
 import os

 import click
@@ -19,15 +21,14 @@ def _loss_evolution(df):
     Parameters
     ----------

     df : pandas.DataFrame
         dataframe containing the training logs


     Returns
     -------

-    figure : matplotlib.figure.Figure
-        figure to be displayed or saved to file
+    matplotlib.figure.Figure: Figure to be displayed or saved to file
     """

     import numpy
@@ -78,18 +79,17 @@ def _hardware_utilisation(df, const):
     Parameters
     ----------

     df : pandas.DataFrame
         dataframe containing the training logs

     const : dict
         training and hardware constants


     Returns
     -------

-    figure : matplotlib.figure.Figure
-        figure to be displayed or saved to file
+    matplotlib.figure.Figure: figure to be displayed or saved to file
     """

     figure = plt.figure()
@@ -133,14 +133,16 @@ def _hardware_utilisation(df, const):

 @click.command(
-    entry_point_group="bob.med.tb.config",
+    entry_point_group="ptbench.config",
     cls=ConfigCommand,
     epilog="""Examples:

\b
     1. Analyzes a training log and produces various plots:

-       $ bob binseg train-analysis -vv log.csv constants.csv
+       .. code:: sh
+
+          ptbench train-analysis -vv log.csv constants.csv

 """,
 )
@@ -167,7 +169,7 @@ def train_analysis(
     output_pdf,
     **_,
 ):
-    """Analyze the training logs for loss evolution and resource
+    """Analyzes the training logs for loss evolution and resource
     utilisation."""

     import pandas
......
@@ -27,7 +27,7 @@ def download_to_tempfile(url, progress=False):
     Returns
     -------

-    f : tempfile.NamedTemporaryFile
+    f : :py:func:`tempfile.NamedTemporaryFile`
         A named temporary file that contains the downloaded URL

     """
......