From b646ec51f0f7d14b96eef42fe704d25b38e7bbf6 Mon Sep 17 00:00:00 2001
From: Olegs NIKISINS <onikisins@italix03.idiap.ch>
Date: Wed, 23 Jan 2019 11:14:02 +0100
Subject: [PATCH] Updated the doc on AE pre-training, pointing to
 bob.learn.pytorch

---
 doc/mc_autoencoder_pad.rst | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/doc/mc_autoencoder_pad.rst b/doc/mc_autoencoder_pad.rst
index 77edfb63..f59cbdf1 100644
--- a/doc/mc_autoencoder_pad.rst
+++ b/doc/mc_autoencoder_pad.rst
@@ -58,19 +58,27 @@ To prepare the training data one can use the following command:
 
 .. code-block:: sh
 
-    ./bin/spoof.py \    # spoof.py is used to run the preprocessor
-    celeb-a \   # run for CelebA database
-    lbp-svm \   # required by spoof.py, but unused
+    ./bin/spoof.py \                                        # spoof.py is used to run the preprocessor
+    celeb-a \                                               # run for CelebA database
+    lbp-svm \                                               # required by spoof.py, but unused
     --skip-extractor-training --skip-extraction --skip-projector-training --skip-projection --skip-score-computation --allow-missing-files \    # execute only preprocessing step
-    --grid idiap \    # use grid, only for Idiap users, remove otherwise
-    --groups train \    # preprocess only training set of CelebA
-    --preprocessor rgb-face-detect-check-quality-128x128 \    # preprocessor entry point
-    --sub-directory <PATH_TO_STORE_THE_RESULTS>   # define your path here
+    --grid idiap \                                          # use grid, only for Idiap users, remove otherwise
+    --groups train \                                        # preprocess only training set of CelebA
+    --preprocessor rgb-face-detect-check-quality-128x128 \  # preprocessor entry point
+    --sub-directory <PATH_TO_STORE_THE_RESULTS>             # define your path here
 
 Running the above command aligns and crops the RGB facial images from the training set of the CelebA database. Additionally, a quality assessment is applied to each facial image.
 More specifically, an eye detection algorithm is applied to the face images, ensuring that the deviation of the eye coordinates from the expected positions is not significant.
 See [NGM19]_ for more details.
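+
+For illustration only, here is a minimal sketch of such a quality check in plain Python; the function name, landmark keys, and threshold are hypothetical assumptions, not the actual ``bob.pad.face`` implementation:
+
+.. code-block:: python
+
+    import numpy as np
+
+    # Hypothetical helper: reject a crop when a detected eye lands too far
+    # from where the aligned template expects it (threshold is made up).
+    def eyes_are_plausible(detected, expected, max_deviation=5.0):
+        for key in ("left_eye", "right_eye"):
+            delta = np.asarray(detected[key], float) - np.asarray(expected[key], float)
+            if np.linalg.norm(delta) > max_deviation:
+                return False
+        return True
+
+    # Illustrative expected eye positions for a 128x128 aligned crop.
+    expected = {"left_eye": (44, 52), "right_eye": (84, 52)}
+    detected = {"left_eye": (45, 53), "right_eye": (83, 51)}
+    assert eyes_are_plausible(detected, expected)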
 
+Once the above script has completed, the data suitable for autoencoder training is located in the folder ``<PATH_TO_STORE_THE_RESULTS>/preprocessed/``. The autoencoder can now be trained.
+The training procedure is explained in the **Convolutional autoencoder** section of the documentation of the ``bob.learn.pytorch`` package.
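+
+The actual model and training loop are provided by ``bob.learn.pytorch``; the following is only a rough sketch, in plain PyTorch, of what such convolutional autoencoder training amounts to on the preprocessed 128x128 RGB crops. The layer sizes and the ``loader`` are illustrative assumptions, not the package's architecture:
+
+.. code-block:: python
+
+    import torch
+    import torch.nn as nn
+
+    # Minimal convolutional autoencoder for 3x128x128 face crops.
+    class ConvAutoencoder(nn.Module):
+        def __init__(self):
+            super().__init__()
+            self.encoder = nn.Sequential(
+                nn.Conv2d(3, 16, 4, stride=2, padding=1),   # -> 16 x 64 x 64
+                nn.ReLU(True),
+                nn.Conv2d(16, 32, 4, stride=2, padding=1),  # -> 32 x 32 x 32
+                nn.ReLU(True),
+            )
+            self.decoder = nn.Sequential(
+                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # -> 16 x 64 x 64
+                nn.ReLU(True),
+                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # -> 3 x 128 x 128
+                nn.Sigmoid(),
+            )
+
+        def forward(self, x):
+            return self.decoder(self.encoder(x))
+
+    model = ConvAutoencoder()
+    criterion = nn.MSELoss()  # pixel-wise reconstruction loss
+    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+
+    # `loader` is assumed to yield batches of preprocessed crops from
+    # <PATH_TO_STORE_THE_RESULTS>/preprocessed/ (data loading omitted).
+    def train_one_epoch(loader):
+        for batch in loader:
+            optimizer.zero_grad()
+            loss = criterion(model(batch), batch)
+            loss.backward()
+            optimizer.step()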
+
+.. note::
+
+  The functionality of ``bob.pad.face`` is used to compute the training data, while the autoencoders are trained with ``bob.learn.pytorch``; install that package and follow its documentation.
+  This functional decoupling avoids making ``bob.pad.face`` depend on **PyTorch**.
+
 
 .. include:: links.rst
 
-- 
GitLab