From 00a7505938f7b52c1eed661dd87b206e0806b537 Mon Sep 17 00:00:00 2001
From: Olegs NIKISINS <onikisins@italix03.idiap.ch>
Date: Tue, 22 Jan 2019 17:35:00 +0100
Subject: [PATCH] Added the step 1 in the doc on MC autoencoders for face PAD

---
 doc/index.rst              |  1 +
 doc/mc_autoencoder_pad.rst | 80 ++++++++++++++++++++++++++++++++++++++
 doc/references.rst         |  3 ++
 3 files changed, 84 insertions(+)
 create mode 100644 doc/mc_autoencoder_pad.rst

diff --git a/doc/index.rst b/doc/index.rst
index 0fef7039..20747b6c 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -23,6 +23,7 @@ Users Guide
    baselines
    other_pad_algorithms
    pulse
+   mc_autoencoder_pad
    references
    resources
    api
diff --git a/doc/mc_autoencoder_pad.rst b/doc/mc_autoencoder_pad.rst
new file mode 100644
index 00000000..77edfb63
--- /dev/null
+++ b/doc/mc_autoencoder_pad.rst
@@ -0,0 +1,80 @@
+
+
+.. _bob.pad.face.mc_autoencoder_pad:
+
+
+=============================================
+ Multi-channel face PAD using autoencoders
+=============================================
+
+This section explains how to run a complete face PAD experiment using a multi-channel autoencoder-based face PAD system, as well as the corresponding training work-flow.
+
+The system discussed in this section is introduced in the publication [NGM19]_. It is **strongly recommended** to read the publication for a better understanding
+of the described work-flow.
+
+.. warning::
+
+   The algorithms introduced in this section may still be in the process of publication. Therefore, it is not
+   allowed to publish results from this section without the permission of the owner of the package.
+   If you are planning to use these results, please contact the owner of the package first.
+   Please check the ``setup.py`` for contact information.
+
+
+Running face PAD Experiments
+------------------------------
+
+Please refer to the :ref:`bob.pad.face.baselines` section of the current documentation for more details on how to run face PAD experiments and set up the databases.
+
+
+Training the multi-channel autoencoder-based face PAD system
+----------------------------------------------------------------
+
+As introduced in the paper [NGM19]_, the training of the system is composed of three main steps, which are summarized in the following table:
+
++----------------------+----------------------+---------------------+
+| Train step           | Training data        | DB, classes used    |
++======================+======================+=====================+
+| Train N AEs          | RGB face regions     | CelebA, BF          |
++----------------------+----------------------+---------------------+
+| Fine-tune N AEs      | MC face regions      | WMCA, BF            |
++----------------------+----------------------+---------------------+
+| Train an MLP         | MC latent encodings  | WMCA, BF and PA     |
++----------------------+----------------------+---------------------+
+
+In the above table, **BF** and **PA** stand for samples from the **bona-fide** and **presentation attack** classes.
+
+As one can conclude from the table, the CelebA and WMCA databases must be installed before the training can take place.
+See :ref:`bob.pad.face.baselines` for database installation details.
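The three training steps can be sketched as the following data flow: one encoder per channel produces a latent code, the per-channel codes are concatenated, and an MLP maps the joint encoding to a PAD score. The sketch below is purely illustrative (randomly initialized weights, assumed layer sizes, N=2 channels); it is **not** the actual implementation used in this package.

```python
# Illustrative data-flow sketch of the multi-channel autoencoder PAD
# system (an assumption for explanation, NOT the package's actual code).
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 2          # e.g. color + infrared face regions (assumed)
INPUT_DIM = 128 * 128   # flattened 128x128 facial image
LATENT_DIM = 16         # size of one autoencoder bottleneck (assumed)

def encode(x, weights):
    """Encoder half of one autoencoder: input image -> latent code."""
    return np.tanh(x @ weights)

# One encoder per channel, as in the "Train N AEs" step above.
encoders = [rng.normal(scale=0.01, size=(INPUT_DIM, LATENT_DIM))
            for _ in range(N_CHANNELS)]

# A multi-channel sample: one flattened facial image per channel.
sample = [rng.random(INPUT_DIM) for _ in range(N_CHANNELS)]

# Step 3 input: concatenation of the per-channel latent encodings.
latent = np.concatenate([encode(x, w) for x, w in zip(sample, encoders)])

# An MLP with one hidden layer maps the joint encoding to a PAD score.
W1 = rng.normal(scale=0.1, size=(N_CHANNELS * LATENT_DIM, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
score = 1.0 / (1.0 + np.exp(-(np.tanh(latent @ W1) @ W2)))  # sigmoid
print(score.shape)  # one bona-fide vs. presentation-attack score
```

In the real system the autoencoders are trained (step 1) and fine-tuned (step 2) before the MLP is trained on their latent encodings (step 3); the sketch only shows the shapes and direction of the data flow.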
+
+
+1. Train N AEs on RGB data from CelebA
+===========================================
+
+In [NGM19]_, N autoencoders are trained, one for each facial region. Here, for explanatory purposes, a system containing **one** autoencoder is considered, thus N=1.
+This autoencoder is first pre-trained using RGB images of the entire face, cropped from the CelebA database.
+
+To prepare the training data, one can use the following command:
+
+
+.. code-block:: sh
+
+    # spoof.py runs the preprocessor for the CelebA database ("celeb-a").
+    # "lbp-svm" is required by spoof.py but unused here: the --skip-* flags
+    # restrict execution to the preprocessing step only.
+    # --grid idiap is for Idiap users only; remove it otherwise.
+    # --groups train preprocesses only the training set of CelebA.
+    # Replace <PATH_TO_STORE_THE_RESULTS> with your output path.
+    ./bin/spoof.py \
+        celeb-a \
+        lbp-svm \
+        --skip-extractor-training --skip-extraction \
+        --skip-projector-training --skip-projection \
+        --skip-score-computation --allow-missing-files \
+        --grid idiap \
+        --groups train \
+        --preprocessor rgb-face-detect-check-quality-128x128 \
+        --sub-directory <PATH_TO_STORE_THE_RESULTS>
+
+Running the above command aligns and crops the RGB facial images from the training set of the CelebA database. Additionally, a quality assessment is applied to each facial image.
+More specifically, an eye-detection algorithm is applied to the face images, ensuring that the deviation of the eye coordinates from the expected positions is not significant.
+See [NGM19]_ for more details.
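The idea behind this quality check can be illustrated with a small sketch. The function name, threshold, and expected eye positions below are hypothetical values chosen for illustration; they are not the actual algorithm or parameters used in the package.

```python
# Illustrative quality check (an assumption, not the package's code):
# keep a face crop only if both detected eye centers lie close to the
# positions expected after alignment.
import math

def eyes_are_plausible(detected, expected, max_dev=10.0):
    """Return True if every detected eye center is within ``max_dev``
    pixels of its expected (post-alignment) position."""
    for (dx, dy), (ex, ey) in zip(detected, expected):
        if math.hypot(dx - ex, dy - ey) > max_dev:
            return False
    return True

# Hypothetical expected eye positions in an aligned 128x128 crop.
expected = [(42.0, 52.0), (86.0, 52.0)]

print(eyes_are_plausible([(43.0, 53.0), (85.0, 51.0)], expected))  # True
print(eyes_are_plausible([(43.0, 53.0), (60.0, 90.0)], expected))  # False
```

Crops failing such a check are discarded from the training set, so that the autoencoder is trained only on well-aligned faces.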
+
+
+.. include:: links.rst
+
+
+
+
+
diff --git a/doc/references.rst b/doc/references.rst
index 12e00f65..6276842f 100644
--- a/doc/references.rst
+++ b/doc/references.rst
@@ -18,3 +18,6 @@ References
 
 .. [CDSR17] *C. Chen, A. Dantcheva, T. Swearingen, A. Ross*, **Spoofing Faces Using Makeup: An Investigative Study**,
             in: Proc. of 3rd IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), (New Delhi, India), February 2017.
+
+.. [NGM19] *O. Nikisins, A. George, S. Marcel*, **Domain Adaptation in Multi-Channel Autoencoder based Features for Robust Face Anti-Spoofing**,
+            in: Submitted to the 2019 International Conference on Biometrics (ICB), 2019.
-- 
GitLab