.. py:currentmodule:: bob.learn.pytorch

=========================
Convolutional autoencoder
=========================

Autoencoder training on RGB facial data
=======================================

This section introduces a work-flow for training a convolutional autoencoder. The autoencoder discussed in this section was introduced in the publication [NGM19]_. It is recommended to consult the publication for a better understanding of the architecture of the autoencoder, as well as of the potential applications of autoencoders in biometrics (face PAD in this case).

As an example, to train an autoencoder on facial images extracted from the CelebA database, you can use the following command:

.. code-block:: sh

   ./bin/train_autoencoder.py \          # script used for autoencoder training; can be used for other networks as well
   <FOLDER_CONTAINING_TRAINING_DATA> \   # substitute the path pointing to the training data
   <FOLDER_TO_SAVE_THE_RESULTS>/ \       # substitute the path to save the results to
   -c autoencoder/net1_celeba.py \       # configuration file defining the AE, database, and training parameters
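The configuration file passed via ``-c`` defines the network, the dataset, and the training hyper-parameters. A heavily simplified sketch of what such a file might contain is shown below; the class names, transform, and values are assumptions for illustration, not the actual contents of ``net1_celeba.py``:

```python
# Hypothetical configuration sketch -- names and values are illustrative,
# not the actual contents of autoencoder/net1_celeba.py.
from torchvision import transforms

from bob.learn.pytorch.architectures import ConvAutoencoder  # assumed class name
from bob.learn.pytorch.datasets import DataFolder            # assumed class name

# Network to be trained:
network = ConvAutoencoder()

# Pre-processing applied to each training image:
transform = transforms.Compose([transforms.ToTensor()])

# Dataset; the data folder itself is supplied on the command line
# via <FOLDER_CONTAINING_TRAINING_DATA>:
dataset = DataFolder(data_folder="", transform=transform)

# Training hyper-parameters (illustrative values):
batch_size = 32
num_epochs = 70   # matches the 70-epoch model referenced in this guide
learning_rate = 1e-3
```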
the following reconstructions produced by an autoencoder:

   Output of convolutional autoencoder after 70 epochs of training

Autoencoder fine-tuning on the multi-channel facial data
========================================================

This section is useful for those trying to reproduce the results from [NGM19]_, or for demonstrating the capabilities of the ``train_autoencoder.py`` script.
Following the training procedure of [NGM19]_, one might want to fine-tune the pre-trained autoencoder on multi-channel (**MC**) facial data.
In this example, the MC training data is a stack of gray-scale, NIR, and depth (BW-NIR-D) facial images extracted from the WMCA face PAD database.
For an explicit example of how to generate the MC (BW-NIR-D) training data, please refer to the section entitled **Multi-channel face PAD using autoencoders** in the
documentation of the ``bob.pad.face`` package.
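Conceptually, one MC training sample is just a channel-wise stack of the three co-registered single-channel images. A minimal NumPy sketch of this stacking step is shown below; the image size and variable names are illustrative assumptions, not the actual ``bob.pad.face`` preprocessing API:

```python
import numpy as np

# Illustrative assumption: three co-registered single-channel facial
# images of size 128x128, as produced by the preprocessing stage.
H, W = 128, 128
gray  = np.random.rand(H, W)   # gray-scale channel (BW)
nir   = np.random.rand(H, W)   # near-infrared channel (NIR)
depth = np.random.rand(H, W)   # depth channel (D)

# Stack into a single multi-channel (BW-NIR-D) sample in the
# channels-first layout (C, H, W) expected by PyTorch:
mc_sample = np.stack([gray, nir, depth], axis=0)

print(mc_sample.shape)  # (3, 128, 128)
```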
Once the training data is computed, you can use the command below to fine-tune an autoencoder pre-trained on the RGB data. In this case **all** layers of the
autoencoder are fine-tuned.

.. code-block:: sh

   ./bin/train_autoencoder.py \                          # script used for autoencoder training; can be used for other networks as well
   <FOLDER_CONTAINING_TRAINING_DATA> \                   # substitute the path pointing to the training data
   <FOLDER_TO_SAVE_THE_RESULTS>/ \                       # substitute the path to save the results to
   -p <FOLDER_CONTAINING_RGB_AE_MODELS>/model_70.pth \   # initialize the AE with the model obtained during RGB pre-training
   -c autoencoder/net1_batl.py \                         # configuration file defining the AE, database, and training parameters
   -cg bob.learn.pytorch.config \                        # name of the group containing the configuration file
   -cv \                                                 # compute the loss on the CV set after each epoch
   -vv                                                   # set the verbosity level
Below is the command for fine-tuning just **one layer of the encoder**, which performs better in the face PAD task according to the findings of [NGM19]_.
.. code-block:: sh

   ./bin/train_autoencoder.py \                          # script used for autoencoder training; can be used for other networks as well
   <FOLDER_CONTAINING_TRAINING_DATA> \                   # substitute the path pointing to the training data
   <FOLDER_TO_SAVE_THE_RESULTS>/ \                       # substitute the path to save the results to
   -p <FOLDER_CONTAINING_RGB_AE_MODELS>/model_70.pth \   # initialize the AE with the model obtained during RGB pre-training
   -c autoencoder/net1_batl_3_layers_partial.py \        # configuration file defining the AE, database, and training parameters
   -cg bob.learn.pytorch.config \                        # name of the group containing the configuration file
   -cv \                                                 # compute the loss on the CV set after each epoch
   -vv                                                   # set the verbosity level
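Conceptually, partial fine-tuning amounts to freezing all pre-trained parameters except those of the selected encoder layer (in PyTorch, by setting ``requires_grad = False`` on the frozen parameters). The parameter names below are illustrative assumptions, not the real layer names from the configuration file; this pure-Python sketch only shows the name-based selection logic:

```python
# Illustrative parameter names of a small convolutional AE; the real
# names come from the network defined in the configuration file.
param_names = [
    "encoder.conv1.weight", "encoder.conv1.bias",
    "encoder.conv2.weight", "encoder.conv2.bias",
    "decoder.conv1.weight", "decoder.conv1.bias",
]

def trainable_params(names, tuned_prefix="encoder.conv1"):
    """Return the parameter names left trainable when fine-tuning only
    the layer matching ``tuned_prefix``; all other parameters stay
    frozen (in PyTorch: param.requires_grad = False)."""
    return [n for n in names if n.startswith(tuned_prefix)]

print(trainable_params(param_names))
# ['encoder.conv1.weight', 'encoder.conv1.bias']
```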
.. note::

   Users at Idiap can benefit from the GPU cluster by running training commands similar to the example in the previous section.
.. figure:: img/mc_conv_ae_3_layers_tuning_50_epochs.png
   :align: center

   Output of the convolutional autoencoder after 50 epochs of **partial** fine-tuning on BW-NIR-D data.
.. [NGM19] *O. Nikisins, A. George, S. Marcel*, **Domain Adaptation in Multi-Channel Autoencoder based Features for Robust Face Anti-Spoofing**,
   in: Submitted to: 2019 International Conference on Biometrics (ICB), 2019.