Commit 61202ef7 authored by Olegs NIKISINS's avatar Olegs NIKISINS
Updated the documentation on AEs, trained on CelebA

.. py:currentmodule:: bob.learn.pytorch

Convolutional autoencoder
=========================
This section introduces a work-flow for training a convolutional autoencoder. The autoencoder discussed in this section was introduced in the following publication: [NGM19]_. It is recommended to check the publication for a better understanding of the architecture of the autoencoder, as well as for a potential application of autoencoders in biometrics (face PAD in this case).
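To make the idea concrete, below is a minimal, self-contained PyTorch sketch of a convolutional autoencoder trained with an MSE reconstruction objective. It is purely illustrative: the layer sizes and image resolution are arbitrary assumptions and do not correspond to the architecture described in [NGM19]_.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder (not the exact [NGM19] architecture)."""

    def __init__(self):
        super().__init__()
        # Encoder: compress a 3x64x64 image into a small latent feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # -> 16 x 32 x 32
            nn.ReLU(True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # -> 32 x 16 x 16
            nn.ReLU(True),
        )
        # Decoder: mirror of the encoder, reconstructs the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # -> 16 x 32 x 32
            nn.ReLU(True),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1),   # -> 3 x 64 x 64
            nn.Sigmoid(),  # outputs in [0, 1], matching normalized image pixels
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(4, 3, 64, 64)       # dummy batch of RGB images
recon = model(x)
loss = nn.MSELoss()(recon, x)      # reconstruction objective minimized during training
print(tuple(recon.shape))          # (4, 3, 64, 64)
```

In a real training loop this loss would be backpropagated with an optimizer; the training script described below handles all of that via the configuration file.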
As an example, to train an autoencoder on facial images extracted from the CelebA database, you can use the following command:

.. code-block:: sh

   ./bin/pytorch-train-autoencoder-pad.py \  # script used for training autoencoders, can be used for other networks as well
   <FOLDER_CONTAINING_TRAINING_DATA> \       # substitute the path pointing to the training data
   <FOLDER_TO_SAVE_THE_RESULTS>/ \           # substitute the path to save the results to
   -c autoencoder/net1_celeba.py \           # configuration file defining the AE, database, and training parameters
   -cg bob.learn.pytorch.config \            # name of the group containing the configuration file
   -vv                                       # set verbosity level
People at Idiap can benefit from the GPU cluster by running the training as follows:

.. code-block:: sh

   jman submit --queue gpu \                       # submit to the GPU queue (Idiap only)
   --name <NAME_OF_EXPERIMENT> \                   # define the name of the job (Idiap only)
   --log-dir <FOLDER_TO_SAVE_THE_RESULTS>/logs/ \  # substitute the path to save the logs to (Idiap only)
   --environment="PYTHONUNBUFFERED=1" -- \         # disable Python output buffering
   ./bin/pytorch-train-autoencoder-pad.py \        # script used for training autoencoders, can be used for other networks as well
   <FOLDER_CONTAINING_TRAINING_DATA> \             # substitute the path pointing to the training data
   <FOLDER_TO_SAVE_THE_RESULTS>/ \                 # substitute the path to save the results to
   -c autoencoder/net1_celeba.py \                 # configuration file defining the AE, database, and training parameters
   -cg bob.learn.pytorch.config \                  # name of the group containing the configuration file
   -gpu \                                          # enable the GPU mode
   -vv                                             # set verbosity level
For more detailed documentation of the functionality available in the training script, run the following command:

.. code-block:: sh

   ./bin/pytorch-train-autoencoder-pad.py --help  # note: remove ./bin/ if buildout is not used
Please inspect the corresponding configuration file (``net1_celeba.py`` in this example) for more details on how to define the database, the network architecture, and the training parameters.
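For orientation, a configuration file of this kind typically exposes the network, the loss, and the training hyper-parameters as module-level names that the training script picks up. The sketch below is hypothetical: the names (``network``, ``loss_type``, ``batch_size``, ``NUM_EPOCHS``) and the import path are assumptions for illustration only; the actual ``net1_celeba.py`` shipped with ``bob.learn.pytorch`` is authoritative.

```python
# Hypothetical configuration sketch -- consult the real net1_celeba.py for the actual names.
from torch import nn
from bob.learn.pytorch.architectures import ConvAutoencoder  # assumed import location

network = ConvAutoencoder()  # the model to be trained
loss_type = nn.MSELoss()     # reconstruction objective
batch_size = 32              # images per training batch (assumed value)
learning_rate = 1e-3         # optimizer step size (assumed value)
NUM_EPOCHS = 70              # matches the epoch count referenced later in this guide
```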
Another key component of the training procedure is the availability of training data. Please refer to the section entitled **Multi-channel face PAD using autoencoders** in the
documentation of the ``bob.pad.face`` package for an explicit example of how to compute data suitable for training the discussed network.
Using RGB facial images extracted from the CelebA database, after ``NUM_EPOCHS = 70`` epochs of training (as defined in the ``net1_celeba.py`` configuration file), one can find in ``<FOLDER_TO_SAVE_THE_RESULTS>``
the following reconstructions produced by the autoencoder:
.. figure:: img/conv_ae_after_70_epochs.png
   :align: center

   Output of the convolutional autoencoder after 70 epochs of training.
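Since the autoencoder is intended for face PAD, a common follow-up step is to turn reconstructions into per-image scores. The helper below is a hypothetical sketch (it is not part of ``bob.learn.pytorch``) showing how a per-image reconstruction error could be computed with any trained image-to-image model:

```python
import torch
import torch.nn as nn

def reconstruction_scores(model, batch):
    """Per-image mean squared error between input and reconstruction."""
    with torch.no_grad():                 # inference only, no gradients needed
        recon = model(batch)
    # flatten all pixel dimensions, then average per image -> one score each
    return ((recon - batch) ** 2).flatten(1).mean(dim=1)

# Usage with a placeholder model; substitute a trained autoencoder.
scores = reconstruction_scores(nn.Identity(), torch.rand(4, 3, 64, 64))
print(tuple(scores.shape))  # (4,)
```

A higher score indicates a worse fit to the training distribution, which is the intuition behind using reconstruction error as a PAD feature.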
.. [NGM19] *O. Nikisins, A. George, S. Marcel*, **Domain Adaptation in Multi-Channel Autoencoder based Features for Robust Face Anti-Spoofing**,
   in: Submitted to 2019 International Conference on Biometrics (ICB), 2019.
The commit also adds the new guide to the documentation's table of contents:

.. code-block:: diff

   @@ -15,6 +15,7 @@ Users Guide
      user_guide.rst
      guide_dcgan.rst
      guide_conditionalgan.rst
   +  guide_conv_autoencoder.rst

    ================
    Reference Manual