@@ -14,7 +14,7 @@ As an example, to train an autoencoder on facial images extracted from the Celeb
 .. code-block:: sh
-    ./bin/train_autoencoder.py \ # script used for autoencoders training, can be used for other networks as-well
+    ./bin/train_network.py \ # script used for autoencoder training, can be used for other networks as well
     <FOLDER_CONTAINING_TRAINING_DATA> \ # substitute the path pointing to training data
     <FOLDER_TO_SAVE_THE_RESULTS>/ \ # substitute the path to save the results to
     -c autoencoder/net1_celeba.py \ # configuration file defining the AE, database, and training parameters
...
...
@@ -29,7 +29,7 @@ People at Idiap can benefit from the GPU cluster, running the training as follows:
     --name <NAME_OF_EXPERIMENT> \ # define the name of the job (Idiap only)
     --log-dir <FOLDER_TO_SAVE_THE_RESULTS>/logs/ \ # substitute the path to save the logs to (Idiap only)
     --environment="PYTHONUNBUFFERED=1" -- \ #
-    ./bin/train_autoencoder.py \ # script used for autoencoders training, cand be used for other networks as-well
+    ./bin/train_network.py \ # script used for autoencoder training, can be used for other networks as well
     <FOLDER_CONTAINING_TRAINING_DATA> \ # substitute the path pointing to training data
     <FOLDER_TO_SAVE_THE_RESULTS>/ \ # substitute the path to save the results to
     -c autoencoder/net1_celeba.py \ # configuration file defining the AE, database, and training parameters
...
...
@@ -42,7 +42,7 @@ For a more detailed documentation of functionality available in the training scr
 .. code-block:: sh
-    ./bin/train_autoencoder.py --help # note: remove ./bin/ if buildout is not used
+    ./bin/train_network.py --help # note: remove ./bin/ if buildout is not used
 Please inspect the corresponding configuration file, ``net1_celeba.py`` for example, for more details on how to define the database, the network architecture, and the training parameters.
...
...
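The ``-c`` option points the training script at a plain-Python configuration module. As a rough sketch of the shape such a module might take (every attribute and function name below is an illustrative assumption, not the actual contents of ``net1_celeba.py``):

```python
# Hypothetical configuration module for the training script.
# NOTE: all names here are assumptions for illustration only;
# consult the real net1_celeba.py for the actual conventions.

BATCH_SIZE = 32        # samples per training batch
NUM_EPOCHS = 70        # matches checkpoints named like model_70.pth
LEARNING_RATE = 1e-3   # optimizer step size


def get_network():
    """Return the network to be trained (e.g. an autoencoder)."""
    raise NotImplementedError("replace with the actual model constructor")


def get_dataset(data_folder):
    """Return the training dataset loaded from ``data_folder``."""
    raise NotImplementedError("replace with the actual database wrapper")
```

A training script of this kind would typically import such a module and read these attributes; the real configuration interface may differ in its details.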
@@ -61,7 +61,7 @@ the following reconstructions produced by an autoencoder:
 Autoencoder fine-tuning on the multi-channel facial data
-This section is useful for those trying to reproduce the results form [NGM19]_, or for demonstrative purposes showing the capabilities of ``train_autoencoder.py`` script.
+This section is useful for those trying to reproduce the results from [NGM19]_, or for demonstration purposes, showing the capabilities of the ``train_network.py`` script.
 Following the training procedure of [NGM19]_, one might want to fine-tune the pre-trained autoencoder on the multi-channel (**MC**) facial data.
 In this example, the MC training data is a stack of gray-scale, NIR, and Depth (BW-NIR-D) facial images extracted from the WMCA face PAD database.
...
...
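The stacking of gray-scale, NIR, and depth images into one BW-NIR-D sample can be illustrated with a tiny framework-agnostic sketch (toy 2x2 nested lists stand in for real images; the actual pipeline operates on full-size arrays):

```python
# Illustrative sketch of forming a multi-channel (MC) sample by stacking
# gray-scale, NIR, and depth images of the same face into one BW-NIR-D
# sample with three channels (channels-first layout).

def stack_channels(gray, nir, depth):
    """Stack three single-channel images into one channels-first MC sample."""
    assert len(gray) == len(nir) == len(depth), "images must share dimensions"
    return [gray, nir, depth]

gray  = [[0.1, 0.2], [0.3, 0.4]]   # toy gray-scale image
nir   = [[0.5, 0.5], [0.5, 0.5]]   # toy near-infrared image
depth = [[0.9, 0.8], [0.7, 0.6]]   # toy depth map

mc = stack_channels(gray, nir, depth)  # shape (channels=3, height=2, width=2)
```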
@@ -74,7 +74,7 @@ autoencoder are fine-tuned.
 .. code-block:: sh
-    ./bin/train_autoencoder.py \ # script used for autoencoders training, can be used for other networks as-well
+    ./bin/train_network.py \ # script used for autoencoder training, can be used for other networks as well
     <FOLDER_CONTAINING_TRAINING_DATA> \ # substitute the path pointing to training data
     <FOLDER_TO_SAVE_THE_RESULTS>/ \ # substitute the path to save the results to
     -p <FOLDER_CONTAINING_RGB_AE_MODELS>/model_70.pth \ # initialize the AE with the model obtained during RGB pre-training
...
...
@@ -87,7 +87,7 @@ Below is the command allowing to fine-tune just **one layer of the encoder**, which
 .. code-block:: sh
-    ./bin/train_autoencoder.py \ # script used for autoencoders training, can be used for other networks as-well
+    ./bin/train_network.py \ # script used for autoencoder training, can be used for other networks as well
     <FOLDER_CONTAINING_TRAINING_DATA> \ # substitute the path pointing to training data
     <FOLDER_TO_SAVE_THE_RESULTS>/ \ # substitute the path to save the results to
     -p <FOLDER_CONTAINING_RGB_AE_MODELS>/model_70.pth \ # initialize the AE with the model obtained during RGB pre-training
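Fine-tuning only one encoder layer amounts to freezing every other parameter. A minimal framework-agnostic sketch of that selection logic (the parameter names are hypothetical; a real model would expose its parameters through PyTorch, where freezing is done by clearing each parameter's gradient flag):

```python
# Sketch of the layer-freezing idea: mark every parameter as frozen
# except those belonging to the one layer being fine-tuned.
# Parameter names below are hypothetical, for illustration only.

def select_trainable(param_names, layer_to_tune):
    """Map each parameter name to True if it should be updated during training."""
    return {name: name.startswith(layer_to_tune) for name in param_names}

params = [
    "encoder.layer1.weight",
    "encoder.layer2.weight",
    "decoder.layer1.weight",
]
trainable = select_trainable(params, "encoder.layer1")
# Only encoder.layer1.weight remains trainable; all other parameters are frozen.
```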
@@ -17,7 +17,7 @@ As an example, to train an autoencoder on latent embeddings extracted from an en
 .. code-block:: sh
-    ./bin/train_autoencoder.py \ # script used for MLP training, can be used for other networks as-well
+    ./bin/train_network.py \ # script used for MLP training, can be used for other networks as well
     <FOLDER_CONTAINING_TRAINING_DATA> \ # substitute the path pointing to training data
     <FOLDER_TO_SAVE_THE_RESULTS>/ \ # substitute the path to save the results to
     -c mlp/batl_db_1296x10_relu_mlp.py \ # configuration file defining the database, training parameters, the transformation to be applied to the training data, and an MLP architecture