
WIP: Config for complete PAD experiment

Closed Olegs NIKISINS requested to merge ae_pad_config into master
@@ -105,3 +105,26 @@ To prepare the training data one can use the following command:
Once the above script is completed, the MC data suitable for autoencoder fine-tuning is located in the folder ``<PATH_TO_STORE_THE_RESULTS>/preprocessed/``.
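
To quickly verify that the preprocessing step produced the expected output, the contents of this folder can be listed, for example:

.. code-block:: sh

   # list a few of the preprocessed MC samples (illustrative check only)
   ls <PATH_TO_STORE_THE_RESULTS>/preprocessed/ | head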

Now the autoencoder can be fine-tuned. Again, the fine-tuning procedure is explained in the **Convolutional autoencoder** section in the documentation of the ``bob.learn.pytorch`` package.

3. Train an MLP using multi-channel autoencoder latent embeddings from WMCA
============================================================================

Once the autoencoders are pre-trained and fine-tuned, the latent embeddings can be computed by passing the multi-channel (MC) BW-NIR-D images from the WMCA database through the encoder, see [NGM19]_ for more details. These latent embeddings (feature vectors) are then used to train an MLP classifying input MC samples into bona-fide or attack classes.

To compute the latent embeddings (encoder output), the following command can be used:

.. code-block:: sh

   # spoof.py is used to run the preprocessor and extractor;
   # the --skip-* options below limit the run to the preprocessing and extraction steps:
   #   batl-db-rgb-ir-d-grandtest - WMCA database instance allowing to load the RGB-NIR-D channels
   #   mc-pad-bw-nir-d-128x128-face-autoencoder-10-relu - configuration defining the preprocessor and extractor instances
   ./bin/spoof.py \
   batl-db-rgb-ir-d-grandtest \
   mc-pad-bw-nir-d-128x128-face-autoencoder-10-relu \
   --skip-projector-training --skip-projection --skip-score-computation --allow-missing-files \
   --sub-directory <PATH_TO_STORE_THE_RESULTS>   # define your path here

.. note::

   Make sure the ``bob.learn.pytorch`` and ``bob.ip.pytorch_extractor`` packages are installed before running the above command.
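
For instance, assuming a conda-based Bob installation, both packages can typically be installed from the Bob conda channel; the environment name below is just a placeholder:

.. code-block:: sh

   # Assumption: conda-managed Bob environment; adjust the environment name to your setup
   conda install -n <YOUR_BOB_ENV> \
   -c https://www.idiap.ch/software/bob/conda \
   bob.learn.pytorch bob.ip.pytorch_extractor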

Once the above script is completed, the MC latent encodings to be used for MLP training are located in the folder ``<PATH_TO_STORE_THE_RESULTS>/extracted/``.
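
As a quick sanity check, and assuming the embeddings are stored as HDF5 files (Bob's usual on-disk format), a single extracted file can be inspected with the standard HDF5 command-line tools; the file name below is only a placeholder:

.. code-block:: sh

   # Assumption: features are saved as HDF5; replace <SOME_SAMPLE> with an actual file name
   h5ls -r <PATH_TO_STORE_THE_RESULTS>/extracted/<SOME_SAMPLE>.hdf5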

Again, the training procedure is explained in the **MLP** section in the documentation of the ``bob.learn.pytorch`` package.