--grid idiap \ # use grid, only for Idiap users, remove otherwise
--groups train \ # preprocess only training set of CelebA
--preprocessor rgb-face-detect-check-quality-128x128 \ # preprocessor entry point
--sub-directory <PATH_TO_STORE_THE_RESULTS> # define your path here
Running the above command aligns and crops the RGB facial images from the training set of the CelebA database. Additionally, a quality assessment is applied to each facial image.
More specifically, an eye detection algorithm is applied to the face images, ensuring that the deviation of the detected eye coordinates from the expected positions is not significant.
See [NGM19]_ for more details.
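The quality check can be pictured as follows. This is a minimal illustrative sketch, not the actual ``bob.pad.face`` implementation; the expected eye positions and the tolerance are hypothetical placeholder values.

.. code-block:: python

    import numpy as np

    # Illustrative sketch of the eye-position quality check described above;
    # the coordinates and tolerance below are assumptions, not the values
    # used by the rgb-face-detect-check-quality-128x128 preprocessor.
    EXPECTED_RIGHT_EYE = np.array([32.0, 44.0])  # (row, col) in the 128x128 crop
    EXPECTED_LEFT_EYE = np.array([32.0, 84.0])
    MAX_DEVIATION = 10.0  # tolerance in pixels

    def passes_quality_check(detected_right_eye, detected_left_eye):
        """Accept a face crop only if the detected eye coordinates stay
        close to the positions expected after alignment."""
        d_right = np.linalg.norm(np.asarray(detected_right_eye) - EXPECTED_RIGHT_EYE)
        d_left = np.linalg.norm(np.asarray(detected_left_eye) - EXPECTED_LEFT_EYE)
        return max(d_right, d_left) <= MAX_DEVIATION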
Once the above script has completed, the data suitable for autoencoder training is located in the folder ``<PATH_TO_STORE_THE_RESULTS>/preprocessed/``.
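As a quick sanity check, the resulting files can be listed and inspected before training starts. The sketch below assumes the preprocessed samples are stored as HDF5 files under the folder above, with the image under the dataset key ``array`` (Bob's usual convention); adjust the path and key to your setup.

.. code-block:: python

    import glob
    import h5py

    # Assumed layout: one HDF5 file per preprocessed sample, image stored
    # under the "array" key; adjust the path and key to your setup.
    files = glob.glob(
        "<PATH_TO_STORE_THE_RESULTS>/preprocessed/**/*.hdf5", recursive=True
    )
    print(f"{len(files)} preprocessed samples found")

    for path in files[:3]:
        with h5py.File(path, "r") as f:
            image = f["array"][()]  # aligned RGB face crop, e.g. (3, 128, 128)
            print(path, image.shape, image.dtype)

With the preprocessed data in place, the autoencoder can be trained.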
The training procedure is explained in the **Convolutional autoencoder** section of the documentation of the ``bob.learn.pytorch`` package.
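For orientation, a convolutional autoencoder for the 128x128 RGB crops produced above has roughly the following shape. This is an illustrative PyTorch sketch with assumed layer sizes, not the exact architecture shipped with ``bob.learn.pytorch``; refer to that package's documentation for the actual model and training loop.

.. code-block:: python

    import torch
    import torch.nn as nn

    # Illustrative convolutional autoencoder for 3x128x128 face crops;
    # the layer sizes are assumptions, not the bob.learn.pytorch architecture.
    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1),   # -> 16 x 64 x 64
                nn.ReLU(True),
                nn.Conv2d(16, 32, 4, stride=2, padding=1),  # -> 32 x 32 x 32
                nn.ReLU(True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64 x 16 x 16
                nn.ReLU(True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32 x 32 x 32
                nn.ReLU(True),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # -> 16 x 64 x 64
                nn.ReLU(True),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # -> 3 x 128 x 128
                nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = ConvAutoencoder()
    criterion = nn.MSELoss()  # pixel-wise reconstruction loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)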
.. note::

    The functionality of ``bob.pad.face`` is used to compute the training data. Install and follow the documentation of ``bob.learn.pytorch`` to train the autoencoders. This functional decoupling helps to avoid the dependency of ``bob.pad.face`` on PyTorch.