The experiments described in this paper are based on two NIR datasets for face PAD.
The first dataset, **WMCA**, should be downloaded from Idiap's server, along with its metadata. The experiments in the present work require only the subset captured in NIR.
The second dataset, **MLFP**, should be downloaded from www.iab-rubric.org
by contacting its owners. The experiments in this paper require only the subset captured in NIR.
The MLFP dataset may be distributed in data structures or file formats different from the ones compatible with the present code.
We have provided a script that converts the MLFP dataset into a set of individual samples stored as *.hdf5* files.
The same script can also be used to obtain the annotations providing facial landmarks.
A script to generate the annotations for the WMCA dataset is also provided. These scripts are located in ``bob.paper.nir_patch_pooling.script`` and need to be run from the corresponding folder.
The annotations for both datasets need to be precomputed (you may use the provided scripts); this is a one-time process.
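As an illustration, the precomputed annotation for a sample could be stored as a JSON file mapping frame indices to facial landmarks. The file name, frame keys, and coordinate values below are hypothetical; the sketch only assumes Bob's convention of ``(y, x)`` landmark coordinates:

```python
import json
import os
import tempfile

# Hypothetical annotation for one video sample: frame index -> landmarks.
# Bob conventionally stores landmark coordinates as (y, x) pairs.
annotations = {
    "0": {"reye": [120, 180], "leye": [122, 260]},
    "1": {"reye": [121, 181], "leye": [123, 261]},
}

# Write one annotation file per sample (path is a placeholder).
path = os.path.join(tempfile.mkdtemp(), "sample_01.json")
with open(path, "w") as f:
    json.dump(annotations, f, indent=4)

# The experiment scripts can later read the landmarks back per frame.
with open(path) as f:
    loaded = json.load(f)
print(loaded["0"]["reye"])  # [120, 180]
```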
For both datasets, you need to set the paths of the dataset and the corresponding annotation directory in the working environment.
Bob provides a configuration mechanism to set such variables, which the experiment scripts read during execution.
Use the following four commands to configure both datasets::
    bob config set "bob.db.wmca_mask.directory" <path-of-WMCA-dataset-location>
    bob config set "bob.db.wmca_mask.annotation_directory" <path-of-WMCA-annotation-location>
    bob config set "bob.db.mlfp.directory" <path-of-MLFP-dataset-location>
    bob config set "bob.db.mlfp.annotation_directory" <path-of-MLFP-annotation-location>
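Under the hood, ``bob config set`` stores these key/value pairs as JSON in a run-commands file (``~/.bobrc``). The sketch below mimics that mechanism with the standard library; the file location and path values are placeholders, not the real dataset locations:

```python
import json
import os
import tempfile

# Mock of the ~/.bobrc file that ``bob config set`` maintains
# (a temporary directory is used here instead of the home directory).
rcfile = os.path.join(tempfile.mkdtemp(), ".bobrc")

config = {
    "bob.db.wmca_mask.directory": "/path/to/WMCA",
    "bob.db.wmca_mask.annotation_directory": "/path/to/WMCA/annotations",
    "bob.db.mlfp.directory": "/path/to/MLFP",
    "bob.db.mlfp.annotation_directory": "/path/to/MLFP/annotations",
}
with open(rcfile, "w") as f:
    json.dump(config, f, indent=4)

# The experiment scripts can then look up the configured variables:
with open(rcfile) as f:
    rc = json.load(f)
print(rc["bob.db.mlfp.directory"])  # /path/to/MLFP
```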
The ``spoof.py`` script from Bob should be used to conduct the PAD experiments.
For detailed information about this script, check the Bob documentation. The configurations of the data preprocessor, feature extractor, and subsequent classifier are provided in the config directory of the package. Feature extraction is performed using the PatchPooling CNN.
The configurations of datasets, in terms of protocols and groups (subsets), are also provided in the config directory.
If you prefer the default configuration, you only need to provide the <output-directory> to store the results.
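Conceptually, these configuration files wire together a three-stage toolchain (preprocessing, feature extraction, classification). The sketch below illustrates that composition with stand-in functions; none of them are the real Bob components or the PatchPooling CNN:

```python
# Stand-in for the preprocessor (e.g. face cropping and normalization
# of NIR frames); here it just rescales pixel values to [0, 1].
def preprocess(sample):
    return [s / 255.0 for s in sample]

# Stand-in for the feature extractor (the PatchPooling CNN in the
# actual package); here it reduces the sample to a single number.
def extract(sample):
    return sum(sample) / len(sample)

# Stand-in for the classifier producing a PAD score.
def classify(feature):
    return 1.0 if feature > 0.5 else 0.0

# The experiment pipeline is the composition of the three stages.
score = classify(extract(preprocess([200, 180, 150])))
print(score)  # 1.0
```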
To perform the PAD experiment on the WMCA dataset (using the ``grandtest`` protocol), run the ``spoof.py`` script with the corresponding configuration.
For cross-validation experiments, ``<cv-number>`` refers to the protocol corresponding to the specific CV fold/partition. The experimental setup consists of three folds, numbered ``cv1``, ``cv2``, and ``cv3``.
Note: when working with already preprocessed data, you may use the corresponding flags of the ``spoof.py`` command to skip the preprocessing stage.
Evaluating the experiments
--------------------------
To evaluate the performance of the PAD experiments, you may use the ``bob pad metrics`` command. The command requires the locations of the score files for the ``dev`` and ``eval`` sets.
These files are generated by the ``spoof.py`` script described in the previous section.
For the default configuration, these files can be found in the folder hierarchy ``<output-directory>/<protocol>/scores/``.
To generate the performance metrics, use the following command::
    bob pad metrics -v -e <location-of-dev-file> <location-of-eval-file>
For cross-validation experiments, the performance metrics can be computed a posteriori on the ``dev`` sets.
In this case, provide only the locations of the ``dev`` files to the evaluation command, and omit the ``-e`` flag.
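The error rates reported for PAD experiments are the ISO/IEC 30107-3 metrics APCER, BPCER, and their average ACER. The sketch below computes them from score lists at a fixed threshold; the score values are made up for illustration, and it is assumed (as is usual in Bob) that higher scores indicate bona-fide samples:

```python
def pad_metrics(bona_fide, attacks, threshold):
    # APCER: fraction of attack samples wrongly accepted as bona fide.
    apcer = sum(s >= threshold for s in attacks) / len(attacks)
    # BPCER: fraction of bona-fide samples wrongly rejected.
    bpcer = sum(s < threshold for s in bona_fide) / len(bona_fide)
    # ACER: average of the two error rates.
    return apcer, bpcer, (apcer + bpcer) / 2

# Made-up scores for illustration only.
bona_fide = [0.9, 0.8, 0.7, 0.2]
attacks = [0.1, 0.3, 0.6, 0.05]

apcer, bpcer, acer = pad_metrics(bona_fide, attacks, threshold=0.5)
print(apcer, bpcer, acer)  # 0.25 0.25 0.25
```

In practice the threshold is chosen on the ``dev`` set (e.g. at the EER operating point) and then applied to the ``eval`` set, which is what ``bob pad metrics -e`` does.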
Contact
-------
For questions or reporting issues with this software package, please contact the maintainers.