Authored by Ketan Kotwal

CNN Patch Pooling for Detecting 3D Mask Presentation Attacks in NIR

This package is part of the Bob toolkit and allows reproducing the experimental results of the following paper from the proceedings of IEEE ICIP 2020:

@inproceedings{CNN-PATCH-POOLING-ICIP-2020,
    title = {{CNN Patch Pooling for Detecting 3D Mask Presentation Attacks in NIR}},
    author = {{K. Kotwal and S. Marcel}},
    booktitle = {{Proceedings of IEEE International Conference on Image Processing}},
    pages = {000--000},
    month = {{0}},
    year = {2020},
}

If you use this package and/or its results, please cite the paper.

Installation

The installation instructions are based on conda and work on Linux systems only. Install conda before continuing.

Once you have installed conda, download the source code of this paper and unpack it. Then, you can create a conda environment with the following command:

$ cd bob.paper.nir_patch_pooling
$ conda env create -f environment.yml
$ conda activate patch_pooling
$ buildout

This will install all the required software to reproduce this paper.

Prerequisites and Setting Up the Experiments

Downloading the dataset

The experiments described in this paper are based on two NIR datasets for face PAD.

The WMCA dataset used in this study should be downloaded from Idiap's server; the metadata for the WMCA dataset should also be downloaded from Idiap. The experiments related to the present work require only the subset captured in NIR.

The second dataset, MLFP, should be downloaded from www.iab-rubric.org by contacting its owners. The experiments in this paper require only the subset captured in NIR.

The MLFP dataset may be distributed in data structures or file formats different from those expected by the present code. We provide a script that converts the MLFP dataset into a set of individual samples stored as .hdf5 files. The same script can also be used to obtain the annotations providing facial landmarks. A script to generate the annotations for the WMCA dataset has also been provided. These scripts are located in bob.paper.nir_patch_pooling.script--- and need to be run from the corresponding folder. The annotations for both datasets need to be precomputed (you may use the provided scripts); this is a one-time process.

For both datasets, you need to set the paths of the dataset and the corresponding annotation directory in the working environment. Bob provides a configuration mechanism to set such variables, which can be read by the experiment scripts during execution. Please use the following four commands to configure both datasets:

bob config set "bob.db.wmca_mask.directory" <path-of-WMCA-dataset-location>
bob config set "bob.db.wmca_mask.annotation_directory" <path-of-WMCA-annotation-location>

bob config set "bob.db.mlfp.directory" <path-of-MLFP-dataset-location>
bob config set "bob.db.mlfp.annotation_directory" <path-of-MLFP-annotation-location>
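Once set, the stored values can be checked with bob's configuration tool; this is a sketch assuming the conda environment created during installation is active:

```shell
# Print a single stored value to confirm the path was registered.
bob config get "bob.db.wmca_mask.directory"
bob config get "bob.db.mlfp.directory"

# Or dump the entire configuration file at once.
bob config show
```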

Downloading the face recognition CNN model

A pre-trained face recognition (FR) model, LightCNN-9, can be downloaded from here or from its own website. The location of this model should be set using bob's configuration mechanism with the following command:

bob config set "lightcnn9.model_directory" <path-of-the-model-directory>

Only the directory should be specified; do not include the model filename.

Executing Experiments for Detection of 3D Mask Attacks

Running the experiments

The spoof.py script from bob should be used to conduct the PAD experiments. For detailed information about this script, check the bob documentation. The configurations of the data preprocessor, feature extractor, and subsequent classifier are provided in the config directory of the package. Feature extraction is performed using the PatchPooling CNN. The configurations of the datasets, in terms of protocols and groups (subsets), are also provided in the config directory.

If you prefer the default configuration, you only need to provide the <output-directory> to store the results. To perform a PAD experiment on the WMCA dataset (using the grandtest protocol), run the command:

spoof.py wmca_mask.py patch_pooling_lr.py -s <output-directory> -vv

To perform PAD experiments for the different cross-validation (CV) protocols of the MLFP dataset, run the command:

spoof.py mlfp.py patch_pooling_lr.py --protocol cv<number> -s <output-directory> -vv

where <number> refers to the protocol corresponding to the specific CV fold/partition. The experimental setup consists of three folds numbered cv1, cv2, and cv3.
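Since the folds differ only in the protocol name, the three runs can be scripted in a single loop. This is a sketch: results is a hypothetical output root, and the echo makes it a dry run that just prints each command; remove the echo to actually launch the experiments:

```shell
#!/bin/sh
# Print (dry run) the spoof.py invocation for each MLFP CV fold.
# "results" is a placeholder output root; remove "echo" to execute.
for fold in cv1 cv2 cv3; do
    echo spoof.py mlfp.py patch_pooling_lr.py \
        --protocol "$fold" -s "results/mlfp_$fold" -vv
done
```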

Note: whenever you are working with already preprocessed data, you may use the corresponding flags of the spoof.py command to avoid re-running the preprocessing steps.

Evaluating the experiments

To evaluate the performance of the PAD experiments, you may use the command bob pad metrics. The command requires the locations of the score files of the dev and eval sets. These files are generated by the bob command, spoof.py, described in the previous section. For the default configuration, these files can be found in the folder hierarchy: <output-directory>/<protocol>/scores/

To generate the performance metrics, use the following command:

bob pad metrics -v -e <location-of-dev-file> <location-of-eval-file>

For cross-validation experiments, the performance metrics can be calculated a posteriori on the dev sets. In this case, provide the location of only the dev files to the evaluation command, and do not use the -e flag.
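Putting the two cases together, the evaluation commands could look as follows; the paths follow the default <output-directory>/<protocol>/scores/ hierarchy, while the output root and the score file names scores-dev/scores-eval are illustrative assumptions:

```shell
# WMCA grandtest: evaluate dev and eval score files together (-e flag).
bob pad metrics -v -e \
    output/grandtest/scores/scores-dev \
    output/grandtest/scores/scores-eval

# MLFP cross-validation: dev scores only, so the -e flag is omitted.
bob pad metrics -v output/cv1/scores/scores-dev
```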

Contact


For questions or to report issues with this software package, please contact our development mailing list.