
Domain Guided Pruning of Neural Networks: Application on Face PAD

This package is part of the signal-processing and machine learning toolbox Bob. It contains the instructions to reproduce the following paper:

A. Mohammadi, S. Bhattacharjee, and S. Marcel,
“Domain Adaptation For Generalization Of Face Presentation Attack Detection
In Mobile Settings With Minimal Information,” presented at ICASSP 2020.

Installation

The experiments can only be executed on a 64-bit Linux machine. Install conda and run the steps below:

$ git clone https://gitlab.idiap.ch/bob/bob.paper.icassp2020_domain_guided_pruning.git
$ cd bob.paper.icassp2020_domain_guided_pruning
$ conda create -n pruning --file package-list.txt
$ conda activate pruning
$ pip install --editable .
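
If the installation succeeded, the bob command-line tool used throughout this guide should now be available in the environment. As a quick, optional sanity check, you can list its sub-commands:

$ bob --help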

Preparing the Data

Download the SWAN, WMCA (referred to as the BATL dataset in the code), Replay-Mobile, OULU-NPU, and IJB-C datasets.

Tell Bob where the files are located using the commands below, replacing each path with its actual location on your machine:

$ bob config set bob.db.oulunpu.directory /path/to/oulunpu/directory
$ bob config set bob.db.replaymobile.directory /path/to/replaymobile/directory
$ bob config set bob.db.swan.directory /path/to/swan/directory
$ bob config set bob.db.batl.directory /path/to/batl/directory
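
You can verify that the values were stored correctly by printing the configuration back with the same bob config tool:

$ bob config show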

Download the annotations and the models that are used for training with the command below:

$ python ./download_all.py

Then, similarly, configure Bob with their locations:

$ bob config set bob.db.oulunpu.annotation_dir "`pwd`/downloads/oulunpu-mtcnn-annotations"
$ bob config set bob.db.replaymobile.annotation_dir "`pwd`/downloads/replaymobile-mtcnn-annotations"
$ bob config set bob.db.swan.annotation_dir "`pwd`/downloads/swan-mtcnn-annotations"
$ bob config set bob.db.batl.annotations_50 "`pwd`/downloads/WMCA_annotations_50_frames"
$ bob config set bob.learn.tensorflow.densenet161 "`pwd`/downloads/densenet-161-imagenet"

Finally, tell Bob where to put large temporary files:

$ bob config set temp /path/to/temporary/folder

Prepare the IJB-C images

In the experiments, we used a hand-picked subset of IJB-C images, selected for their quality, for the pruning step. The list of selected images is provided in this package; run the following script to prepare the images for the experiments:

$ bob prepare-ijbc-images /path/to/IJB-C_database/IJB/IJB-C/images/img /temp_folder/ijbc-cleaned/faces-224

Note that the first argument to the script must be the path to the images/img folder of the IJB-C dataset.

Once that is done, configure Bob with the location of the processed images:

$ bob config set bob.paper.icassp2020_domain_guided_pruning.ijbc_cleaned /temp_folder/ijbc-cleaned/faces-224
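
As an optional sanity check, a few processed face images should now be visible in that folder (substitute your own path):

$ ls /temp_folder/ijbc-cleaned/faces-224 | head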

Experiments

The experiments are divided into 3 parts:

  1. Preparing data and training the DeepPixBis model on OULU-NPU.
  2. Computing the feature divergences between OULU-NPU and the other four datasets, and identifying the 20% of filters to prune.
  3. Re-training the DeepPixBis model on OULU-NPU with some of its filters pruned.

Part 1

Run ./run_part1.sh to generate the list of jobs. Then run:

$ jman --local list --print-dependencies  # inspect the job list
$ jman --local run-scheduler --die-when-finished --parallel 2

The training requires at least 2 jobs running in parallel, hence the --parallel 2 option above.

Inspect the ./run_part1.sh file to see how experiments are executed.

Part 2

Compute the feature divergences using the code below:

$ bob feature-divergence -v \
   -s bob.paper.icassp2020_domain_guided_pruning.oulunpu \
   -t bob.paper.icassp2020_domain_guided_pruning.replaymobile \
   -m bob.paper.icassp2020_domain_guided_pruning.deep_pix_bis_features \
   --output results/features_divergence/oulunpu_vs_replaymobile.npy

$ bob feature-divergence -v \
   -s bob.paper.icassp2020_domain_guided_pruning.oulunpu \
   -t bob.paper.icassp2020_domain_guided_pruning.swan \
   -m bob.paper.icassp2020_domain_guided_pruning.deep_pix_bis_features \
   --output results/features_divergence/oulunpu_vs_swan.npy

$ bob feature-divergence -v \
   -s bob.paper.icassp2020_domain_guided_pruning.oulunpu \
   -t bob.paper.icassp2020_domain_guided_pruning.batl \
   -m bob.paper.icassp2020_domain_guided_pruning.deep_pix_bis_features \
   --output results/features_divergence/oulunpu_vs_batl.npy

$ bob feature-divergence -v \
   -s bob.paper.icassp2020_domain_guided_pruning.oulunpu \
   -t bob.paper.icassp2020_domain_guided_pruning.ijbc \
   -m bob.paper.icassp2020_domain_guided_pruning.deep_pix_bis_features \
   --output results/features_divergence/oulunpu_vs_ijbc.npy

Then, find the 20% of filters that contribute most to the feature divergences:

$ bob find-filters results/{features_divergence,filters}/oulunpu_vs_replaymobile.npy
$ bob find-filters results/{features_divergence,filters}/oulunpu_vs_swan.npy
$ bob find-filters results/{features_divergence,filters}/oulunpu_vs_batl.npy
$ bob find-filters results/{features_divergence,filters}/oulunpu_vs_ijbc.npy
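
Note that each command above uses shell brace expansion, so bob find-filters actually receives two arguments: the divergence file computed earlier and, presumably as the output, the corresponding file under results/filters. For example, the first command expands to:

$ bob find-filters results/features_divergence/oulunpu_vs_replaymobile.npy results/filters/oulunpu_vs_replaymobile.npy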

The commands for this part are also available in ./run_part2.sh, which adds the jobs to jman. If you use that script instead, launch the jobs with:

$ jman --local run-scheduler --die-when-finished --parallel 2

Part 3

Re-train the DeepPixBis model with the selected filters pruned and its initial layers frozen:

$ ./run_part3.sh
$ jman --local list --print-dependencies  # inspect the job list
$ jman --local run-scheduler --die-when-finished --parallel 2

As in part 1, the training requires at least 2 jobs running in parallel.

Inspect the ./run_part3.sh file to see how experiments are executed.

Evaluation

Once the experiments are finished, the models can be evaluated using:

$ ./evaluate.sh

If you cannot run the experiments, you may extract the score files provided in results.tar.xz and run the evaluation on those scores.
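
For example, assuming the archive is at the repository root (GNU tar detects the xz compression automatically):

$ tar -xf results.tar.xz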

Contact

For questions about this software package or to report issues, please contact our development mailing list.