Source code package for Bob's ICML'17 paper
This package contains the source code associated with our contribution to the Reproducibility in Machine Learning Research Workshop at the International Conference on Machine Learning (ICML 2017). A copy of the final paper can be obtained from our publication server.
Installation
The installation instructions are based on conda and work on Linux systems only (some dependencies are currently unavailable for other operating systems). Install conda before continuing.
Once you have installed conda, download the source code of this paper and unpack it. Then, you can create a conda environment with the following command:
$ cd bob.paper.icml2017
$ conda env create -f environment.yml
$ source activate bob.paper.icml2017 # activate the environment
$ python -c "import caffe; import bob.bio.base" # test the installation
This will install all the required software to reproduce this paper.
The VGG network
You need to download the pre-trained VGG network to continue. This can be done with the following commands:
$ wget http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_caffe.tar.gz
$ tar -xzf vgg_face_caffe.tar.gz
The default VGG network requires some changes to work as expected. Please either manually remove the last 4 layers in vgg_face_caffe/VGG_FACE_deploy.prototxt, or copy the pre-adjusted network from this directory:
$ cp VGG_FACE_deploy.prototxt vgg_face_caffe/
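If you prefer to script the change instead of editing the file by hand, the following is a minimal sketch, not part of this package, that uses the caffe protobuf bindings installed in the conda environment. The file path and the count of 4 layers follow the instruction above; whether the prototxt uses the field "layer" or "layers" depends on its version.

# Hypothetical helper (not shipped with this package): drop the last 4 layer
# definitions from the VGG deploy prototxt instead of editing it by hand.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

path = "vgg_face_caffe/VGG_FACE_deploy.prototxt"

net = caffe_pb2.NetParameter()
with open(path) as f:
    text_format.Merge(f.read(), net)

# Older prototxt files use the repeated field "layers", newer ones use "layer".
layer_field = net.layer if len(net.layer) else net.layers
del layer_field[-4:]  # remove the last 4 layers, as described above

with open(path, "w") as f:
    f.write(text_format.MessageToString(net))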
Database
Experiments will be run using the MOBIO database.
Please download the database from the MOBIO project webpage after signing the End User License Agreement (EULA). Afterward, please update the MOBIO_IMAGE_DIRECTORY and MOBIO_ANNOTATION_DIRECTORY in the mobio.py file to point to where you have installed the MOBIO database. For example, if you have the MOBIO database in the /databases/MOBIO directory on your machine:
mobio_image_directory = "/databases/MOBIO/IMAGES_PNG"
mobio_annotation_directory = "/databases/MOBIO/IMAGE_ANNOTATIONS"
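As an optional sanity check, the short snippet below (a hypothetical helper, not part of this package) verifies that the two configured directories exist before any experiment is launched. The paths repeat the example values above.

# Hypothetical sanity check (not part of this package): confirm that the
# configured MOBIO directories exist before launching any experiment.
import os

mobio_image_directory = "/databases/MOBIO/IMAGES_PNG"
mobio_annotation_directory = "/databases/MOBIO/IMAGE_ANNOTATIONS"

for path in (mobio_image_directory, mobio_annotation_directory):
    if not os.path.isdir(path):
        raise RuntimeError("MOBIO directory not found: %s" % path)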
Experiments
We have two experiments, which will be run using the bob.bio toolchain. Both are set up using a single configuration file, where all details of the experiment are stored. Experiments will run on the hand-labeled images of the MOBIO database, using the male evaluation protocol.
Deep Neural Networks
The first experiment uses the pre-trained VGG Face deep neural network. This network was trained by the VGG group on millions of images of thousands of identities. To run the experiment, please go to the command line and run:
$ verify.py config_vgg.py
By default, the script will use 8 parallel processes to run the experiment. You can decrease the number of parallel processes in config_vgg.py, or remove the parallel = 8 line completely, as sketched below.
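For illustration, the relevant line in config_vgg.py could be adjusted as follows; only the parallelization setting is shown here, the shipped file contains the full experiment definition.

# Illustrative excerpt of config_vgg.py: only the parallelization setting is
# shown; the shipped file contains the full experiment definition.
parallel = 4  # lowered from the default of 8; delete this line to run without parallelization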
Inter-Session Variability modeling
The second experiment runs inter-session variability (ISV) modeling on DCT block features. Instead of using a pre-trained ISV model, we use the bob.bio toolchain to train the ISV model on the training set of the MOBIO dataset, which contains 9600 images of 50 identities.
To run the experiment, please go to the command line and run:
$ verify_isv.py config_isv.py
By default, the script will use 32 parallel processes to run the experiment. If you have fewer processors to spare, please reduce the number of parallel processes, for example as shown below. In any case, please expect a runtime of several hours, especially for training the ISV model.
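The same kind of adjustment applies to config_isv.py; the value of 16 below is only an example.

# Illustrative excerpt of config_isv.py: lower the parallelization to match
# the number of cores you can actually spare (the default is 32).
parallel = 16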
Evaluation
The experiments produce score files, which can by default be found in the results directory. To evaluate the experiments, we use the evaluate.py script. For example, to plot the results of the two experiments into a ROC curve and compute the HTER on the eval set, we can call:
$ evaluate.py \
--directory ./results \
--dev-files VGG/male/nonorm/scores-dev ISV/male/nonorm/scores-dev \
--eval-files VGG/male/nonorm/scores-eval ISV/male/nonorm/scores-eval \
--labels VGG ISV --title "MOBIO male dev" "MOBIO male eval" \
--roc ROC.pdf \
--criterion EER \
-vv
The result of this command should be the plot shown in Figure 3(a) of the paper, and the following table (cf. Figure 3(b) of the paper):
Algorithm | EER | HTER
---|---|---
VGG | 6.548% | 5.136%
ISV | 3.294% | 7.256%