Commit 4aef28ea authored by Ketan Kotwal's avatar Ketan Kotwal

Initial commit.

include README.rst bootstrap-buildout.py buildout.cfg COPYING version.txt requirements.txt
recursive-include doc *.py *.rst *.ico *.png
recursive-include bob/paper/facecsmad/database/lists/ *.lst
.. vim: set fileencoding=utf-8 :
.. image:: http://img.shields.io/badge/docs-stable-yellow.svg
:target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.facecsmad/master/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.svg
:target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.facecsmad/master/index.html
.. image:: https://gitlab.idiap.ch/bob/bob.paper.facecsmad/badges/master/build.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.facecsmad/commits/master
.. image:: https://gitlab.idiap.ch/bob/bob.paper.facecsmad/badges/master/coverage.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.facecsmad/commits/master
.. image:: https://img.shields.io/badge/gitlab-project-0000c0.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.facecsmad
.. image:: http://img.shields.io/pypi/v/bob.paper.facecsmad.svg
:target: https://pypi.python.org/pypi/bob.paper.facecsmad
=================================================
Face PAD for Silicone mask-based attack detection
=================================================
This package is part of the signal-processing and machine learning toolbox Bob_. It contains the source code to reproduce the following paper::

  *TBIOM2018*
  `<paper-details>`

If you use this package and/or its results, please cite the paper.
Installation
------------
The installation instructions are based on conda_ and work on **Linux systems
only**. Please `install conda`_ before continuing.

Once conda_ is installed, download the source code of this paper and
unpack it. Then, you can create and activate a conda environment with the
following commands::

  $ cd bob.paper.facecsmad
  $ conda env create -f environment.yml
  $ conda activate bob.paper.facecsmad
  $ buildout

This will install all the required software to reproduce this paper.
Optionally, the package can be installed into the environment by running::

  $ python setup.py install

Downloading the dataset
------------------------
The dataset **XCSMAD** used in this study should be downloaded.
(Alternatively, if you have the **BATL** dataset, it can be used directly, since XCSMAD is a subset of BATL.)
Upon downloading, you need to set the path to the database in a configuration file. Bob supports a configuration file (``~/.bob_bio_databases.txt``) in your home directory to specify where the
databases are located. Please specify the path to the database as below (by editing the file manually)::

  $ cat ~/.bob_bio_databases.txt
  [YOUR_BATL_DB_DIRECTORY] = <path-of-dataset-location>

The metadata used for XCSMAD (or its underlying BATL system) should be downloaded from Idiap using the command::

  $ bin/bob_dbmanage.py batl download

Downloading the face recognition CNN model
------------------------------------------
The pre-trained face recognition (FR) model ``LightCNN-9`` can be downloaded from
`this location <https://github.com/AlfredXiangWu/LightCNN>`_, or from its own website.
The path of the model should be specified in the ``MODEL_FILE`` parameter of the following two files, as shown below:

1. ``<package>/config/cnn_lr.py``
2. ``<package>/script/process_pics.py``
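
For example, the relevant line in each of these files would look as follows (the path shown is illustrative; point it to wherever you stored the downloaded checkpoint)::

  MODEL_FILE = "/path/to/LightCNN_9Layers_checkpoint.pth.tar"
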
Setting up annotations directory
--------------------------------

For each of the *xcsmad-* database configurations in ``<package>/config``, specify the location of the annotations directory. It **must** be the same for all channels.
If the annotations are pre-computed, provide their path. Otherwise, the annotations will be computed on
the first run, and re-used thereafter. A minimal sketch of this setting follows.
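
As a sketch (the variable name below follows the BATL database interface of ``bob.pad.face``; verify it against the *xcsmad-* configuration files shipped in this package), the setting could look like::

  # one common annotations directory for all channels (path is illustrative)
  annotations_temp_dir = "/path/to/annotations"
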
Generating the set of commands
------------------------------

The complete setup runs 20 experiments on individual single channels (4 channels × 5 experiments); 3 feature-fusion and 3 score-fusion experiments; multiple cross-validation experiments; and
a vulnerability analysis. To facilitate quick running and evaluation of the experiments, a simple script is provided to generate all the commands programmatically.
You can specify the base directory where all the results should be stored, and a few other parameters, in ``config.ini`` in the present folder (an illustrative excerpt follows). Run the python script ``generate_commands.py`` in this folder.
As a result, a new text file ``commands.txt`` will be generated in the same folder, containing the necessary commands. The commands are divided into 5 sections: (1) single channel PAD,
(2) feature fusion, (3) score fusion, (4) cross-validation (for the CNN+LR method on VIS data), and (5) vulnerability analysis.
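
An illustrative excerpt of ``config.ini`` (the section and key names below are placeholders; use the keys actually defined in the file shipped with this package)::

  [default]
  # base directory where all the results will be stored
  base_directory = /path/to/results
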
Running the experiments
-----------------------

1. Single channel PAD: Directly run the command ``spoof.py`` with the necessary parameters as generated in ``commands.txt`` (an illustrative invocation follows this list). It is advisable to first run the PCA+LDA experiment for any channel. The preprocessed data will be re-used for the other experiments on the same channel. If you wish to re-compute the preprocessed data for each experiment, edit the settings in ``config.ini``.
2. Feature fusion: The commands assume that the extracted features from the constituent channels are precomputed (and stored in the folders defined in ``commands.txt``). The commands include the feature-fusion script from the ``script`` folder, followed by training of the classifiers and their evaluation.
3. Score fusion: The commands assume that the scores from the constituent channels are precomputed (and stored in the folders defined in ``commands.txt``). The commands include the script to fuse the scores, followed by training of the classifiers and their evaluation.
4. Cross validation: The set of commands in ``commands.txt`` first extracts all the features at once. It then provides commands to train and evaluate the PAD methods for 5 folds.
5. Vulnerability Analysis: **TBD**
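
An illustrative invocation of a single channel PAD experiment (the exact arguments are those written to ``commands.txt`` by ``generate_commands.py``; the configuration names and folders below are placeholders)::

  $ bin/spoof.py <package>/config/xcsmad-color.py <package>/config/pca_lda.py \
      --sub-directory <base-directory>/color/pca_lda
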
Contact
-------
For questions or to report issues with this software package, contact our
development `mailing list`_.
.. Place your references here:
.. _bob: https://www.idiap.ch/software/bob
.. _installation: https://www.idiap.ch/software/bob/install
.. _mailing list: https://www.idiap.ch/software/bob/discuss
.. _bob package development: https://www.idiap.ch/software/bob/docs/bob/bob.extension/master/
.. _conda: https://conda.io
.. _install conda: https://conda.io/docs/install/quick.html#linux-miniconda-install
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
from . import script, database, config
def get_config():
"""Returns a string containing the configuration information.
"""
import bob.extension
return bob.extension.get_config(__name__)
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
from .pad_lda import PadLDA
# gets sphinx autodoc done right - don't remove it
def __appropriate__(*args):
"""Says object was actually declared here, an not on the import module.
Parameters:
*args: An iterable of objects to modify
    Resolves `Sphinx referencing issues
    <https://github.com/sphinx-doc/sphinx/issues/3048>`_
"""
for obj in args: obj.__module__ = __name__
__appropriate__(
PadLDA,
)
__all__ = [_ for _ in dir() if not _.startswith('_')]
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Implementation of PCA+LDA Algorithm for Video-PAD experiment.
@author: Ketan Kotwal
"""
# imports
import numpy as np
from bob.bio.base.algorithm import LDA
from bob.bio.video.utils import FrameContainer
from bob.pad.base.utils import convert_frame_cont_to_array, convert_list_of_frame_cont_to_array
import logging
logger = logging.getLogger(__name__)
#----------------------------------------------------------
class PadLDA(LDA):
"""
    This class implements a wrapper for video-based PAD using the LDA
    algorithm defined in ``bob.bio.base.algorithm``.
"""
    def __init__(self,
                 pca_subspace_dimension=0.80,
                 use_pinv=False,
                 one_class_pca=True,
                 **kwargs):
        """
        Init function

        Parameters
        ----------
        pca_subspace_dimension : int or float
            The dimension of the PCA subspace to be applied to the data
            beforehand. If provided as a float in the range [0, 1], it
            retains the number of eigenvectors for which the cumulative
            variance reaches the provided fraction of the total variance
            of the training data.
        use_pinv : bool
            Use the pseudo-inverse in the LDA computation.
        one_class_pca : bool
            If set to True, only the real-class data will be used to
            obtain the PCA projection.
        lda_subspace_dimension : not required
            In the PAD case, it always corresponds to the 2 classes, and
            is deduced internally by the algorithm.
        """
        self.pca_subspace_dimension = pca_subspace_dimension
        self.one_class_pca = one_class_pca
        # forward the parameters to the parent LDA algorithm (note:
        # use_pinv is passed through, rather than hard-coded to False)
        super(PadLDA, self).__init__(
            pca_subspace_dimension=pca_subspace_dimension,
            use_pinv=use_pinv, **kwargs)
#----------------------------------------------------------
def train_projector(self, training_features, projector_file):
        # convert the frame containers into numpy arrays; two sets of
        # features (real and attack) are prepared
training_features_formatted = []
if len(training_features) != 2:
raise ValueError("The given feature set is expected to have 2 classes, but it has %d" % (len(training_features)))
logger.info("Training projector with %d real and %d attack files", len(training_features[0]), len(training_features[1]))
for feature_list in training_features:
tmp_feature_list = convert_list_of_frame_cont_to_array(feature_list)
training_features_formatted.append(tmp_feature_list.astype(np.float64))
super(PadLDA, self).train_projector(training_features_formatted, projector_file)
#----------------------------------------------------------
    def project(self, feature):
        # the input must be a frame container or a numpy array
        if not isinstance(feature, (FrameContainer, np.ndarray)):
            raise ValueError("The given feature is not appropriate")
        # convert a frame container into a numpy array of frames
        if isinstance(feature, FrameContainer):
            features_array = convert_frame_cont_to_array(feature)
        else:
            features_array = np.atleast_2d(feature)
        features_array = features_array.astype(np.float64)
        # project each frame individually
        projected = [super(PadLDA, self).project(x) for x in features_array]
        return projected
#----------------------------------------------------------
def score(self, toscore):
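        # the first dimension of each projected frame serves as the PAD score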
return 1.0*toscore[:, 0]
#----------------------------------------------------------
def score_for_multiple_projections(self, toscore):
return self.score(toscore)
#----------------------------------------------------------
def _train_pca(self, training_set):
if self.one_class_pca:
training_set = [training_set[0]]
logger.info("Obtaining PCA projections only from real class")
else:
logger.info("Obtaining PCA projections from both real and attack classes")
        # the stacking of data is not needed, since the parent class handles it
machine = super(PadLDA, self)._train_pca(training_set)
return machine
#----------------------------------------------------------
def read_toscore_object(self, toscore_object_file):
        # for compatibility
return self.read_feature(toscore_object_file)
#----------------------------------------------------------
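# Aside (illustration only, not part of the package): a minimal sketch of
# how a fractional pca_subspace_dimension maps to the number of retained
# eigenvectors, as described in the PadLDA docstring above.
import numpy as np

def n_retained_components(eigenvalues, fraction=0.80):
    """Number of leading eigenvectors whose cumulative variance reaches
    the given fraction of the total variance."""
    ratios = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    return int(np.searchsorted(ratios, fraction) + 1)

# example: the first 2 components carry (5.0 + 2.0) / 8.75 = 0.80 of the
# total variance
print(n_retained_components(np.array([5.0, 2.0, 1.0, 0.5, 0.25])))  # -> 2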
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run CNN + LR classifier for Face PAD on
XCSMAD. The preprocessor is not included here, and should be handled outside
this configuration.
"""
#---
# sub-directory where results will be placed
sub_directory = "cnn_lr"
#---
# define extractor:
from bob.paper.facecsmad.extractor import LightCNNExtractor
from bob.bio.video.extractor import Wrapper
MODEL_FILE = "/idiap/temp/kkotwal/jan08/color/models1/LightCNN_9Layers_checkpoint.pth.tar"
#MODEL_FILE = "<Enter the path of your LightCNN-9 model here>"
extractor = Wrapper(LightCNNExtractor(model_file=MODEL_FILE))
#------
# define algorithm
from bob.pad.base.algorithm import LogRegr
C = 1.0 # regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True
algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
#=========================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run feature-fusion classification for Face PAD on
XCSMAD. It consists of a dummy preprocessor and extractor, and an LR classifier (algorithm).
Prior to running this experiment, ensure that the features of data from individual
channels have been fused using ``feature_fusion.py`` included in this package.
The extracted-directory for the present experiment should point to the location
where outputs of feature_fusion are stored.
"""
#----------------------------------------------------------
# sub-directory where results will be placed
sub_directory = "feature_fusion"
#----------------------------------------------------------
# define flags related to feature fusion experiment
skip_preprocessing = True
skip_extractor_training = True
skip_extraction = True
#----------------------------------------------------------
# define dummy preprocessor + extractor:
from bob.bio.video.extractor import Wrapper
from bob.pad.base.test.dummy.preprocessor import DummyPreprocessor
from bob.pad.base.test.dummy.extractor import DummyExtractor
from bob.bio.video.utils import FrameSelector
preprocessor = DummyPreprocessor()
extractor = Wrapper(DummyExtractor(requires_training=False), frame_selector=FrameSelector(selection_style="all"))
#----------------------------------------------------------
# define algorithm:
from bob.pad.base.algorithm import LogRegr
C = 1 # The regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True # Return one score per frame
algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
#----------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run IQM + LR classifier for Face PAD on
XCSMAD. The preprocessor is not included here, and should be handled outside
this configuration.
"""
#---
# sub-directory where results will be placed
sub_directory = "iqm_lr"
#---
# define extractor
from bob.paper.facecsmad.extractor import QualityMeasuresGray
from bob.bio.video.extractor import Wrapper
DTYPE = "float64"
extractor = Wrapper(QualityMeasuresGray(input_dtype=DTYPE, output_dtype=DTYPE))
"""
The extractor computes 18 image-quality features proposed by Galbally et al.
"""
#----
# define algorithm
from bob.pad.base.algorithm import LogRegr
C = 1. # regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True # Return one score per frame
algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
#=========================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run LBP + LR classifier for Face PAD on
XCSMAD. The preprocessor is not included here, and should be handled outside
this configuration.
"""
#---
# sub-directory where results will be placed
sub_directory = "lbp_lr"
#---
# define extractor:
from bob.pad.face.extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper
LBPTYPE = "uniform"
ELBPTYPE = "regular"
RAD = 1
NEIGHBORS = 8
CIRC = False
DTYPE = None
extractor = Wrapper(LBPHistogram(
lbptype=LBPTYPE,
elbptype=ELBPTYPE,
rad=RAD,
neighbors=NEIGHBORS,
circ=CIRC,
dtype=DTYPE))
#----
# define algorithm
from bob.pad.base.algorithm import LogRegr
C = 1 # regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True # Return one score per frame
algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
#=========================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run LBP + SVM classifier for Face PAD on
XCSMAD. The preprocessor is not included here, and should be handled outside
this configuration.
"""
#---
# sub-directory where results will be placed
sub_directory = "lbp_svm"
#---
# define extractor:
from bob.bio.video.utils import FrameSelector
from bob.pad.face.extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper
LBPTYPE = "uniform"
ELBPTYPE = "regular"
RAD = 1
NEIGHBORS = 8
CIRC = False
_frame_selector = FrameSelector(selection_style="all")
extractor = Wrapper(LBPHistogram(
    lbptype=LBPTYPE,
    elbptype=ELBPTYPE,
    rad=RAD,
    neighbors=NEIGHBORS,
    circ=CIRC,
    dtype=None),
    frame_selector=_frame_selector)
#----
# define algorithm
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = "C_SVC"
KERNEL_TYPE = "RBF"
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
'cost': [2**P for P in range(-3, 14, 2)],
'gamma': [2**P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True
REDUCED_TRAIN_DATA_FLAG=False
algorithm = SVM(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
n_samples=N_SAMPLES,
trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
mean_std_norm_flag=MEAN_STD_NORM_FLAG,
reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
"""
The SVM algorithm with RBF kernel is used to classify the data into *real* and *attack* classes.
The data is also mean-std normalized, ``mean_std_norm_flag = True``.
"""
#-------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains configurations to run PCA + LDA classifier for Face PAD on
XCSMAD. The preprocessor is not included here, and should be handled outside
this configuration.
"""
#----------------------------------------------------------
# sub-directory where results will be placed
sub_directory = "pca_lda"
#----------------------------------------------------------
# define extractor:
from bob.bio.base.extractor import Linearize
from bob.bio.video.extractor import Wrapper
from bob.bio.video.utils import FrameSelector
extractor = Wrapper(Linearize(), frame_selector=FrameSelector(selection_style="all"))
"""
This extractor simply linearizes (flattens) the image into a 1-D array.
"""
#----------------------------------------------------------
# define algorithm:
from bob.paper.facecsmad.algorithm import PadLDA
_pca_subspace_dimension = 0.80
algorithm = PadLDA(pca_subspace_dimension=_pca_subspace_dimension)
"""
The LDA algorithm has built-in PCA computation, where the dimensionality of the PCA subspace can be specified.
"""
#----------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Configuration for ``preprocessor`` for PAD experiments on VIS channel of XCSMAD.