Commit 76da662e authored by Ketan Kotwal

Package for Makeup detection in AIM.

*~
*.swp
*.pyc
bin
eggs
parts
.installed.cfg
.mr.developer.cfg
*.egg-info
src
develop-eggs
sphinx
dist
include README.rst bootstrap-buildout.py buildout.cfg COPYING version.txt requirements.txt
recursive-include doc *.py *.rst *.ico *.png
recursive-include bob/paper/makeup_aim/database/lists/ *.lst
.. vim: set fileencoding=utf-8 :
.. image:: http://img.shields.io/badge/docs-stable-yellow.svg
:target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.makeup_aim/master/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.svg
:target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.makeup_aim/master/index.html
.. image:: https://gitlab.idiap.ch/bob/bob.paper.makeup_aim/badges/master/build.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.makeup_aim/commits/master
.. image:: https://gitlab.idiap.ch/bob/bob.paper.makeup_aim/badges/master/coverage.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.makeup_aim/commits/master
.. image:: https://img.shields.io/badge/gitlab-project-0000c0.svg
:target: https://gitlab.idiap.ch/bob/bob.paper.makeup_aim
.. image:: http://img.shields.io/pypi/v/bob.paper.makeup_aim.svg
:target: https://pypi.python.org/pypi/bob.paper.makeup_aim
====================================================================================================
Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features
====================================================================================================
This package is part of the signal-processing and machine learning toolbox Bob_. It contains the source code to reproduce the following paper::
*TBIOM2019*
"<paper-details>"
If you use this package and/or its results, please cite the paper.
Installation
------------
The installation instructions are based on conda_ and work on **Linux systems
only**. Please `install conda`_ before continuing.
Once you have installed conda, download the source code of this paper and
unpack it. Then, you can create a conda environment with the following
commands::
$ cd bob.paper.makeup_aim
$ conda env create -f environment.yml
$ conda activate bob.paper.makeup_aim
$ buildout
This will install all the required software to reproduce this paper.
Optionally, the package can be installed into the environment by running::
$ python setup.py install
Downloading the dataset
------------------------
The experiments described in this paper are based on 4 makeup datasets.
The first three datasets, **YMU**, **MIW**, and **MIFS**, should be obtained from
http://www.antitza.com/makeup-datasets.html by contacting their owners.
These datasets may be provided in different data structures or file formats. We provide a script for each dataset that
converts it into a set of individual samples stored as *.hdf5* files, i.e., into the format expected by this package.
These scripts are located in ``bob.paper.makeup_aim.misc`` and need to be run from the corresponding folder.
For each script, the command should be specified as::
$ python generate_<db-name>_db.py <original-data-directory> <output-directory>
The formatted dataset will be stored in the ``output-directory``.
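To sanity-check a converted sample, you can list the datasets stored inside one of the
generated files (a minimal sketch using ``h5py``; the file name is hypothetical and the
exact layout depends on the conversion script)::

import h5py

# print the name of every dataset stored in one converted sample
with h5py.File("<output-directory>/sample_001.hdf5", "r") as f:
    f.visit(print)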
The dataset **AIM** used in this study should be downloaded from Idiap's server.
For all 4 datasets, you need to set the path to the dataset in the configuration file. Bob supports a configuration file (``~/.bob_bio_databases.txt``) in your home directory that specifies where the
databases are located. Edit this file manually and add one entry per dataset (AIM, YMU, MIW, and MIFS) in the following format::
$ cat ~/.bob_bio_databases.txt
[<dataset-name-in-caps>_DIRECTORY] = <path-of-dataset-location>
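For example (with hypothetical paths)::

[AIM_DIRECTORY] = /path/to/aim
[YMU_DIRECTORY] = /path/to/ymu
[MIW_DIRECTORY] = /path/to/miw
[MIFS_DIRECTORY] = /path/to/mifs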
The metadata used for AIM (or its underlying BATL system) should be downloaded from Idiap using the command::
$ bin/bob_dbmanage.py batl download
Downloading the face recognition CNN model
-------------------------------------------
The pre-trained ``LightCNN-9`` face recognition (FR) model can be downloaded from
https://github.com/AlfredXiangWu/LightCNN (or from the project's own website).
The location of this model should be stored in the ``.bobrc`` file in your ``$HOME`` directory, in JSON (key: value) format, as follows::
{
    "LIGHTCNN9_MODEL_DIRECTORY": "<path-of-the-directory>"
}
Only the directory should be specified. Do *not* include the model name.
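You can verify the setting from Python; the extractor configurations in this package
read this key via ``bob.extension.rc`` and append the checkpoint file name shown below::

import os
from bob.extension import rc

model_dir = rc.get("LIGHTCNN9_MODEL_DIRECTORY")
model_file = os.path.join(model_dir, "LightCNN_9Layers_checkpoint.pth.tar")
print(model_file, os.path.exists(model_file))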
Setting up annotation directories
---------------------------------------
You should specify the annotation directory for each dataset in the configuration file (``~/.bob_bio_databases.txt``).
To generate annotations for the YMU, MIW, and MIFS datasets, use the script ``annotate_db.py`` provided in this package.
The images in the YMU and MIW datasets have already been cropped to the face region; hence, the face detector used in our work
is sometimes unable to localize all facial landmarks (required for subsequent alignment). It is therefore a good idea
to pad the face image before detecting the facial landmarks. You should provide this padding width as a parameter to the ``annotate_db.py`` script.
The padding is temporary: it does not alter the images stored in the dataset, and the computed annotations are shifted back to eliminate the effect of padding.
The command has the following syntax::
$ python bin/annotate_db.py <dataset-directory> <annotation-directory> <padding-width>
Here, the ``dataset-directory`` is the same as the directory where the generated datasets have been stored.
The annotation directory will contain the computed annotations. Its path, for each dataset, should be stored in
the configuration file (``~/.bob_bio_databases.txt``), similar to the previous step. The entries have the following format::
$ cat ~/.bob_bio_databases.txt
[<dataset-name-in-caps>_ANNOTATION_DIRECTORY] = <path-of-annotation-directory>
For the experiments conducted in this work, ``padding-width`` was set to 25, 25, and 0 for YMU, MIW, and MIFS, respectively.
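The padding logic is essentially the following (a minimal sketch with a hypothetical
``detect_landmarks`` function; the actual script handles this internally)::

import numpy as np

def annotate_with_padding(image, detect_landmarks, pad):
    # pad the image on all sides so the detector sees a margin around the face
    padded = np.pad(image, pad, mode="edge")
    landmarks = detect_landmarks(padded)  # e.g. {"reye": (y, x), ...}
    # shift coordinates back so they refer to the original, un-padded image
    return {k: (y - pad, x - pad) for k, (y, x) in landmarks.items()}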
You do not need to compute annotations for the AIM dataset; just set up its annotation directory in the configuration file.
The annotations will be computed and stored when the experiment is executed for the first time, and re-used
for subsequent runs.
Generating the set of commands
---------------------------------------
The complete setup comprises 5 makeup-detection experiments. To facilitate quick running and evaluation of the experiments, a simple script is provided to programmatically generate all commands.
You can specify the ``base directory`` where all the results should be stored, and a few other parameters, in ``config.ini`` in the present folder.
Then run the Python script ``generate_commands.py`` in the same folder.
As a result, a new text file ``commands.txt`` containing the necessary commands will be generated in the same folder. The commands are divided into 5 sections: (1) TBD: Vulnerability,
(2) AIM PAD, (3) YMU Cross-validation, (4) Cross-dataset (training on YMU), and (5) Cross-dataset (training on MIFS).
Running the experiments
---------------------------------------
Run the commands from ``commands.txt`` to execute the experiments, and to evaluate and plot their results.
Contact
-------
For questions or reporting issues to this software package, contact our
development `mailing list`_.
.. Place your references here:
.. _bob: https://www.idiap.ch/software/bob
.. _installation: https://www.idiap.ch/software/bob/install
.. _mailing list: https://www.idiap.ch/software/bob/discuss
.. _bob package development: https://www.idiap.ch/software/bob/docs/bob/bob.extension/master/
.. _conda: https://conda.io
.. _install conda: https://conda.io/docs/install/quick.html#linux-miniconda-install
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
from . import script, database, config
def get_config():
    """Returns a string containing the configuration information."""
    import bob.extension
    return bob.extension.get_config(__name__)
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
#!/usr/bin/env python
"""
AIM database for makeup detection. Default configuration for grandtest protocol
"""
from bob.paper.makeup_aim.database import AIMDatabase
ORIGINAL_DIRECTORY = "[AIM_DIRECTORY]"
ORIGINAL_EXTENSION = ".h5"
ANNOTATION_DIRECTORY = "[AIM_ANNOTATION_DIRECTORY]"
PROTOCOL = "grandtest"
database = AIMDatabase(
    protocol=PROTOCOL,
    original_directory=ORIGINAL_DIRECTORY,
    original_extension=ORIGINAL_EXTENSION,
    annotation_directory=ANNOTATION_DIRECTORY,
    training_depends_on_protocol=True,
)
groups = ["train", "dev", "eval"]
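# Example (not part of the original configuration): assuming the query API of
# bob.pad.base's FileListPadDatabase, the number of samples per group can be
# inspected with:
#
#   for group in groups:
#       print(group, len(database.objects(groups=group, protocol=PROTOCOL)))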
#----------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains the configuration to run the KAD method for face PAD
toward the detection of makeup.
"""
#---
# sub-directory where results will be placed
sub_directory = "expt_kad"
#---
# define preprocessor:
from bob.pad.face.preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
#---
# parameters and constants
FACE_SIZE = 128
RGB_OUTPUT_FLAG = False
USE_FACE_ALIGNMENT = None
MAX_IMAGE_SIZE = None
FACE_DETECTION_METHOD = None
MIN_FACE_SIZE = 50
_image_preprocessor = FaceCropAlign(
    face_size=FACE_SIZE,
    rgb_output_flag=RGB_OUTPUT_FLAG,
    use_face_alignment=USE_FACE_ALIGNMENT,
    max_image_size=MAX_IMAGE_SIZE,
    face_detection_method=FACE_DETECTION_METHOD,
    min_face_size=MIN_FACE_SIZE,
)
_frame_selector = FrameSelector(selection_style="spread")
preprocessor = Wrapper(preprocessor=_image_preprocessor, frame_selector=_frame_selector)
#--------------------------------------
# define extractor:
from bob.paper.makeup_aim.extractor import Makeup_KAD
from bob.bio.video.extractor import Wrapper
extractor = Wrapper(Makeup_KAD())
#--------------------------------------
# define algorithm
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = "C_SVC"
KERNEL_TYPE = "LINEAR"
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
    "cost": [2**P for P in range(-3, 14, 2)],
    "gamma": [2**P for P in range(-15, 0, 2)],
}
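# The comprehensions above expand to 9 cost values (2**-3, 2**-1, ..., 2**13)
# and 8 gamma values (2**-15, 2**-13, ..., 2**-1), i.e. the SVM trainer
# grid-searches 72 (cost, gamma) combinations.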
MEAN_STD_NORM_FLAG = False # do not normalize
FRAME_LEVEL_SCORES_FLAG = True
REDUCED_TRAIN_DATA_FLAG = False
algorithm = SVM(
    machine_type=MACHINE_TYPE,
    kernel_type=KERNEL_TYPE,
    n_samples=N_SAMPLES,
    trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
    mean_std_norm_flag=MEAN_STD_NORM_FLAG,
    reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
    frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
)
#-------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains the configuration to run the FI-CNN + SVM classifier for face PAD
toward the detection of makeup.
"""
#----------------------------------------------------------
# sub-directory where results will be placed
sub_directory = "ficnn_svm"
#----------------------------------------------------------
# define preprocessor:
from bob.pad.face.preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
# parameters and constants
FACE_SIZE = 128
RGB_OUTPUT_FLAG = False
USE_FACE_ALIGNMENT = True
ALIGNMENT_TYPE = "lightcnn"
MAX_IMAGE_SIZE = None
FACE_DETECTION_METHOD = None
MIN_FACE_SIZE = 50
_image_preprocessor = FaceCropAlign(
    face_size=FACE_SIZE,
    rgb_output_flag=RGB_OUTPUT_FLAG,
    use_face_alignment=USE_FACE_ALIGNMENT,
    alignment_type=ALIGNMENT_TYPE,
    max_image_size=MAX_IMAGE_SIZE,
    face_detection_method=FACE_DETECTION_METHOD,
    min_face_size=MIN_FACE_SIZE,
)
_frame_selector = FrameSelector(selection_style="spread")
preprocessor = Wrapper(preprocessor=_image_preprocessor, frame_selector=_frame_selector)
#----------------------------------------------------------
# define extractor:
from bob.paper.makeup_aim.extractor import FICNN
from bob.bio.video.extractor import Wrapper
from bob.extension import rc
import os
_model_dir = rc.get("LIGHTCNN9_MODEL_DIRECTORY")
_model_name = "LightCNN_9Layers_checkpoint.pth.tar"
_model_file = os.path.join(_model_dir, _model_name)
if not os.path.exists(_model_file):
    print("Error: Could not find the LightCNN-9 model at [{}].\n"
          "Please follow the download instructions from README".format(_model_dir))
    exit(1)  # non-zero status: the model is required to build the extractor
extractor = Wrapper(FICNN(model_file=_model_file))
#----------------------------------------------------------
# define algorithm
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = "C_SVC"
KERNEL_TYPE = "RBF"
N_SAMPLES = 10000
MEAN_STD_NORM_FLAG = False # do not normalize
FRAME_LEVEL_SCORES_FLAG = True
REDUCED_TRAIN_DATA_FLAG = False
algorithm = SVM(
    machine_type=MACHINE_TYPE,
    kernel_type=KERNEL_TYPE,
    n_samples=N_SAMPLES,
    mean_std_norm_flag=MEAN_STD_NORM_FLAG,
    reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
    frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
)
"""
The SVM algorithm with RBF kernel is used to classify the data into *real* and *attack* classes.
The data is not mean-std normalized (``mean_std_norm_flag = False``).
"""
#----------------------------------------------------------
#!/usr/bin/env python
"""
MIFS database for makeup detection. Default configuration for grandtest protocol
"""
from bob.paper.makeup_aim.database import MIFSDatabase
ORIGINAL_DIRECTORY = "[MIFS_DIRECTORY]"
ORIGINAL_EXTENSION = ".hdf5"
ANNOTATION_DIRECTORY = "[MIFS_ANNOTATION_DIRECTORY]"
PROTOCOL = "grandtest"
database = MIFSDatabase(
    protocol=PROTOCOL,
    original_directory=ORIGINAL_DIRECTORY,
    original_extension=ORIGINAL_EXTENSION,
    training_depends_on_protocol=True,
    annotation_directory=ANNOTATION_DIRECTORY,
)
groups = ["train"]
#----------------------------------------------------------
#!/usr/bin/env python
"""
MIW database for makeup detection.
"""
from bob.paper.makeup_aim.database import MIWDatabase
ORIGINAL_DIRECTORY = "[MIW_DIRECTORY]"
ORIGINAL_EXTENSION = ".hdf5"
ANNOTATION_DIRECTORY = "[MIW_ANNOTATION_DIRECTORY]"
PROTOCOL = "grandtest"
database = MIWDatabase(
    protocol=PROTOCOL,
    original_directory=ORIGINAL_DIRECTORY,
    original_extension=ORIGINAL_EXTENSION,
    training_depends_on_protocol=True,
    annotation_directory=ANNOTATION_DIRECTORY,
)
groups = ["train"]
#----------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains the configuration to run the P-CNN + SVM classifier for face PAD
toward the detection of makeup.
"""
#----------------------------------------------------------
# sub-directory where results will be placed
sub_directory = "pcnn_svm"
#----------------------------------------------------------
# define preprocessor:
from bob.pad.face.preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
# parameters and constants
FACE_SIZE = 128
RGB_OUTPUT_FLAG = False
USE_FACE_ALIGNMENT = True
ALIGNMENT_TYPE = "lightcnn"
MAX_IMAGE_SIZE = None
FACE_DETECTION_METHOD = None
MIN_FACE_SIZE = 50
_image_preprocessor = FaceCropAlign(
    face_size=FACE_SIZE,
    rgb_output_flag=RGB_OUTPUT_FLAG,
    use_face_alignment=USE_FACE_ALIGNMENT,
    alignment_type=ALIGNMENT_TYPE,
    max_image_size=MAX_IMAGE_SIZE,
    face_detection_method=FACE_DETECTION_METHOD,
    min_face_size=MIN_FACE_SIZE,
)
_frame_selector = FrameSelector(selection_style="spread")
preprocessor = Wrapper(preprocessor=_image_preprocessor, frame_selector=_frame_selector)
#----------------------------------------------------------
# define extractor:
from bob.paper.makeup_aim.extractor import PCNN
from bob.bio.video.extractor import Wrapper
from bob.extension import rc
import os
_model_dir = rc.get("LIGHTCNN9_MODEL_DIRECTORY")
_model_name = "LightCNN_9Layers_checkpoint.pth.tar"
_model_file = os.path.join(_model_dir, _model_name)
if not os.path.exists(_model_file):
    print("Error: Could not find the LightCNN-9 model at [{}].\n"
          "Please follow the download instructions from README".format(_model_dir))
    exit(1)  # non-zero status: the model is required to build the extractor
extractor = Wrapper(PCNN(model_file=_model_file))
#----------------------------------------------------------
# define algorithm
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = "C_SVC"
KERNEL_TYPE = "RBF"
N_SAMPLES = 10000
MEAN_STD_NORM_FLAG = False # do not normalize
FRAME_LEVEL_SCORES_FLAG = True
REDUCED_TRAIN_DATA_FLAG = False
algorithm = SVM(
    machine_type=MACHINE_TYPE,
    kernel_type=KERNEL_TYPE,
    n_samples=N_SAMPLES,
    mean_std_norm_flag=MEAN_STD_NORM_FLAG,
    reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
    frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
)
"""
The SVM algorithm with RBF kernel is used to classify the data into *real* and *attack* classes.
The data is not mean-std normalized (``mean_std_norm_flag = False``).
"""
#----------------------------------------------------------
#!/usr/bin/env python
"""
YMU database for makeup detection. Default configuration for cross-validation
protocol. For 'grandtest' protocol, override the protocol and groups from
command line by adding the following options: --protocol 'grandtest' --groups
'train'.
"""
from bob.paper.makeup_aim.database import YMUDatabase
ORIGINAL_DIRECTORY = "[YMU_DIRECTORY]"
ORIGINAL_EXTENSION = ".hdf5"
ANNOTATION_DIRECTORY = "[YMU_ANNOTATION_DIRECTORY]"
PROTOCOL = "cv_p0"
database = YMUDatabase(
    protocol=PROTOCOL,
    original_directory=ORIGINAL_DIRECTORY,
    original_extension=ORIGINAL_EXTENSION,
    training_depends_on_protocol=True,
    annotation_directory=ANNOTATION_DIRECTORY,
)
groups = ["train", "dev"]
#----------------------------------------------------------
from .ymu import YMUDatabase
from .miw import MIWDatabase
from .aim import AIMDatabase
from .mifs import MIFSDatabase
# gets sphinx autodoc done right - don't remove it
def __appropriate__(*args):
    """Says object was actually declared here, and not in the import module.
    Fixing sphinx warnings of not being able to find classes, when path is
    shortened.

    Parameters:

      *args: An iterable of objects to modify

    Resolves `Sphinx referencing issues
    <https://github.com/sphinx-doc/sphinx/issues/3048>`_
    """
    for obj in args:
        obj.__module__ = __name__
__appropriate__(
    YMUDatabase,
    MIWDatabase,
    AIMDatabase,
    MIFSDatabase,
)
__all__ = [_ for _ in dir() if not _.startswith('_')]
"""
Implementation of dataset interface of AIM for PAD.
@author: Ketan Kotwal
"""
# Imports
from bob.pad.base.database import FileListPadDatabase, PadFile
from bob.pad.face.database.batl import BatlPadFile
from bob.db.batl.models import VideoFile
from bob.extension import rc
from bob.pad.face.preprocessor.FaceCropAlign import detect_face_landmarks_in_image
import json
import os
import bob.io.base
import pkg_resources
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
#----------------------------------------------------------
class File(VideoFile):
    """A low-level file wrapper over ``bob.db.batl``'s VideoFile that uses the
    file path as the unique sample identifier."""

    def __init__(self, path, client_id, session_id, presenter_id, type_id, pai_id):
        super(File, self).__init__(
            path=path,
            client_id=client_id,
            session_id=session_id,
            presenter_id=presenter_id,
            type_id=type_id,
            pai_id=pai_id)
        self.id = path
#----------------------------------------------------------
class AIMDatabase(FileListPadDatabase):
    """
    A high-level implementation of the Database class for the AIM PAD
    database. It is a wrapper over ``bob.pad.face.database.batl``.
    """

    def __init__(
        self,
        name="AIM",
        original_directory=None,
        original_extension=".h5",
        protocol="grandtest",
        annotation_directory=None,
        pad_file_class=BatlPadFile,
        low_level_pad_file_class=File,
        landmark_detect_method="mtcnn",
        **kwargs
    ):
        """
        **Parameters:**

        ``original_directory`` : str or None
            Location of the AIM/BATL parent directory.

        ``original_extension`` : str or None
            Extension of the original data.

        ``groups`` : str or [str]
            The groups for which the clients should be returned.
            Usually, groups are one or more elements of ['train', 'dev', 'eval'].
            Default: ['train', 'dev', 'eval'].

        ``protocol`` : str
            The protocol for which the clients should be retrieved.
            Default: 'grandtest'.
        """
        filelists_directory = pkg_resources.resource_filename(__name__, "/lists/aim/")
        self.filelists_directory = filelists_directory

        # init the parent class using super.
        super(AIMDatabase, self).__init__(
            filelists_directory=filelists_directory,
            name=name,
            protocol=protocol,
            original_directory=original_directory,
            original_extension=original_extension,
            pad_file_class=low_level_pad_file_class,
            annotation_directory=annotation_directory,
            **kwargs)

        self.low_level_pad_file_class = low_level_pad_file_class
        self.pad_file_class = pad_file_class
        self.annotation_directory = annotation_directory
        self.landmark_detect_method = landmark_detect_method
        self.protocol = protocol

        logger.info("Dataset: {}".format(self.name))
        logger.info("Original directory: {}; Annotation directory: {}".format(
            self.original_directory, self.annotation_directory))
#----------------------------------------------------------
# override the _make_pad function in bob.pad.base since we want the PAD