Commit 223e1f48 authored by Ketan Kotwal

Generated documentation

parent 761e113d
Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features
======================================================================================================
This package is part of the Bob_ toolkit and allows reproducing the experimental results of the following *submitted* paper::

   @article{MAKEUP-AIM-TBIOM-2019,
       title = {{Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features}},
       author = {K. Kotwal and Z. Mostaani and S. Marcel},
       journal = {IEEE Transactions on Biometrics, Behavior, and Identity Science},
       volume = {00},
       number = {0},
       pages = {000--000},
       month = {0},
       year = {2019},
   }

If you use this package and/or its results, please cite the paper.
Installation
------------
The installation instructions are based on conda_ and work on **Linux systems
only**. Install conda before continuing.
Once you have installed conda_, download the source code of this paper and
unpack it. Then, you can create a conda environment with the following
command::

   $ cd bob.paper.makeup_aim
   $ buildout

This will install all the required software to reproduce this paper.
Optionally, the package can be installed in the environment directory by::

   $ python setup.py install
Documentation
------------------
For further documentation on this package, please read the `Documentation <https://www.idiap.ch/software/bob/docs/bob/bob.paper.makeup_aim/stable/index.html>`_.
For a list of tutorials on this or other Bob_ packages, and for information on submitting issues, asking questions, and starting discussions, please visit its website.
Contact
-------
For questions about this software package, or to report issues, please contact our
development `mailing list`_.
.. Place your references here:
.. _Bob: https://www.idiap.ch/software/bob
.. _conda: https://conda.io
.. _installation: https://www.idiap.ch/software/bob/install
.. _mailing list: https://www.idiap.ch/software/bob/discuss
.. _bob package development: https://www.idiap.ch/software/bob/docs/bob/bob.extension/master/
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
import os
import sys
import glob
import pkg_resources
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.3'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.todo',
    'sphinx.ext.coverage',
    'sphinx.ext.ifconfig',
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'sphinx.ext.doctest',
    'sphinx.ext.graphviz',
    'sphinx.ext.intersphinx',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    #'matplotlib.sphinxext.plot_directive'
]
import sphinx
if sphinx.__version__ >= "1.4.1":
    extensions.append('sphinx.ext.imgmath')
    imgmath_image_format = 'svg'
else:
    extensions.append('sphinx.ext.pngmath')
# Be picky about warnings
nitpicky = True
# Ignores stuff we can't easily resolve on other project's sphinx manuals
nitpick_ignore = []
# Allows the user to override warnings from a separate file
if os.path.exists('nitpick-exceptions.txt'):
    for line in open('nitpick-exceptions.txt'):
        if line.strip() == "" or line.startswith("#"):
            continue
        dtype, target = line.split(None, 1)
        target = target.strip()
        try: # python 2.x
            target = unicode(target)
        except NameError:
            pass
        nitpick_ignore.append((dtype, target))
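# An illustrative (hypothetical) nitpick-exceptions.txt, matching the parser
# above: each non-comment line holds one "<domain:reftype> <target>" pair,
# e.g.:
#   py:class object
#   py:exc ValueError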
# Always includes todos
todo_include_todos = True
# Generates auto-summary automatically
autosummary_generate = True
# Create numbers on figures with captions
numfig = True
# If we are on OSX, the 'dvipng' path maybe different
dvipng_osx = '/opt/local/libexec/texlive/binaries/dvipng'
if os.path.exists(dvipng_osx): pngmath_dvipng = dvipng_osx
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'bob.paper.makeup_aim'
import time
copyright = u'%s, Idiap Research Institute' % time.strftime('%Y')
# Grab the setup entry
distribution = pkg_resources.require(project)[0]
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = distribution.version
# The full version, including alpha/beta/rc tags.
release = distribution.version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['links.rst']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# Some variables which are useful for generated material
project_variable = project.replace('.', '_')
short_description = u'bob.paper.makeup_aim'
owner = [u'Idiap Research Institute']
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = project_variable
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'img/logo.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = 'img/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = project_variable + u'_doc'
# -- Post configuration --------------------------------------------------------
# Included after all input documents
rst_epilog = """
.. |project| replace:: Bob
.. |version| replace:: %s
.. |current-year| date:: %%Y
""" % (version,)
# Default processing flags for sphinx
autoclass_content = 'class'
autodoc_member_order = 'bysource'
autodoc_default_flags = [
    'members',
    'undoc-members',
    'inherited-members',
    'show-inheritance',
]
# For inter-documentation mapping:
from bob.extension.utils import link_documentation, load_requirements
sphinx_requirements = "extra-intersphinx.txt"
if os.path.exists(sphinx_requirements):
    intersphinx_mapping = link_documentation(
        additional_packages=['python', 'numpy'] + \
        load_requirements(sphinx_requirements)
    )
else:
    intersphinx_mapping = link_documentation()
# We want to remove all private (i.e. _. or __.__) members
# that are not in the list of accepted functions
accepted_private_functions = ['__array__']
def member_function_test(app, what, name, obj, skip, options):
    # test if we have a private function
    if len(name) > 1 and name[0] == '_':
        # test if this private function should be allowed
        if name not in accepted_private_functions:
            # omit private functions that are not in the list of accepted private functions
            return skip
    else:
        # test if the method is documented
        if not hasattr(obj, '__doc__') or not obj.__doc__:
            return skip
    return False
def setup(app):
    app.connect('autodoc-skip-member', member_function_test)
.. vim: set fileencoding=utf-8 :
========================================================================================
Prerequisites and Setting up the Experiments
========================================================================================
Downloading the datasets
------------------------------
The experiments described in this paper are based on four makeup datasets.
The first three datasets, **YMU**, **MIW**, and **MIFS**, should be obtained from `<http://www.antitza.com/makeup-datasets.html>`_
by contacting their owners.
These datasets may be distributed in different data structures or file formats. For each dataset, we have provided a script
that converts it into a compatible format: a set of individual samples stored as *.hdf5* files. These scripts are located in
``bob.paper.makeup_aim.misc`` and need to be run from the corresponding folder.
For each script, the command should be specified as::

   $ python generate_<db-name>_db.py <original-data-directory> <output-directory>

The formatted dataset will be stored in the ``output-directory``.
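
For instance, assuming the YMU conversion script is named ``generate_ymu_db.py`` (the exact
script names can be found in ``bob.paper.makeup_aim.misc``) and using purely illustrative
paths, the call could look like::

   $ python generate_ymu_db.py /path/to/raw/YMU /path/to/formatted/ymu
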
The dataset **AIM** used in this study should be downloaded from Idiap's server.
For all four datasets, you need to set the path to the dataset in the configuration file. Bob supports a configuration file (``~/.bob_bio_databases.txt``) in your home directory to specify where the
databases are located. Edit this file manually to specify the path for each of the datasets AIM, YMU, MIW, and MIFS::

   $ cat ~/.bob_bio_databases.txt
   [<dataset-name-in-caps>_DIRECTORY] = <path-of-dataset-location>
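
Following the template above, a filled-in configuration file could look like this (the
paths are purely illustrative)::

   $ cat ~/.bob_bio_databases.txt
   [AIM_DIRECTORY] = /path/to/data/aim
   [YMU_DIRECTORY] = /path/to/data/ymu
   [MIW_DIRECTORY] = /path/to/data/miw
   [MIFS_DIRECTORY] = /path/to/data/mifs
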
The metadata used for AIM is part of the **WMCA** dataset, which should be downloaded from Idiap.
Downloading the face recognition CNN model
------------------------------------------------------------------------------
The pre-trained face recognition (FR) model ``LightCNN-9`` can be downloaded from here_, or from its own website.
The location of this model should be stored in the ``.bobrc`` file in your $HOME directory in a JSON (key: value) format as follows::

   {
       "LIGHTCNN9_MODEL_DIRECTORY": <path-of-the-directory>
   }

.. _here: https://github.com/AlfredXiangWu/LightCNN
Only the directory should be specified. Do *not* include the model name.
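
As a quick sanity check (not part of the package), the following snippet should print the
configured directory if the ``.bobrc`` entry is well-formed JSON::

   import json
   import os

   # read the Bob runtime configuration file from the home directory
   with open(os.path.expanduser("~/.bobrc")) as f:
       print(json.load(f)["LIGHTCNN9_MODEL_DIRECTORY"])
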
Setting up annotation directories
---------------------------------------
You should specify the annotation directory for each dataset in the configuration file (``~/.bob_bio_databases.txt``).
To generate annotations for the YMU, MIW, and MIFS datasets, use the script ``annotate_db.py`` provided in this package.
The images in the YMU and MIW datasets have already been cropped to the face region; hence, the face detector used in our work
is sometimes unable to localize the facial landmarks required for subsequent alignment. Therefore, it is a good idea
to *pad* the face image before detecting facial landmarks. You should provide this padding width as a parameter to the ``annotate_db.py`` script.
The padding is temporary: it does not alter the images stored in the dataset, and the annotations are adjusted to remove the effect of padding.
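
The idea can be sketched as follows (a minimal, hypothetical helper, not the actual
implementation of ``annotate_db.py``; ``detect_landmarks`` stands in for whichever facial
landmark detector is used)::

   import numpy as np

   def annotate_with_padding(image, detect_landmarks, pad):
       """Detect landmarks on a zero-padded copy of a grayscale image,
       then subtract the padding offset from the (y, x) coordinates."""
       padded = np.pad(image, pad_width=pad, mode='constant')
       landmarks = detect_landmarks(padded)  # e.g. {'reye': (y, x), ...}
       return {k: (y - pad, x - pad) for k, (y, x) in landmarks.items()}
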
The command has the following syntax::

   $ python bin/annotate_db.py <dataset-directory> <annotation-directory> <padding-width>

Here, the ``dataset-directory`` is the same directory where the generated datasets have been stored.
The annotation directory will contain the computed annotations. For each dataset, the path of this directory should be stored in
the configuration file (``~/.bob_bio_databases.txt``), similar to the previous step. The entries should have the following format::

   $ cat ~/.bob_bio_databases.txt
   [<dataset-name-in-caps>_ANNOTATION_DIRECTORY] = <path-of-annotation-directory>

For the experiments conducted in this work, the ``padding-width`` was set to 25, 25, and 0 for YMU, MIW, and MIFS, respectively.
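
For example, annotating the formatted YMU dataset with the padding used in this work
(paths purely illustrative)::

   $ python bin/annotate_db.py /path/to/formatted/ymu /path/to/annotations/ymu 25
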
You do not need to compute annotations for the AIM dataset; just set up its annotation directory in the configuration file.
The annotations will be computed and stored when the experiment is executed for the first time, and will be
re-used for subsequent runs.
.. vim: set fileencoding=utf-8 :
.. _bob.paper.makeup_aim:
==========================================================================================================================
Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features
==========================================================================================================================
This package is part of the Bob_ toolkit and allows reproducing the experimental results of the following paper::

   @article{MAKEUP-AIM-TBIOM-2019,
       title = {{Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features}},
       author = {K. Kotwal and Z. Mostaani and S. Marcel},
       journal = {Submitted to IEEE Transactions on Biometrics, Behavior, and Identity Science},
       volume = {00},
       number = {0},
       pages = {000--000},
       month = {0},
       year = {2019},
   }

.. _Bob: https://www.idiap.ch/software/bob
If you use this package and/or its results, please cite the paper [TBIOM2019]_.
User guide
---------------------
.. toctree::
   :maxdepth: 2

   expt_setup
   running_pad
   references
.. vim: set fileencoding=utf-8 :
==================
References
==================
.. [TBIOM2019] *K. Kotwal, Z. Mostaani, and S. Marcel*, **Detection of Age-Induced Makeup Attacks on Face Recognition Systems Using Multi-Layer Deep Features**,
   Submitted to: IEEE Transactions on Biometrics, Behavior, and Identity Science.
.. vim: set fileencoding=utf-8 :
========================================================================================
Executing Experiments for Detection of Makeup Attacks
========================================================================================
Generating set of commands
---------------------------------------
The complete setup comprises five experiments on makeup detection. To facilitate quick running and evaluation of the experiments, a simple script is provided to programmatically generate all commands.
You can specify the ``base directory`` where all the results should be stored, along with a few other parameters, in ``config.ini`` in the present folder.
Run the Python script ``generate_commands.py`` in the same folder.
As a result, a new text file ``commands.txt``, which consists of the necessary commands, will be generated in the same folder. The commands are divided into five sections:
1. **TBD** Vulnerability
2. AIM PAD
3. YMU Cross-validation
4. Cross-dataset (Training on YMU)
5. Cross-dataset (Training on MIFS)
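
For instance, generating and inspecting the command list (assuming ``config.ini`` has been
edited beforehand) could look like::

   $ python generate_commands.py
   $ head commands.txt
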
Running the experiments
---------------------------------------
Run the commands from ``commands.txt`` to execute the experiments, and also to evaluate and plot their results.
Note: whenever you are working with already preprocessed data, you may use the corresponding flags of the ``spoof.py`` command to avoid re-running the preprocessing steps.