First commit

*~
*.swp
*.pyc
bin
eggs
parts
.installed.cfg
.mr.developer.cfg
*.egg-info
src
develop-eggs
sphinx
dist
*.aux
*.log
.DS_Store
.. vim: set fileencoding=utf-8 :
.. Sat Aug 20 07:33:55 CEST 2016
.. image:: https://img.shields.io/badge/docs-stable-yellow.svg
   :target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.tifs2018_dsu/stable/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.svg
   :target: https://www.idiap.ch/software/bob/docs/bob/bob.paper.tifs2018_dsu/master/index.html
.. image:: https://gitlab.idiap.ch/bob/bob.paper.tifs2018_dsu/badges/master/build.svg
   :target: https://gitlab.idiap.ch/bob/bob.paper.tifs2018_dsu/commits/master
.. image:: https://gitlab.idiap.ch/bob/bob.paper.tifs2018_dsu/badges/master/coverage.svg
   :target: https://gitlab.idiap.ch/bob/bob.paper.tifs2018_dsu/commits/master
.. image:: https://img.shields.io/badge/gitlab-project-0000c0.svg
   :target: https://gitlab.idiap.ch/bob/bob.paper.tifs2018_dsu
.. image:: https://img.shields.io/pypi/v/bob.paper.tifs2018_dsu.svg
   :target: https://pypi.python.org/pypi/bob.paper.tifs2018_dsu
=======================================================================================================================
Python package to reproduce the experiments from the paper "Heterogeneous Face Recognition Using Domain Specific Units"
=======================================================================================================================
This package is part of the signal-processing and machine learning toolbox
Bob_.
It is also part of the ``bob.bio`` packages, which allow you to run comparable and reproducible biometric recognition experiments on publicly available databases.
Installation
------------
Follow our `installation`_ instructions. Then, using the Python interpreter
provided by the distribution, install this package and check that its command
line interface is available::

   $ conda install bob.paper.tifs2018_dsu
   $ bob bio htface --help
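
To further confirm that the package is importable from Python, here is a
minimal check (only the distribution name from this repository and the
standard ``pkg_resources`` API are assumed)::

   import pkg_resources
   # print the installed version of this package
   print(pkg_resources.get_distribution('bob.paper.tifs2018_dsu').version)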
Contact
-------
For questions or to report issues with this software package, contact our
development `mailing list`_.
.. Place your references here:
.. _bob: https://www.idiap.ch/software/bob
.. _installation: https://gitlab.idiap.ch/bob/bob/wikis/Installation
.. _mailing list: https://groups.google.com/forum/?fromgroups#!forum/bob-devel
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Bootstrap a buildout-based project
Simply run this script in a directory containing a buildout.cfg.
The script accepts buildout command-line options, so you can
use the -c option to specify an alternate configuration file.
"""
import os
import shutil
import sys
import tempfile
from optparse import OptionParser
__version__ = '2015-07-01'
# See zc.buildout's changelog if this version is up to date.
tmpeggs = tempfile.mkdtemp(prefix='bootstrap-')
usage = '''\
[DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options]
Bootstraps a buildout-based project.
Simply run this script in a directory containing a buildout.cfg, using the
Python that you want bin/buildout to use.
Note that by using --find-links to point to local resources, you can keep
this script from going over the network.
'''
parser = OptionParser(usage=usage)
parser.add_option("--version",
action="store_true", default=False,
help=("Return bootstrap.py version."))
parser.add_option("-t", "--accept-buildout-test-releases",
dest='accept_buildout_test_releases',
action="store_true", default=False,
help=("Normally, if you do not specify a --version, the "
"bootstrap script and buildout gets the newest "
"*final* versions of zc.buildout and its recipes and "
"extensions for you. If you use this flag, "
"bootstrap and buildout will get the newest releases "
"even if they are alphas or betas."))
parser.add_option("-c", "--config-file",
help=("Specify the path to the buildout configuration "
"file to be used."))
parser.add_option("-f", "--find-links",
help=("Specify a URL to search for buildout releases"))
parser.add_option("--allow-site-packages",
action="store_true", default=False,
help=("Let bootstrap.py use existing site packages"))
parser.add_option("--buildout-version",
help="Use a specific zc.buildout version")
parser.add_option("--setuptools-version",
help="Use a specific setuptools version")
parser.add_option("--setuptools-to-dir",
help=("Allow for re-use of existing directory of "
"setuptools versions"))
options, args = parser.parse_args()
if options.version:
print("bootstrap.py version %s" % __version__)
sys.exit(0)
######################################################################
# load/install setuptools
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
ez = {}
if os.path.exists('ez_setup.py'):
exec(open('ez_setup.py').read(), ez)
else:
exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez)
if not options.allow_site_packages:
# ez_setup imports site, which adds site packages
# this will remove them from the path to ensure that incompatible versions
# of setuptools are not in the path
import site
# inside a virtualenv, there is no 'getsitepackages'.
# We can't remove these reliably
if hasattr(site, 'getsitepackages'):
for sitepackage_path in site.getsitepackages():
# Strip all site-packages directories from sys.path that
# are not sys.prefix; this is because on Windows
# sys.prefix is a site-package directory.
if sitepackage_path != sys.prefix:
sys.path[:] = [x for x in sys.path
if sitepackage_path not in x]
setup_args = dict(to_dir=tmpeggs, download_delay=0)
if options.setuptools_version is not None:
setup_args['version'] = options.setuptools_version
if options.setuptools_to_dir is not None:
setup_args['to_dir'] = options.setuptools_to_dir
ez['use_setuptools'](**setup_args)
import setuptools
import pkg_resources
# This does not (always?) update the default working set. We will
# do it.
for path in sys.path:
if path not in pkg_resources.working_set.entries:
pkg_resources.working_set.add_entry(path)
######################################################################
# Install buildout
ws = pkg_resources.working_set
setuptools_path = ws.find(
pkg_resources.Requirement.parse('setuptools')).location
# Fix sys.path here as easy_install.pth added before PYTHONPATH
cmd = [sys.executable, '-c',
'import sys; sys.path[0:0] = [%r]; ' % setuptools_path +
'from setuptools.command.easy_install import main; main()',
'-mZqNxd', tmpeggs]
find_links = os.environ.get(
'bootstrap-testing-find-links',
options.find_links or
('http://downloads.buildout.org/'
if options.accept_buildout_test_releases else None)
)
if find_links:
cmd.extend(['-f', find_links])
requirement = 'zc.buildout'
version = options.buildout_version
if version is None and not options.accept_buildout_test_releases:
# Figure out the most recent final version of zc.buildout.
import setuptools.package_index
_final_parts = '*final-', '*final'
def _final_version(parsed_version):
try:
return not parsed_version.is_prerelease
except AttributeError:
# Older setuptools
for part in parsed_version:
if (part[:1] == '*') and (part not in _final_parts):
return False
return True
index = setuptools.package_index.PackageIndex(
search_path=[setuptools_path])
if find_links:
index.add_find_links((find_links,))
req = pkg_resources.Requirement.parse(requirement)
if index.obtain(req) is not None:
best = []
bestv = None
for dist in index[req.project_name]:
distv = dist.parsed_version
if _final_version(distv):
if bestv is None or distv > bestv:
best = [dist]
bestv = distv
elif distv == bestv:
best.append(dist)
if best:
best.sort()
version = best[-1].version
if version:
requirement = '=='.join((requirement, version))
cmd.append(requirement)
import subprocess
if subprocess.call(cmd) != 0:
raise Exception(
"Failed to execute command:\n%s" % repr(cmd)[1:-1])
######################################################################
# Import and run buildout
ws.add_entry(tmpeggs)
ws.require(requirement)
import zc.buildout.buildout
if not [a for a in args if '=' not in a]:
args.append('bootstrap')
# if -c was provided, we push it back into args for buildout's main function
if options.config_file is not None:
args[0:0] = ['-c', options.config_file]
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs)
; vim: set fileencoding=utf-8 :
; Sat Aug 20 07:33:55 CEST 2016
[buildout]
parts = scripts
eggs = bob.paper.tifs2018_dsu
bob.bio.face_ongoing
bob.bio.htface
extensions = bob.buildout
mr.developer
auto-checkout = *
develop = src/bob.bio.face_ongoing
src/bob.bio.htface
.
newest = false
verbose = true
[sources]
bob.bio.face_ongoing = git git@gitlab.idiap.ch:bob/bob.bio.face_ongoing
bob.bio.htface = git git@gitlab.idiap.ch:bob/bob.bio.htface
[scripts]
recipe = bob.buildout:scripts
dependent-scripts = true
{% set name = 'bob.paper.tifs2018_dsu' %}
{% set project_dir = environ.get('RECIPE_DIR') + '/..' %}
package:
name: {{ name }}
version: {{ environ.get('BOB_PACKAGE_VERSION', '0.0.1') }}
build:
skip: true # [not linux]
entry_points:
number: {{ environ.get('BOB_BUILD_NUMBER', 0) }}
run_exports:
- {{ pin_subpackage(name) }}
script:
- cd {{ project_dir }}
{% if environ.get('BUILD_EGG') %}
- python setup.py sdist --formats=zip
{% endif %}
- python setup.py install --single-version-externally-managed --record record.txt
requirements:
host:
- python {{ python }}
- setuptools {{ setuptools }}
- bob.bio.htface
- bob.bio.face_ongoing
- bob.db.cuhk_cufs
- bob.db.cuhk_cufsf
- bob.db.pola_thermal
- bob.db.nivl
- bob.db.cbsr_nir_vis_2
run:
- python
- setuptools
- matplotlib
- six
- wheel
- tensorflow
- numpy
test:
imports:
- {{ name }}
commands:
- bob bio htface --help
- bob bio face_ongoing --help
- nosetests --with-coverage --cover-package=bob.bio.htface -sv bob.bio.htface
- sphinx-build -aEW {{ project_dir }}/doc {{ project_dir }}/sphinx
- sphinx-build -aEb doctest {{ project_dir }}/doc sphinx
- conda inspect linkages -p $PREFIX {{ name }} # [not win]
- conda inspect objects -p $PREFIX {{ name }} # [osx]
requires:
- bob-devel {{ bob_devel }}.*
- nose
- coverage
- sphinx
- sphinx_rtd_theme
- gridtk
about:
home: https://www.idiap.ch/software/bob/
license: BSD License
summary: Tools to reproduce the experiments of the paper "Heterogeneous Face Recognition Using Domain Specific Units"
license_family: BSD
.. vim: set fileencoding=utf-8 :
.. Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
====
FAQ
====
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
import os
import sys
import glob
import pkg_resources
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.3'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.graphviz',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx.ext.mathjax',
'matplotlib.sphinxext.plot_directive'
]
# Be picky about warnings
nitpicky = True
# Ignore entries we cannot easily resolve in other projects' sphinx manuals
nitpick_ignore = []
# Allows the user to override warnings from a separate file
if os.path.exists('nitpick-exceptions.txt'):
for line in open('nitpick-exceptions.txt'):
if line.strip() == "" or line.startswith("#"):
continue
dtype, target = line.split(None, 1)
target = target.strip()
try: # python 2.x
target = unicode(target)
except NameError:
pass
nitpick_ignore.append((dtype, target))
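# For reference, a hypothetical nitpick-exceptions.txt holds one
# "<domain-role> <target>" pair per line; blank lines and lines starting
# with '#' are skipped by the loop above, e.g.:
#
#   py:class sphinx.application.Sphinx
#   py:exc ValueError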
# Always include todos
todo_include_todos = True
# Generate autosummary pages automatically
autosummary_generate = True
# Number figures that have captions
numfig = True
# If we are on OSX, the 'dvipng' path may be different
dvipng_osx = '/opt/local/libexec/texlive/binaries/dvipng'
if os.path.exists(dvipng_osx): pngmath_dvipng = dvipng_osx
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'bob.paper.tifs2018_dsu'
import time
copyright = u'%s, Idiap Research Institute' % time.strftime('%Y')
# Grab the setup entry
distribution = pkg_resources.require(project)[0]
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = distribution.version
# The full version, including alpha/beta/rc tags.
release = distribution.version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['links.rst']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# Some variables which are useful for generated material
project_variable = project.replace('.', '_')
short_description = u'Learning how to recognize faces in heterogeneous environments (Ph.D. thesis)'
owner = [u'Idiap Research Institute']
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = project_variable
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'img/logo.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = 'img/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = project_variable + u'_doc'
# -- Post configuration --------------------------------------------------------
# Included after all input documents
rst_epilog = """
.. |project| replace:: Bob
.. |version| replace:: %s
.. |current-year| date:: %%Y
""" % (version,)
# Default processing flags for sphinx
autoclass_content = 'class'
autodoc_member_order = 'bysource'
autodoc_default_flags = [
'members',
'undoc-members',
'show-inheritance',
]
# For inter-documentation mapping:
from bob.extension.utils import link_documentation, load_requirements
sphinx_requirements = "extra-intersphinx.txt"
if os.path.exists(sphinx_requirements):
intersphinx_mapping = link_documentation(
additional_packages=['python','numpy'] + \
load_requirements(sphinx_requirements)
)
else:
intersphinx_mapping = link_documentation()
# We want to remove all private members (i.e., names like _foo or __foo__)
# that are not in the list of accepted functions
accepted_private_functions = ['__array__']
def member_function_test(app, what, name, obj, skip, options):
# test if we have a private function
if len(name) > 1 and name[0] == '_':
# test if this private function should be allowed
if name not in accepted_private_functions:
# omit private functions that are not in the list of accepted private functions
return skip
else:
# test if the method is documented
if not hasattr(obj, '__doc__') or not obj.__doc__:
return skip
return False
def setup(app):
app.connect('autodoc-skip-member', member_function_test)
.. vim: set fileencoding=utf-8 :
.. Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
=============================
Heterogeneous Face Databases
=============================
CUHK Face Sketch Database (CUFS)
--------------------------------
.. _db-CUHK-CUFS:
CUHK Face Sketch database (`CUFS <http://mmlab.ie.cuhk.edu.hk/archive/facesketch.html>`_) is composed of viewed sketches.
It includes 188 faces from the Chinese University of Hong Kong (CUHK) student database, 123 faces from the `AR database <http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html>`_ and 295 faces from the `XM2VTS database <http://www.ee.surrey.ac.uk/CVSSP/xm2vtsdb/>`_.
There are 606 face images in total.
For each face image, there is a sketch drawn by an artist, based on a photo taken in a frontal pose, under normal lighting conditions and with a neutral expression.
There is no evaluation protocol established for this database.
Each work that uses this database implements a different way to report the results.
In [Wang2009]_ the 606 identities were split into three sets (153 identities for training, 153 for development and 300 for evaluation).
The rank-1 identification rate on the evaluation set is used as the performance measure.
Unfortunately, the file names for each set were not distributed.
In [Klare2013]_ the authors created a protocol based on a 5-fold cross-validation, splitting the 606 identities into two sets with 404 identities for training and 202 for testing.
The average rank-1 identification rate is used as the performance measure.
In [Bhatt2012]_ the authors evaluated the error rates using only the pairs (VIS -- sketch) corresponding to the CUHK Student Database and the AR Face Database, while in [Bhatt2010]_ they used only the pairs corresponding to the CUHK Student Database.
In [Yi2015]_ the authors created a protocol based on a 10-fold cross-validation, splitting the 606 identities into two sets with 306 identities for training and 300 for testing.
The average rank-1 identification rate on the test set is also used to report the results.
Finally, in [Roy2016]_, since the method does not require a background model, all 606 identities were used for evaluation and also to tune the hyper-parameters, which is not good practice in machine learning.
Judging only by what is written in the paper (no source code is available), we can conclude that this evaluation is biased.
For comparability, we follow the same strategy as [Klare2013]_: a 5-fold cross-validation splitting the 606 identities into two sets, with 404 identities for training and 202 for testing, using the average rank-1 identification rate on the evaluation set as the metric.
For reproducibility purposes, this evaluation protocol is published as a Python package (`bob.db.cuhk_cufs <https://pypi.python.org/pypi/bob.db.cuhk_cufs>`_).
In this way, future researchers will be able to reproduce exactly the same tests with the same identities in each fold (which is not possible today).
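
As an illustration, the protocol can be consulted directly from Python. The
sketch below follows the usual ``bob.db`` query convention, but the exact
protocol and group names are assumptions and should be checked against the
package documentation::

   import bob.db.cuhk_cufs

   db = bob.db.cuhk_cufs.Database()
   # 'search_split1_p2s' and 'world' are assumed names for the first
   # fold's training set; check the package documentation for the
   # protocol names it actually ships.
   for f in db.objects(protocol='search_split1_p2s', groups='world'):
       print(f.path)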
CASIA NIR-VIS 2.0 face database
-------------------------------
The CASIA NIR-VIS 2.0 database [Li2013]_ offers pairs of mugshot images and their corresponding NIR photos.
The images of this database were collected in four recording sessions: 2007 spring, 2009 summer, 2009 fall and 2010 summer, in which the first session is identical to the CASIA HFB database [Li2009]_.
It consists of 725 subjects in total.
There are between 1 and 22 VIS and between 5 and 50 NIR face images per subject.
The eye positions are also distributed with the images.
This database has a well-defined protocol and it is publicly available for `download <http://www.cbsr.ia.ac.cn/english/NIR-VIS-2.0-Database.html>`_.
We also organized this protocol in the same way as for the CUFS database; it is likewise freely available for download (`bob.db.cbsr_nir_vis_2 <https://pypi.python.org/pypi/bob.db.cbsr_nir_vis_2>`_).
The average rank-1 identification rate on the evaluation set (called view 2) is used as the evaluation metric.
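
For clarity, rank-1 identification counts a probe as correctly identified when
its highest-scoring gallery template belongs to the same identity. A minimal
numpy sketch of this metric (a hypothetical helper, not part of this package)::

   import numpy

   def rank1_rate(scores, probe_ids, gallery_ids):
       """Fraction of probes whose best-scoring gallery entry matches.

       ``scores`` is an (n_probes, n_gallery) similarity matrix; the id
       arrays hold the true identity of each probe row / gallery column.
       """
       best = numpy.asarray(gallery_ids)[numpy.argmax(scores, axis=1)]
       return float(numpy.mean(best == numpy.asarray(probe_ids)))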
Pola Thermal
------------
.. _db-polathermal:
Collected by the U.S. Army Research Laboratory (ARL), the Polarimetric Thermal Face Database (the first of its kind) contains polarimetric LWIR (longwave infrared) imagery and simultaneously acquired visible spectrum imagery from a set of 60 distinct subjects.
For the data collection, each subject was asked to sit in a chair and remove his or her glasses.
A floor lamp with a compact fluorescent light bulb rated at 1550 lumens was placed 2m in front of the chair to illuminate the scene for
the visible cameras, and a uniform background was placed approximately 0.1m behind the chair.
Data was collected at three distances: Range 1 (2.5m), Range 2 (5m), and Range 3 (7.5m).
At each range, a baseline condition was first acquired, in which the subject was asked to maintain a neutral expression while looking at the polarimetric thermal imager.
A second condition, referred to as the "expressions" condition, was collected, in which the subject was asked to count out loud numerically from one upwards.
Counting orally results in a continuous range of motions of the mouth, and to some extent, the eyes, which can be recorded to produce variations in the facial imagery.
For each acquisition, 500 frames are recorded with the polarimeter (duration of 8.33 s at 60 fps), while 300 frames are recorded with each visible spectrum camera (duration of 10s at 30 fps).
For reproducibility purposes, the evaluation protocols of this database are freely available in a Python package (`bob.db.pola_thermal <https://pypi.python.org/pypi/bob.db.pola_thermal>`_).
Near-Infrared and Visible-Light (NIVL) Dataset
----------------------------------------------
.. _db-nivl:
Collected by the University of Notre Dame, the NIVL dataset contains VIS and NIR face images from the same subjects.
The capturing process was carried out over the course of two semesters (fall 2011 and spring 2012).
The VIS images were collected using a Nikon D90 camera.
The Nikon D90 uses a :math:`23.6 \times 15.8` mm CMOS sensor and the resulting images have a :math:`4288 \times 2848` resolution.
The images were acquired using automatic exposure and automatic focus settings.
All images were acquired under normal indoor lighting at about a 5-foot standoff with frontal pose and a neutral facial expression.