Commit 9233a5d7 authored by Manuel Günther's avatar Manuel Günther

Added lots of documentation

parent c0b568b6
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
:target: http://pythonhosted.org/bob.bio.video/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
:target: https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.video/master/index.html
.. image:: http://travis-ci.org/bioidiap/bob.bio.video.svg?branch=master
:target: https://travis-ci.org/bioidiap/bob.bio.video?branch=master
.. image:: https://coveralls.io/repos/bioidiap/bob.bio.video/badge.png?branch=master
:target: https://coveralls.io/r/bioidiap/bob.bio.video?branch=master
.. image:: https://img.shields.io/badge/github-master-0000c0.png
:target: https://github.com/bioidiap/bob.bio.video/tree/master
.. image:: http://img.shields.io/pypi/v/bob.bio.video.png
:target: https://pypi.python.org/pypi/bob.bio.video
.. image:: http://img.shields.io/pypi/dm/bob.bio.video.png
:target: https://pypi.python.org/pypi/bob.bio.video
=======================================
Run video face recognition algorithms
=======================================
This package is part of the ``bob.bio`` packages, which allow you to run comparable and reproducible biometric recognition experiments on publicly available databases.
.. note::
This package contains functionality to run video face recognition experiments.
It is an extension to the `bob.bio.base <http://pypi.python.org/pypi/bob.bio.base>`_ package, which provides the basic scripts.
In this package, wrapper classes are provided, which allow you to run traditional image-based face recognition algorithms on video data.
For further information about ``bob.bio``, please read `its Documentation <http://pythonhosted.org/bob.bio.base/index.html>`_.
Installation
------------
To install this package -- alone or together with other `Packages of Bob <https://github.com/idiap/bob/wiki/Packages>`_ -- please read the `Installation Instructions <https://github.com/idiap/bob/wiki/Installation>`_.
For Bob_ to work properly, some dependent packages need to be installed.
Please make sure that you have read the `Dependencies <https://github.com/idiap/bob/wiki/Dependencies>`_ for your operating system.
Documentation
-------------
For further documentation on this package, please read the `Stable Version <http://pythonhosted.org/bob.bio.video/index.html>`_ or the `Latest Version <https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.video/master/index.html>`_ of the documentation.
For a list of tutorials on this or the other packages of Bob_, or information on submitting issues, asking questions and starting discussions, please visit its website.
.. _bob: https://www.idiap.ch/software/bob
@@ -3,8 +3,30 @@ import bob.io.base
from .. import utils
class Wrapper (bob.bio.base.algorithm.Algorithm):
"""Wrapper class to run face recognition algorithms on video data.
This class provides a generic interface for all face recognition algorithms to use several frames of a video.
The ``algorithm`` can either be provided as a registered resource, or as an instance of an algorithm class.
In previous stages, features were already extracted from selected frames of the video.
This algorithm now uses these features to perform face recognition, i.e., by enrolling a model from several frames (possibly of several videos), and fusing scores from several model frames and several probe frames.
Since the functionality to handle several images for enrollment and probing is already implemented in the wrapped class, here we only care about providing the right data at the right time.
**Parameters:**
algorithm : str or :py:class:`bob.bio.base.algorithm.Algorithm` instance
The algorithm to be used.
frame_selector : :py:class:`bob.bio.video.FrameSelector`
A frame selector class to define which frames of the extracted features of the frame container to use.
By default, all features are selected.
compressed_io : bool
Use compression to write the projected features to HDF5 files.
This is experimental and might cause trouble.
Use this flag with care.
"""
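The delegation idea described above can be sketched with plain Python. All names here are illustrative stand-ins, not the actual ``bob.bio`` API; a real wrapper also handles frame selection, projection and score fusion:

```python
class DummyAlgorithm:
    """Stand-in for an image-based algorithm (illustrative only)."""
    def enroll(self, features):
        # enroll a "model" as the element-wise average of all feature vectors
        return [sum(v) / len(v) for v in zip(*features)]


class MinimalWrapper:
    """Sketch of the video wrapper idea: collect the features of all
    frames of all videos, then delegate to the wrapped algorithm."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def enroll(self, videos):
        # each video is a list of per-frame feature vectors
        features = [frame for video in videos for frame in video]
        return self.algorithm.enroll(features)


wrapper = MinimalWrapper(DummyAlgorithm())
# two videos of the same client: two frames and one frame, respectively
model = wrapper.enroll([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0]]])
```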
def __init__(self,
algorithm,
frame_selector = utils.FrameSelector(selection_style='all'),
@@ -38,27 +60,65 @@ class Wrapper (bob.bio.base.algorithm.Algorithm):
def _check_feature(self, frames):
"""Checks if the given feature is in the desired format."""
assert isinstance(frames, utils.FrameContainer)
# PROJECTION
def train_projector(self, training_frames, projector_file):
"""Trains the projector with the features of the given frames.
.. note::
This function is not called when the given ``algorithm`` does not require projector training.
This function will train the projector using all data from the selected frames of the training data.
The ``training_frames`` must be aligned by client if the given ``algorithm`` requires that.
**Parameters:**
training_frames : [:py:class:`bob.bio.video.FrameContainer`] or [[:py:class:`bob.bio.video.FrameContainer`]]
The set of training frames, which will be used to perform projector training of the ``algorithm``.
projector_file : str
The name of the projector file that should be written.
"""
if self.split_training_features_by_client:
[self._check_feature(frames) for client_frames in training_frames for frames in client_frames]
training_features = [[frame[1] for frames in client_frames for frame in self.frame_selector(frames)] for client_frames in training_frames]
else:
[self._check_feature(frames) for frames in training_frames]
training_features = [frame[1] for frames in training_frames for frame in self.frame_selector(frames)]
self.algorithm.train_projector(training_features, projector_file)
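The nested comprehensions above flatten the per-frame features in two different ways depending on ``split_training_features_by_client``. A small sketch with plain Python lists (a "frame container" is modelled here as a list of ``(index, feature, quality)`` tuples, and the frame selector is omitted, i.e. all frames are used):

```python
client_a = [[(0, 'fA0', None), (1, 'fA1', None)]]    # one video, two frames
client_b = [[(0, 'fB0', None)], [(0, 'fB1', None)]]  # two videos, one frame each
training_frames = [client_a, client_b]

# split_training_features_by_client=True: one feature list per client
per_client = [[frame[1] for frames in client_frames for frame in frames]
              for client_frames in training_frames]
# -> [['fA0', 'fA1'], ['fB0', 'fB1']]

# split_training_features_by_client=False: one flat feature list
flat = [frame[1] for frames in client_a + client_b for frame in frames]
# -> ['fA0', 'fA1', 'fB0', 'fB1']
```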
def load_projector(self, projector_file):
"""Loads the trained projector from file.
This function calls the wrapped class's ``load_projector`` function.
**Parameters:**
projector_file : str
The name of the projector that should be loaded.
"""
return self.algorithm.load_projector(projector_file)
def project(self, frames):
"""project(frames) -> projected
Projects the features of the selected frames and returns them in a frame container.
This function is used to project features using the desired ``algorithm`` for all frames that are selected by the ``frame_selector`` specified in the constructor of this class.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The frame container containing extracted feature frames.
**Returns:**
projected : :py:class:`bob.bio.video.FrameContainer`
A frame container containing projected features.
"""
self._check_feature(frames)
fc = utils.FrameContainer()
for index, frame, quality in self.frame_selector(frames):
@@ -70,50 +130,149 @@ class Wrapper (bob.bio.base.algorithm.Algorithm):
return fc
def read_feature(self, projected_file):
"""read_feature(projected_file) -> frames
Reads the projected data from file and returns them in a frame container.
The algorithm's ``read_feature`` function is used to read the data for each frame.
**Parameters:**
projected_file : str
The name of the projected data file.
**Returns:**
frames : :py:class:`bob.bio.video.FrameContainer`
The read frames, stored in a frame container.
"""
if self.compressed_io:
return utils.load_compressed(projected_file, self.algorithm.read_feature)
else:
return utils.FrameContainer(bob.io.base.HDF5File(projected_file), self.algorithm.read_feature)
def write_feature(self, frames, projected_file):
"""Writes the projected features to file.
The algorithm's ``write_feature`` function is used to write the features for each frame.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The projected features for the selected frames, as returned by the :py:meth:`project` function.
projected_file : str
The file name to write the projected feature into.
"""
self._check_feature(frames)
if self.compressed_io:
return utils.save_compressed(frames, projected_file, self.algorithm.write_feature)
else:
frames.save(bob.io.base.HDF5File(projected_file, 'w'), self.algorithm.write_feature)
# ENROLLMENT
def train_enroller(self, training_frames, enroller_file):
"""Trains the enroller with the features of the given frames.
.. note::
This function is not called when the given ``algorithm`` does not require enroller training.
This function will train the enroller using all data from the selected frames of the training data.
**Parameters:**
training_frames : [[:py:class:`bob.bio.video.FrameContainer`]]
The set of training frames aligned by client, which will be used to perform enroller training of the ``algorithm``.
enroller_file : str
The name of the enroller that should be written.
"""
[self._check_feature(frames) for client_frames in training_frames for frames in client_frames]
features = [[frame[1] for frames in client_frames for frame in self.enroll_frame_selector(frames)] for client_frames in training_frames]
self.algorithm.train_enroller(features, enroller_file)
def load_enroller(self, enroller_file):
"""Loads the trained enroller from file.
This function calls the wrapped class's ``load_enroller`` function.
**Parameters:**
enroller_file : str
The name of the enroller that should be loaded.
"""
self.algorithm.load_enroller(enroller_file)
def enroll(self, enroll_frames):
"""enroll(enroll_frames) -> model
Enrolls the model from features of all selected frames of all enrollment videos for the current client.
This function collects all desired frames from all enrollment videos and enrolls a model from them, using the algorithm's ``enroll`` function.
**Parameters:**
enroll_frames : [:py:class:`bob.bio.video.FrameContainer`]
Extracted or projected features from one or several videos of the same client.
**Returns:**
model : object
The model as created by the algorithm's ``enroll`` function.
"""
[self._check_feature(frames) for frames in enroll_frames]
features = [frame[1] for frames in enroll_frames for frame in self.enroll_frame_selector(frames)]
return self.algorithm.enroll(features)
def write_model(self, model, filename):
"""Writes the model using the algorithm's ``write_model`` function.
**Parameters:**
model : object
The model returned by the :py:meth:`enroll` function.
filename : str
The file name of the model to write.
"""
self.algorithm.write_model(model, filename)
# SCORING
def read_model(self, filename):
"""Reads the model using the algorithm's ``read_model`` function.
**Parameters:**
filename : str
The file name to read the model from.
**Returns:**
model : object
The model read from file.
"""
return self.algorithm.read_model(filename)
def read_probe(self, filename):
"""read_probe(filename) -> probe
Reads the probe using the algorithm's ``read_probe`` function to read the probe features of the single frames.
**Parameters:**
filename : str
The name of the file containing the probe frame container.
**Returns:**
probe : :py:class:`bob.bio.video.FrameContainer`
The frames of the probe file.
"""
# TODO: check if it is really necessary that we read other types than FrameContainers here...
try:
if self.compressed_io:
@@ -124,17 +283,56 @@ class Wrapper (bob.bio.base.algorithm.Algorithm):
return self.algorithm.read_probe(filename)
def score(self, model, probe):
"""score(model, probe) -> score
Computes the score between the given model and the probe.
As the probe is a frame container, several scores are computed, one for each frame of the probe.
This is achieved by using the algorithm's ``score_for_multiple_probes`` function.
The final result is, hence, a fusion of several scores.
**Parameters:**
model : object
The model in the type desired by the wrapped algorithm.
probe : :py:class:`bob.bio.video.FrameContainer`
The selected frames from the probe object, which contain the probe features as desired by the wrapped algorithm.
**Returns:**
score : float
A fused score between the given model and all probe frames.
"""
features = [frame[1] for frame in self.frame_selector(probe)]
return self.algorithm.score_for_multiple_probes(model, features)
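The per-frame fusion that ``score_for_multiple_probes`` performs typically reduces to averaging the individual frame scores. A minimal sketch of that idea (not the actual ``bob.bio.base`` implementation; the per-frame score function is a toy stand-in):

```python
def frame_score(model, feature):
    # toy per-frame score: negative absolute difference (illustrative only)
    return -abs(model - feature)

def fuse_scores(model, features):
    # one score per probe frame, fused into a single value (mean fusion)
    scores = [frame_score(model, f) for f in features]
    return sum(scores) / len(scores)

# one "model" scored against three probe frames
fused = fuse_scores(1.0, [0.5, 1.0, 1.5])
```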
def score_for_multiple_probes(self, model, probes):
"""score_for_multiple_probes(model, probes) -> score
Computes the score between the given model and the given list of probes.
As each probe is a frame container, several scores are computed, one for each frame of each probe.
This is achieved by using the algorithm's ``score_for_multiple_probes`` function.
The final result is, hence, a fusion of several scores.
**Parameters:**
model : object
The model in the type desired by the wrapped algorithm.
probes : [:py:class:`bob.bio.video.FrameContainer`]
The selected frames from the probe objects, which contain the probe features as desired by the wrapped algorithm.
**Returns:**
score : float
A fused score between the given model and all probe frames.
"""
[self._check_feature(frames) for frames in probes]
features = [frame[1] for probe in probes for frame in probe]
return self.algorithm.score_for_multiple_probes(model, features)
# re-define some functions to avoid them being falsely documented
def score_for_multiple_models(*args,**kwargs): raise NotImplementedError("This function is not implemented and should not be called.")
from .Wrapper import Wrapper
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
@@ -4,7 +4,28 @@ import os
from .. import utils
class Wrapper (bob.bio.base.extractor.Extractor):
"""Wrapper class to run feature extraction algorithms on frame containers.
Features are extracted for all frames in the frame container using the provided ``extractor``.
The ``extractor`` can either be provided as a registered resource, i.e., one of :ref:`bob.bio.face.extractors`, or an instance of an extractor class.
The ``frame_selector`` can be chosen to select some frames from the frame container.
By default, all frames from the previous preprocessing step are kept, but fewer frames might be selected in this stage.
**Parameters:**
extractor : str or :py:class:`bob.bio.base.extractor.Extractor` instance
The extractor to be used to extract features from the frames.
frame_selector : :py:class:`bob.bio.video.FrameSelector`
A frame selector class to define which frames of the preprocessed frame container to use.
compressed_io : bool
Use compression to write the resulting features to HDF5 files.
This is experimental and might cause trouble.
Use this flag with care.
"""
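The per-frame extraction loop of the wrapper can be sketched in a few lines of plain Python (names and the frame-container model are illustrative stand-ins, not the actual ``bob.bio.video`` API):

```python
def extract_all_frames(extractor, frames):
    """Apply the wrapped extractor to every frame, keeping each frame's
    index and quality. A frame container is modelled here as a list of
    (index, data, quality) tuples (illustrative only)."""
    return [(index, extractor(data), quality)
            for index, data, quality in frames]

# toy "extractor": reduce each frame's data to a single number
frames = [(0, [1, 2], None), (1, [3, 4], None)]
features = extract_all_frames(sum, frames)
```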
def __init__(self,
extractor,
@@ -32,11 +53,27 @@ class Wrapper (bob.bio.base.extractor.Extractor):
)
def _check_feature(self, frames):
"""Checks if the given feature is in the desired format."""
assert isinstance(frames, utils.FrameContainer)
def __call__(self, frames):
"""__call__(frames) -> features
Extracts the frames from the video and returns a frame container.
This function is used to extract features using the desired ``extractor`` for all frames that are selected by the ``frame_selector`` specified in the constructor of this class.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The frame container containing preprocessed image frames.
**Returns:**
features : :py:class:`bob.bio.video.FrameContainer`
A frame container containing extracted features.
"""
self._check_feature(frames)
# go through the frames and extract the features
fc = utils.FrameContainer()
@@ -49,12 +86,40 @@ class Wrapper (bob.bio.base.extractor.Extractor):
def read_feature(self, filename):
"""read_feature(filename) -> frames
Reads the extracted data from file and returns them in a frame container.
The extractor's ``read_feature`` function is used to read the data for each frame.
**Parameters:**
filename : str
The name of the extracted data file.
**Returns:**
frames : :py:class:`bob.bio.video.FrameContainer`
The read frames, stored in a frame container.
"""
if self.compressed_io:
return utils.load_compressed(filename, self.extractor.read_feature)
else:
return utils.FrameContainer(bob.io.base.HDF5File(filename), self.extractor.read_feature)
def write_feature(self, frames, filename):
"""Writes the extracted features to file.
The extractor's ``write_feature`` function is used to write the features for each frame.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The extracted features for the selected frames, as returned by the :py:meth:`__call__` function.
filename : str
The file name to write the extracted feature into.
"""
self._check_feature(frames)
if self.compressed_io:
return utils.save_compressed(frames, filename, self.extractor.write_feature)
@@ -62,15 +127,38 @@ class Wrapper (bob.bio.base.extractor.Extractor):
frames.save(bob.io.base.HDF5File(filename, 'w'), self.extractor.write_feature)
def train(self, training_frames, extractor_file):
"""Trains the feature extractor with the preprocessed data of the given frames.
.. note::
This function is not called when the given ``extractor`` does not require training.
This function will train the feature extractor using all data from the selected frames of the training data.
The ``training_frames`` must be aligned by client if the given ``extractor`` requires that.
**Parameters:**
training_frames : [:py:class:`bob.bio.video.FrameContainer`] or [[:py:class:`bob.bio.video.FrameContainer`]]
The set of training frames, which will be used to train the ``extractor``.
extractor_file : str
The name of the extractor that should be written.
"""
if self.split_training_data_by_client:
[self._check_feature(frames) for client_frames in training_frames for frames in client_frames]
features = [[frame[1] for frames in client_frames for frame in self.frame_selector(frames)] for client_frames in training_frames]
else:
[self._check_feature(frames) for frames in training_frames]
features = [frame[1] for frames in training_frames for frame in self.frame_selector(frames)]
self.extractor.train(features, extractor_file)
def load(self, extractor_file):
"""Loads the trained extractor from file.
This function calls the wrapped class's ``load`` function.
**Parameters:**
extractor_file : str
The name of the extractor that should be loaded.
"""
self.extractor.load(extractor_file)
from .Wrapper import Wrapper
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
@@ -7,7 +7,41 @@ import bob.io.base
from .. import utils
class Wrapper (bob.bio.base.preprocessor.Preprocessor):
"""Wrapper class to run image preprocessing algorithms on video data.
This class provides functionality to read original video data from several databases.
So far, the video content from :py:class:`bob.db.mobio` and the image list content from :py:class:`bob.db.youtube` are supported.
Furthermore, frames are extracted from these video data, and a ``preprocessor`` algorithm is applied on all selected frames.
The preprocessor can either be provided as a registered resource, i.e., one of :ref:`bob.bio.face.preprocessors`, or an instance of a preprocessing class.
Since most of the databases do not provide annotations for all frames of the videos, commonly the preprocessor needs to apply face detection.
The ``frame_selector`` can be chosen to select some frames from the video.
By default, a few frames spread over the whole video sequence are selected.
The ``quality_function`` is used to assess the quality of the frame.
If no ``quality_function`` is given, the quality is based on the face detector, or simply left as ``None``.
So far, the quality of the frames is not used, but it is foreseen to select frames based on quality.
**Parameters:**
preprocessor : str or :py:class:`bob.bio.base.preprocessor.Preprocessor` instance
The preprocessor to be used to preprocess the frames.
frame_selector : :py:class:`bob.bio.video.FrameSelector`
A frame selector class to define which frames of the video to use.
quality_function : function or ``None``
A function assessing the quality of the preprocessed image.
If ``None``, no quality assessment is performed.
If the preprocessor contains a ``quality`` attribute, this is taken instead.
compressed_io : bool
Use compression to write the resulting preprocessed HDF5 files.
This is experimental and might cause trouble.
Use this flag with care.
"""
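The foreseen quality-based frame selection mentioned above could look roughly as follows. This is only a sketch of the idea, not code from this package; frames are modelled as ``(index, image, quality)`` tuples and the quality measure is a toy stand-in:

```python
def select_best_frames(frames, quality_function, n=2):
    """Rank frames by the quality function and keep the best n.
    Frames are modelled as (index, image, quality) tuples
    (illustrative only)."""
    scored = [(index, image, quality_function(image))
              for index, image, _ in frames]
    scored.sort(key=lambda frame: frame[2], reverse=True)
    return scored[:n]

# toy quality measure: brighter "images" (plain numbers) score higher
frames = [(0, 10, None), (1, 30, None), (2, 20, None)]
best = select_best_frames(frames, quality_function=lambda image: image)
```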
def __init__(self,
preprocessor = 'landmark-detect',
@@ -35,22 +69,40 @@ class Wrapper (bob.bio.base.preprocessor.Preprocessor):
self.quality_function = quality_function
self.compressed_io = compressed_io
def _check_data(self, frames):
"""Checks if the given video is in the desired format."""
assert isinstance(frames, utils.FrameContainer)
def __call__(self, frames, annotations = None):
"""__call__(frames, annotations = None) -> preprocessed
Preprocesses the given frames using the desired ``preprocessor``.
Faces are extracted for all frames in the given frame container, using the ``preprocessor`` specified in the constructor.
If given, the annotations need to be in a dictionary.
The key is either the frame number (for video data) or the image name (for image list data).
The value is another dictionary, building the relation between facial landmark names and their location, e.g. ``{'leye' : (le_y, le_x), 'reye' : (re_y, re_x)}``
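For illustration, such annotation dictionaries could look as follows. Frame numbers, image names and coordinates are made up; coordinates follow the ``(y, x)`` order used in the example above:

```python
# video data: the key is the frame number
video_annotations = {
    0: {'leye': (115, 220), 'reye': (115, 180)},
    5: {'leye': (118, 222), 'reye': (117, 182)},
}

# image-list data: the key is the image name
image_annotations = {
    'frame_001': {'leye': (115, 220), 'reye': (115, 180)},
}
```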