Commit 9233a5d7 authored by Manuel Günther

Added lots of documentation

parent c0b568b6
.. vim: set fileencoding=utf-8 :
.. Andre Anjos <andre.anjos@idiap.ch>
.. Thu 30 Jan 08:46:53 2014 CET
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
   :target: http://pythonhosted.org/bob.bio.video/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
   :target: https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.video/master/index.html
.. image:: http://travis-ci.org/bioidiap/bob.bio.video.svg?branch=master
   :target: https://travis-ci.org/bioidiap/bob.bio.video?branch=master
.. image:: https://coveralls.io/repos/bioidiap/bob.bio.video/badge.png?branch=master
   :target: https://coveralls.io/r/bioidiap/bob.bio.video?branch=master
.. image:: https://img.shields.io/badge/github-master-0000c0.png
   :target: https://github.com/bioidiap/bob.bio.video/tree/master
.. image:: http://img.shields.io/pypi/v/bob.bio.video.png
   :target: https://pypi.python.org/pypi/bob.bio.video
.. image:: http://img.shields.io/pypi/dm/bob.bio.video.png
   :target: https://pypi.python.org/pypi/bob.bio.video
=======================================
Run video face recognition algorithms
=======================================
This package is part of the ``bob.bio`` packages, which allow running comparable and reproducible biometric recognition experiments on publicly available databases.
.. note::
   This package contains functionality to run video face recognition experiments.
   It is an extension to the `bob.bio.base <http://pypi.python.org/pypi/bob.bio.base>`_ package, which provides the basic scripts.
   In this package, wrapper classes are provided, which allow running traditional image-based face recognition algorithms on video data.
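For illustration, wrapping an image-based preprocessor and extractor for video data might look as follows. This is a minimal sketch, assuming the ``landmark-detect`` and ``dct-blocks`` resources of `bob.bio.face <http://pypi.python.org/pypi/bob.bio.face>`_ are installed:

.. code-block:: python

  import bob.bio.video

  # run an image-based face preprocessor on each selected frame of the video
  preprocessor = bob.bio.video.preprocessor.Wrapper('landmark-detect')

  # extract image-based features from each preprocessed frame
  extractor = bob.bio.video.extractor.Wrapper('dct-blocks')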
For further information about ``bob.bio``, please read `its Documentation <http://pythonhosted.org/bob.bio.base/index.html>`_.
Installation
------------
To install this package -- alone or together with other `Packages of Bob <https://github.com/idiap/bob/wiki/Packages>`_ -- please read the `Installation Instructions <https://github.com/idiap/bob/wiki/Installation>`_.
For Bob_ to work properly, some dependent packages must be installed.
Please make sure that you have read the `Dependencies <https://github.com/idiap/bob/wiki/Dependencies>`_ for your operating system.
Documentation
-------------
For further documentation on this package, please read the `Stable Version <http://pythonhosted.org/bob.bio.video/index.html>`_ or the `Latest Version <https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.video/master/index.html>`_ of the documentation.
For a list of tutorials on this or the other packages of Bob_, or information on submitting issues, asking questions and starting discussions, please visit its website.
.. _bob: https://www.idiap.ch/software/bob
from .Wrapper import Wrapper
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
@@ -4,7 +4,28 @@ import os
from .. import utils
class Wrapper (bob.bio.base.extractor.Extractor):
"""Wrapper class to run feature extraction algorithms on frame containers.
Features are extracted for all frames in the frame container using the provided ``extractor``.
The ``extractor`` can either be provided as a registered resource, i.e., one of :ref:`bob.bio.face.extractors`, or an instance of an extractor class.
The ``frame_selector`` can be chosen to select some frames from the frame container.
By default, all frames from the previous preprocessing step are kept, but fewer frames might be selected in this stage.
**Parameters:**
extractor : str or :py:class:`bob.bio.base.extractor.Extractor` instance
The extractor to be used to extract features from the frames.
frame_selector : :py:class:`bob.bio.video.FrameSelector`
A frame selector class defining which frames of the preprocessed frame container to use.
compressed_io : bool
Use compression to write the resulting features to HDF5 files.
This is experimental and might cause trouble.
Use this flag with care.
"""
def __init__(self,
extractor,
@@ -32,11 +53,27 @@ class Extractor (bob.bio.base.extractor.Extractor):
)
def _check_feature(self, frames):
"""Checks if the given feature is in the desired format."""
assert isinstance(frames, utils.FrameContainer)
def __call__(self, frames):
"""__call__(frames) -> features
Extracts features from the frames of the video and returns them in a frame container.
This function is used to extract features using the desired ``extractor`` for all frames that are selected by the ``frame_selector`` specified in the constructor of this class.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The frame container containing preprocessed image frames.
**Returns:**
features : :py:class:`bob.bio.video.FrameContainer`
A frame container containing extracted features.
"""
self._check_feature(frames)
# go through the frames and extract the features
fc = utils.FrameContainer()
@@ -49,12 +86,40 @@ class Extractor (bob.bio.base.extractor.Extractor):
def read_feature(self, filename):
"""read_feature(filename) -> frames
Reads the extracted data from file and returns them in a frame container.
The extractor's ``read_feature`` function is used to read the data for each frame.
**Parameters:**
filename : str
The name of the extracted data file.
**Returns:**
frames : :py:class:`bob.bio.video.FrameContainer`
The read frames, stored in a frame container.
"""
if self.compressed_io:
return utils.load_compressed(filename, self.extractor.read_feature)
else:
return utils.FrameContainer(bob.io.base.HDF5File(filename), self.extractor.read_feature)
def write_feature(self, frames, filename):
"""Writes the extracted features to file.
The extractor's ``write_feature`` function is used to write the features for each frame.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The extracted features for the selected frames, as returned by the :py:meth:`__call__` function.
filename : str
The file name to write the extracted feature into.
"""
self._check_feature(frames)
if self.compressed_io:
return utils.save_compressed(frames, filename, self.extractor.write_feature)
@@ -62,15 +127,38 @@ class Extractor (bob.bio.base.extractor.Extractor):
frames.save(bob.io.base.HDF5File(filename, 'w'), self.extractor.write_feature)
def train(self, training_frames, extractor_file):
"""Trains the feature extractor with the preprocessed data of the given frames.
.. note::
This function is not called when the given ``extractor`` does not require training.
This function will train the feature extractor using all data from the selected frames of the training data.
The ``training_frames`` must be split by client if the given ``extractor`` requires that.
**Parameters:**
training_frames : [:py:class:`bob.bio.video.FrameContainer`] or [[:py:class:`bob.bio.video.FrameContainer`]]
The set of training frames, which will be used to train the ``extractor``.
extractor_file : str
The name of the file to write the trained extractor into.
"""
if self.split_training_data_by_client:
[self._check_feature(frames) for client_frames in training_frames for frames in client_frames]
features = [[frame[1] for frames in client_frames for frame in self.frame_selector(frames)] for client_frames in training_frames]
else:
[self._check_feature(frames) for frames in training_frames]
features = [frame[1] for frames in training_frames for frame in self.frame_selector(frames)]
self.extractor.train(features, extractor_file)
def load(self, extractor_file):
"""Loads the trained extractor from file.
This function calls the wrapped class's ``load`` function.
**Parameters:**
extractor_file : str
The name of the file to load the trained extractor from.
"""
self.extractor.load(extractor_file)
from .Wrapper import Wrapper
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
@@ -7,7 +7,41 @@ import bob.io.base
from .. import utils
class Wrapper (bob.bio.base.preprocessor.Preprocessor):
"""Wrapper class to run image preprocessing algorithms on video data.
This class provides functionality to read original video data from several databases.
So far, the video content from :py:class:`bob.db.mobio` and the image list content from :py:class:`bob.db.youtube` are supported.
Furthermore, frames are extracted from these video data, and a ``preprocessor`` algorithm is applied on all selected frames.
The preprocessor can either be provided as a registered resource, i.e., one of :ref:`bob.bio.face.preprocessors`, or an instance of a preprocessing class.
Since most of the databases do not provide annotations for all frames of the videos, commonly the preprocessor needs to apply face detection.
The ``frame_selector`` can be chosen to select some frames from the video.
By default, a few frames spread over the whole video sequence are selected.
The ``quality_function`` is used to assess the quality of the frame.
If no ``quality_function`` is given, the quality is based on the face detector, or simply left as ``None``.
So far, the quality of the frames is not used, but selecting frames based on their quality is foreseen.
**Parameters:**
preprocessor : str or :py:class:`bob.bio.base.preprocessor.Preprocessor` instance
The preprocessor to be used to preprocess the frames.
frame_selector : :py:class:`bob.bio.video.FrameSelector`
A frame selector class defining which frames of the video to use.
quality_function : function or ``None``
A function assessing the quality of the preprocessed image.
If ``None``, no quality assessment is performed.
If the preprocessor contains a ``quality`` attribute, this is taken instead.
compressed_io : bool
Use compression to write the resulting preprocessed HDF5 files.
This is experimental and might cause trouble.
Use this flag with care.
"""
def __init__(self,
preprocessor = 'landmark-detect',
@@ -35,22 +69,40 @@ class Preprocessor (bob.bio.base.preprocessor.Preprocessor):
self.quality_function = quality_function
self.compressed_io = compressed_io
def _check_data(self, frames):
"""Checks if the given video is in the desired format."""
assert isinstance(frames, utils.FrameContainer)
def __call__(self, frames, annotations = None):
"""__call__(frames, annotations = None) -> preprocessed
Preprocesses the given frames using the desired ``preprocessor``.
Faces are extracted for all frames in the given frame container, using the ``preprocessor`` specified in the constructor.
If given, the annotations need to be in a dictionary.
The key is either the frame number (for video data) or the image name (for image list data).
The value is another dictionary, building the relation between facial landmark names and their location, e.g. ``{'leye' : (le_y, le_x), 'reye' : (re_y, re_x)}``
The annotations for the corresponding frames, if present, are passed to the preprocessor.
Please ensure that your database interface provides the annotations in the desired format.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The pre-selected frames, as returned by :py:meth:`read_original_data`.
annotations : dict or ``None``
The annotations for the frames, if any.
**Returns:**
preprocessed : :py:class:`bob.bio.video.FrameContainer`
A frame container that contains the preprocessed frames.
"""
self._check_data(frames)
annots = None
fc = utils.FrameContainer()
@@ -70,24 +122,70 @@ class Preprocessor (bob.bio.base.preprocessor.Preprocessor):
else:
quality = None
# add image to frame container
if hasattr(preprocessed, 'copy'):
preprocessed = preprocessed.copy()
fc.add(index, preprocessed, quality)
return fc
def read_original_data(self, data):
"""read_original_data(data) -> frames
Reads the original data from file and selects some frames using the desired ``frame_selector``.
Currently, two types of data are supported:
1. video data, stored in a 3D or 4D :py:class:`numpy.ndarray`, which will be read using :py:func:`bob.io.base.load`
2. image lists, given as a list of image file names, each of which will be read with :py:func:`bob.io.base.load`
**Parameters:**
data : 3D or 4D :py:class:`numpy.ndarray`, or [str]
The original data to read.
**Returns:**
frames : :py:class:`bob.bio.video.FrameContainer`
The selected frames, stored in a frame container.
"""
return self.frame_selector(data)
def read_data(self, filename):
"""read_data(filename) -> frames
Reads the preprocessed data from file and returns them in a frame container.
The preprocessor's ``read_data`` function is used to read the data for each frame.
**Parameters:**
filename : str
The name of the preprocessed data file.
**Returns:**
frames : :py:class:`bob.bio.video.FrameContainer`
The read frames, stored in a frame container.
"""
if self.compressed_io:
return utils.load_compressed(filename, self.preprocessor.read_data)
else:
return utils.FrameContainer(bob.io.base.HDF5File(filename), self.preprocessor.read_data)
def write_data(self, frames, filename):
"""Writes the preprocessed data to file.
The preprocessor's ``write_data`` function is used to write the data for each frame.
**Parameters:**
frames : :py:class:`bob.bio.video.FrameContainer`
The preprocessed frames, as returned by the :py:meth:`__call__` function.
filename : str
The name of the preprocessed data file to write.
"""
self._check_data(frames)
if self.compressed_io:
return utils.save_compressed(frames, filename, self.preprocessor.write_data)
......
from .Wrapper import Wrapper
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
@@ -18,18 +18,18 @@ def test_algorithm():
# load test data
extracted_file = pkg_resources.resource_filename("bob.bio.video.test", "data/extracted.hdf5")
extractor = bob.bio.video.extractor.Wrapper('dummy', compressed_io=False)
extracted = extractor.read_feature(extracted_file)
# use video tool with dummy face recognition tool, which contains all required functionality
algorithm = bob.bio.video.algorithm.Wrapper(bob.bio.base.test.dummy.algorithm.DummyAlgorithm(), compressed_io=False)
try:
# projector training
algorithm.train_projector([extracted] * 25, filename)
assert os.path.exists(filename)
algorithm2 = bob.bio.video.algorithm.Wrapper("bob.bio.base.test.dummy.algorithm.DummyAlgorithm()", compressed_io=False)
# load projector; will perform checks internally
algorithm2.load_projector(filename)
......
@@ -19,15 +19,15 @@ def test_extractor():
# load test data
preprocessed_video_file = pkg_resources.resource_filename("bob.bio.video.test", "data/preprocessed.hdf5")
preprocessor = bob.bio.video.preprocessor.Wrapper('face-crop-eyes', compressed_io=False)
preprocessed_video = preprocessor.read_data(preprocessed_video_file)
extractor = bob.bio.video.extractor.Wrapper(bob.bio.base.test.dummy.extractor.DummyExtractor(), compressed_io=False)
extractor.train([preprocessed_video]*5, filename)
assert os.path.exists(filename)
extractor2 = bob.bio.video.extractor.Wrapper("dummy", compressed_io=False)
extractor2.load(filename)
extracted = extractor2(preprocessed_video)
......
@@ -19,7 +19,7 @@ def test_annotations():
# video preprocessor using a face crop preprocessor
frame_selector = bob.bio.video.FrameSelector(selection_style="all")
preprocessor = bob.bio.video.preprocessor.Wrapper('face-crop-eyes', frame_selector, compressed_io=False)
# read original data
original = preprocessor.read_original_data(image_files)
@@ -41,7 +41,7 @@ def test_detect():
video_file = pkg_resources.resource_filename("bob.bio.video.test", "data/testvideo.avi")
frame_selector = bob.bio.video.FrameSelector(max_number_of_frames=3, selection_style="spread")
preprocessor = bob.bio.video.preprocessor.Wrapper('face-detect', frame_selector, compressed_io=False)
video = preprocessor.read_original_data(video_file)
assert isinstance(video, bob.bio.video.FrameContainer)
@@ -61,7 +61,7 @@ def test_flandmark():
video_file = pkg_resources.resource_filename("bob.bio.video.test", "data/testvideo.avi")
frame_selector = bob.bio.video.FrameSelector(max_number_of_frames=3, selection_style="spread")
preprocessor = bob.bio.video.preprocessor.Wrapper('landmark-detect', frame_selector, compressed_io=False)
video = preprocessor.read_original_data(video_file)
assert isinstance(video, bob.bio.video.FrameContainer)
......
@@ -10,9 +10,9 @@ def test_verify_video():
# define dummy parameters
parameters = [
'-d', 'dummy-video',
'-p', 'bob.bio.video.preprocessor.Wrapper("dummy")',
'-e', 'bob.bio.video.extractor.Wrapper("dummy")',
'-a', 'bob.bio.video.algorithm.Wrapper("dummy")',
'--zt-norm',
'-s', 'test_video',
'--temp-directory', test_dir,
......
@@ -133,12 +133,12 @@ if sphinx.__version__ >= "1.0":
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'img/logo.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = 'img/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -236,18 +236,24 @@ rst_epilog = ''
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'bob.bio.video', u'Video Extensions for bob.bio', [u'Idiap Research Institute'], 1)
]
# Default processing flags for sphinx
autoclass_content = 'both'
autodoc_member_order = 'bysource'
autodoc_default_flags = ['members', 'inherited-members', 'show-inheritance']
# For inter-documentation mapping:
from bob.extension.utils import link_documentation
intersphinx_mapping = link_documentation(['python', 'numpy', 'bob.bio.gmm', 'bob.bio.video', 'bob.bio.csu'])
def skip(app, what, name, obj, skip, options):
# Do not skip the __call__ function, as we have a special implementation for it.
if name in ("__call__",):
return False
return skip
def setup(app):
app.connect("autodoc-skip-member", skip)
======================
Implementation Details
======================
Wrapper classes
---------------
The tools implemented in this package provide wrapper classes for preprocessing, feature extraction and face recognition algorithms that are implemented in other packages of ``bob.bio``.
The basic idea is that the wrapped algorithms are provided with several frames of the video.
For this purpose, the :py:class:`bob.bio.video.utils.FrameSelector` can be applied to select one or several frames from the source video.
For each of the selected frames, the faces are aligned -- either using hand-labeled data, or after detecting the faces using :py:class:`bob.bio.face.preprocessor.FaceDetect`.
Afterward, features are extracted, models are enrolled using several frames per video, and the scoring procedure fuses the scores from one model and several probe frames of a probe video.
If one of the base algorithms requires training, the wrapper classes provide this information accordingly.
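For example, a frame selector that picks three frames spread over the whole video, plugged into a preprocessor wrapper, could be set up as follows (a minimal sketch, mirroring the calls used in the tests of this package):

.. code-block:: python

  import bob.bio.video

  # select at most 3 frames, spread equally over the video
  frame_selector = bob.bio.video.FrameSelector(max_number_of_frames=3, selection_style="spread")

  # apply an image-based face detector and cropper to exactly these frames
  preprocessor = bob.bio.video.preprocessor.Wrapper('face-detect', frame_selector, compressed_io=False)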
Hence, in this package we provide three wrapper classes:
* :py:class:`bob.bio.video.preprocessor.Wrapper`
* :py:class:`bob.bio.video.extractor.Wrapper`
* :py:class:`bob.bio.video.algorithm.Wrapper`
Each of these wrapper classes is created with a base algorithm that will do the actual preprocessing, extraction, projection, enrollment or scoring.
The base class can be specified in three different ways.
The most prominent way is surely to use one of the registered :ref:`bob.bio.base.resources`.
The more sophisticated way is to provide an *instance* of the wrapped class, or even a *string* that represents a constructor call of the desired object.
Finally (rarely used, though) you can provide the path of :ref:`bob.bio.base.configuration-files`.
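To make the first two of these options concrete, the following sketch constructs the same extractor wrapper from a registered resource, from an instance, and from a constructor string; the ``dct-blocks`` resource and the ``bob.bio.face.extractor.DCTBlocks`` class are assumed to be provided by ``bob.bio.face``:

.. code-block:: python

  import bob.bio.video
  import bob.bio.face

  # 1. a registered resource
  extractor = bob.bio.video.extractor.Wrapper('dct-blocks')

  # 2. an instance of the wrapped class
  extractor = bob.bio.video.extractor.Wrapper(bob.bio.face.extractor.DCTBlocks())

  # 3. a string that represents a constructor call of the desired object
  extractor = bob.bio.video.extractor.Wrapper("bob.bio.face.extractor.DCTBlocks()")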
The IO of the preprocessed frames and of the extracted or projected features is provided using the :py:class:`bob.bio.video.FrameContainer` interface.
This frame container reads and writes :py:class:`bob.io.base.HDF5File`\s, where it stores information about the frames.
Additionally, it uses the IO functionality of the wrapped classes to actually write the data in the desired format.
Hence, all IO functionalities of the wrapped classes need to be able to handle :py:class:`bob.io.base.HDF5File`.
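A rough sketch of this IO contract, mirroring the calls used inside the wrapper classes (``features.hdf5`` is a hypothetical file name, ``extractor`` is the wrapped image-based extractor, and ``frame_container`` holds the extracted features):

.. code-block:: python

  import bob.io.base
  import bob.bio.video

  # write the frame container; the per-frame data is written by the wrapped extractor
  frame_container.save(bob.io.base.HDF5File("features.hdf5", 'w'), extractor.write_feature)

  # read it back; the per-frame data is read by the wrapped extractor
  frame_container = bob.bio.video.FrameContainer(bob.io.base.HDF5File("features.hdf5"), extractor.read_feature)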
.. note::
  The video extensions also integrate into the specialized scripts provided by :ref:`bob.bio.gmm <bob.bio.gmm>`.
.. _bob.bio.video.resources:
Registered Resources
--------------------
In this package we do not provide registered resources for the wrapper classes.
Hence, when you want to run an experiment using the video wrapper classes, you might want to create the wrapper classes inline:
.. code-block:: sh

  ./bin/verify.py --database youtube --preprocessor 'bob.bio.video.preprocessor.Wrapper("landmark-detect")' --features 'bob.bio.video.extractor.Wrapper("dct-blocks")' --algorithm 'bob.bio.video.algorithm.Wrapper("gmm")' ...
.. _bob.bio.video.databases:
Databases
~~~~~~~~~
All video databases defined here rely on the :py:class:`bob.bio.base.database.DatabaseBob` interface, which in turn uses the :ref:`verification_databases`.
After downloading and extracting the original data of the data sets, the scripts need to know where the data was installed.
For this purpose, the ``./bin/verify.py`` script can read a special file where those directories are stored, see :ref:`bob.bio.base.installation`.
By default, this file is located in your home directory, but you can specify another file on command line.
The other option is to change the directories directly inside the configuration files.
Here is the list of files and replacement strings for all databases that are registered as resource, in alphabetical order:
* MOBIO: ``'mobio-video'``

  - Videos: ``[YOUR_MOBIO_VIDEO_DIRECTORY]``

* Youtube: ``'youtube'``

  - Frames: ``[YOUR_YOUTUBE_DIRECTORY]``
.. note::
  You can choose either of the frame databases, i.e., the ``frames_images_DB`` directory containing the original data, or the ``aligned_images_DB`` containing pre-cropped faces.
You can use the ``./bin/databases.py`` script to list which data directories are correctly set up.
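For example, such a file might contain entries like the following; the paths are placeholders, and the exact file location and format are described in :ref:`bob.bio.base.installation`:

.. code-block:: text

  [YOUR_MOBIO_VIDEO_DIRECTORY] = /path/to/mobio/videos
  [YOUR_YOUTUBE_DIRECTORY] = /path/to/youtube/frames_images_DB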
.. _bob.bio.video.implemented:
==================================
Tools implemented in bob.bio.video
==================================
Summary
-------
.. autosummary::

  bob.bio.video.FrameSelector
  bob.bio.video.FrameContainer
  bob.bio.video.preprocessor.Wrapper
  bob.bio.video.extractor.Wrapper
  bob.bio.video.algorithm.Wrapper
Details
-------
.. automodule:: bob.bio.video
.. automodule:: bob.bio.video.preprocessor
.. automodule:: bob.bio.video.extractor
.. automodule:: bob.bio.video.algorithm
@@ -8,8 +8,49 @@
========================================
Run Video Face Recognition Experiments
========================================
This package is part of the ``bob.bio`` packages, which provide open source tools to run comparable and reproducible biometric recognition experiments.
In this package, tools to run video face recognition experiments are provided.
So far, a single set of tools is available: meta-classes that allow running other well-established face recognition algorithms on video data.
Package Documentation
---------------------
For more detailed information about the structure of the ``bob.bio`` packages, please refer to the documentation of :ref:`bob.bio.base <bob.bio.base>`.
Particularly for the installation of this and other ``bob.bio`` packages, please read :ref:`bob.bio.base.installation`.
.. automodule:: bob.bio.video
In the following, we provide more detailed information about the particularities of this package only.
===========
Users Guide
===========
.. toctree::
  :maxdepth: 2

  implementation
================
Reference Manual
================
.. toctree::
  :maxdepth: 2

  implemented
=========
ToDo-List
=========
This documentation is still under development.
Here is a list of things that need to be done:
.. todolist::
==================
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`