Commit 1145e469 authored by Manuel Günther

First running version of bob.bio.csu

*~
*.swp
*.pyc
bin
eggs
parts
.installed.cfg
.mr.developer.cfg
*.egg-info
src
develop-eggs
sphinx
dist
include README.rst bootstrap-buildout.py buildout.cfg COPYING version.txt requirements.txt
recursive-include doc *.py *.rst
.. vim: set fileencoding=utf-8 :
.. Manuel Guenther <manuel.guenther@idiap.ch>
.. Fri Sep 19 12:51:09 CEST 2014
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
:target: http://pythonhosted.org/xfacereclib.extension.CSU/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
:target: https://www.idiap.ch/software/bob/docs/latest/bioidiap/xfacereclib.extension.CSU/master/index.html
.. image:: https://img.shields.io/badge/github-master-0000c0.png
:target: https://github.com/bioidiap/xfacereclib.extension.CSU/tree/master
.. image:: http://img.shields.io/pypi/v/xfacereclib.extension.CSU.png
:target: https://pypi.python.org/pypi/xfacereclib.extension.CSU
.. image:: http://img.shields.io/pypi/dm/xfacereclib.extension.CSU.png
:target: https://pypi.python.org/pypi/xfacereclib.extension.CSU
.. image:: https://img.shields.io/badge/original-software-a000a0.png
:target: http://www.cs.colostate.edu/facerec
===================================================================
FaceRecLib Wrapper classes for the CSU Face Recognition Resources
===================================================================
This satellite package to the FaceRecLib_ provides wrapper classes for the CSU face recognition resources, which can be downloaded from http://www.cs.colostate.edu/facerec.
Two algorithms are provided by the CSU toolkit (and also by this satellite package): the local region PCA (LRPCA) and the LDA-IR (also known as CohortLDA).
For more information about the LRPCA and the LDA-IR algorithm, please refer to the documentation on http://www.cs.colostate.edu/facerec/.
For further information about the FaceRecLib_, please read `its Documentation <http://pythonhosted.org/facereclib/index.html>`_.
For details on how to use this package in a face recognition experiment, please see http://pypi.python.org/pypi/xfacereclib.paper.BeFIT2012
Installation Instructions
-------------------------
This package provides wrapper classes for the CSU facerec2010 module, which is contained in the `CSU Face Recognition Resources <http://www.cs.colostate.edu/facerec>`_; from there you need to download the Baseline 2011 Algorithms.
Before you go on with this package, please make sure that you have read the installation instructions in the Documentation_ of this package on how to patch the original source code to work with our algorithms.
.. note::
Since the original CSU resources are not Python 3 compatible, this package only supports Python 2.
For external dependencies of the CSU resources, please read their `README <http://www.cs.colostate.edu/facerec/algorithms/README.pdf>`__.
The FaceRecLib_ and parts of this package rely on Bob_, an open-source signal-processing and machine learning toolbox.
For Bob_ to be able to work properly, some dependent packages are required to be installed.
Please make sure that you have read the `Dependencies <https://github.com/idiap/bob/wiki/Dependencies>`_ for your operating system.
Documentation
-------------
For further documentation on this package, please read the `Stable Version <http://pythonhosted.org/xfacereclib.extension.CSU/index.html>`_ or the `Latest Version <https://www.idiap.ch/software/bob/docs/latest/bioidiap/xfacereclib.extension.CSU/master/index.html>`_ of the documentation.
For a list of tutorials on the packages of Bob_, or information on submitting issues, asking questions and starting discussions, please visit its website.
.. _bob: https://www.idiap.ch/software/bob
.. _facereclib: http://pypi.python.org/pypi/facereclib
#see http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
__import__('pkg_resources').declare_namespace(__name__)
#see http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
__import__('pkg_resources').declare_namespace(__name__)
from . import preprocessor
from . import extractor
from . import algorithm
from . import utils
from . import test
def get_config():
"""Returns a string containing the configuration information.
"""
import bob.extension
return bob.extension.get_config(__name__)
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import facerec2010
import bob.bio.base
from .. import utils
class LDAIR (bob.bio.base.algorithm.Algorithm):
"""This class defines a wrapper for the facerec2010.baseline.lda.LRLDA class to be used as a face recognition :py:class:`facereclib.tools.Tool` in the :ref:`FaceRecLib <facereclib>`."""
def __init__(
self,
REGION_ARGS,
REGION_KEYWORDS,
multiple_model_scoring = 'max', # by default, use the maximum score between several models and the probe
multiple_probe_scoring = 'max' # by default, use the maximum score between the model and several probes
):
"""Constructor Documentation:
REGION_ARGS
The region arguments as taken from facerec2010.baseline.lda.CohortLDA_REGIONS
REGION_KEYWORDS
The region keywords as taken from facerec2010.baseline.lda.CohortLDA_KEYWORDS
multiple_model_scoring
The scoring strategy if models are enrolled from several images; see :py:class:`bob.bio.base.algorithm.Algorithm` for details.
multiple_probe_scoring
The scoring strategy if a score is computed from several probe images; see :py:class:`bob.bio.base.algorithm.Algorithm` for details.
"""
bob.bio.base.algorithm.Algorithm.__init__(self, multiple_model_scoring=multiple_model_scoring, multiple_probe_scoring=multiple_probe_scoring, **REGION_KEYWORDS)
self.ldair = facerec2010.baseline.lda.LRLDA(REGION_ARGS, **REGION_KEYWORDS)
self.use_cohort = 'cohort_adjust' not in REGION_ARGS[0] or REGION_ARGS[0]['cohort_adjust']
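The cohort flag above is a short-circuit expression over the first region dictionary: cohort adjustment stays enabled unless it is explicitly switched off. A minimal sketch of the same logic (the dictionaries below are hypothetical stand-ins for entries of `facerec2010.baseline.lda.CohortLDA_REGIONS`):

```python
def cohort_enabled(region_args):
    """Cohort adjustment is on unless the first region explicitly disables it."""
    return 'cohort_adjust' not in region_args[0] or region_args[0]['cohort_adjust']

print(cohort_enabled([{'some_other_key': 1}]))     # key absent -> True
print(cohort_enabled([{'cohort_adjust': True}]))   # explicitly enabled -> True
print(cohort_enabled([{'cohort_adjust': False}]))  # explicitly disabled -> False
```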
def _check_feature(self, feature):
"""Checks that the features are of the desired data type."""
assert isinstance(feature, facerec2010.baseline.common.FaceRecord)
assert hasattr(feature, "features")
def load_projector(self, projector_file):
"""This function loads the Projector from the given projector file.
This is only required when the cohort adjustment is enabled.
"""
# To avoid re-training the Projector, we load the Extractor file instead.
# This is only required when the cohort adjustment is enabled, otherwise the default parametrization of LDA-IR should be sufficient.
# Be careful, THIS IS A HACK and it might not work in all circumstances!
if self.use_cohort:
extractor_file = projector_file.replace("Projector", "Extractor")
self.ldair = utils.load_pickle(extractor_file)
def enroll(self, enroll_features):
"""Enrolls a model from features from several images by simply storing all given features."""
[self._check_feature(f) for f in enroll_features]
# just store all features (should be of type FaceRecord)
# since the given features are already in the desired format, there is nothing to do.
return enroll_features
def write_model(self, model, model_file):
"""Saves the enrolled model to file using the pickle module."""
# just dump the model to .pkl file
utils.save_pickle(model, model_file)
def read_model(self, model_file):
"""Loads an enrolled model from file using the pickle module."""
# just read the model from .pkl file
return utils.load_pickle(model_file)
# probe and model are identically stored in a .pkl file
read_probe = read_model
def score(self, model, probe):
"""Compute the score for the given model (a list of FaceRecords) and a probe (a FaceRecord)"""
if isinstance(model, list):
# compute score fusion strategy with several model features (which is implemented in the base class)
return self.score_for_multiple_models(model, probe)
else:
self._check_feature(model)
self._check_feature(probe)
return self.ldair.similarityMatrix([probe], [model])[0,0]
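The `score_for_multiple_models` fusion called above is implemented by the `bob.bio.base` base class; the sketch below only illustrates, under that assumption, what the two strategies named in the constructor (`'max'` and `'average'`) do with a list of per-model scores. The function name and signature are hypothetical:

```python
import numpy

def fuse_scores(scores, strategy='max'):
    """Hypothetical illustration of multiple-model score fusion."""
    if strategy == 'max':
        return float(numpy.max(scores))   # the best matching model wins
    elif strategy == 'average':
        return float(numpy.mean(scores))  # average over all enrolled models
    raise ValueError("unknown strategy '%s'" % strategy)

print(fuse_scores([0.2, 0.9, 0.5]))             # -> 0.9
print(fuse_scores([0.2, 0.9, 0.4], 'average'))  # -> 0.5
```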
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import facerec2010
import bob.bio.base
import numpy
from .. import utils
class LRPCA (bob.bio.base.algorithm.Algorithm):
"""This class defines a wrapper for the facerec2010.baseline.lrpca.LRPCA class to be used as a face recognition :py:class:`facereclib.tools.Tool` in the :ref:`FaceRecLib <facereclib>`."""
def __init__(
self,
TUNING,
multiple_model_scoring = 'max', # by default, use the maximum score between several models and the probe
multiple_probe_scoring = 'max' # by default, use the maximum score between the model and several probes
):
"""Constructor Documentation:
TUNING
The tuning for the LRPCA algorithm as taken from the facerec2010.baseline.lrpca.GBU_TUNING
multiple_model_scoring
The scoring strategy if models are enrolled from several images; see :py:class:`bob.bio.base.algorithm.Algorithm` for details.
multiple_probe_scoring
The scoring strategy if a score is computed from several probe images; see :py:class:`bob.bio.base.algorithm.Algorithm` for details.
"""
bob.bio.base.algorithm.Algorithm.__init__(self, multiple_model_scoring=multiple_model_scoring, multiple_probe_scoring=multiple_probe_scoring, **TUNING)
# initialize LRPCA (not sure if this is really required)
self.lrpca = facerec2010.baseline.lrpca.LRPCA(**TUNING)
def _check_feature(self, feature):
"""Assures that the feature is of the desired type"""
assert isinstance(feature, numpy.ndarray)
assert feature.ndim == 1
assert feature.dtype == numpy.float64
def _check_model(self, model):
"""Assures that the model is of the desired type"""
assert isinstance(model, facerec2010.baseline.pca.FaceRecord)
assert hasattr(model, "feature")
def enroll(self, enroll_features):
"""Enrolls a model from features from several images by simply storing all given features."""
# no rule to enroll features in the LRPCA setup, so we just store all features
# create model Face records
model_records = []
for feature in enroll_features:
model_record = facerec2010.baseline.pca.FaceRecord(None,None,None)
model_record.feature = feature[:]
model_records.append(model_record)
return model_records
def write_model(self, model, model_file):
"""Saves the enrolled model to file using the pickle module."""
# just dump the model to .pkl file
utils.save_pickle(model, model_file)
def read_model(self, model_file):
"""Loads an enrolled model from file using the pickle module."""
# just read the model from .pkl file
return utils.load_pickle(model_file)
def score(self, model, probe):
"""Computes the score for the given model (a list of FaceRecords) and a probe feature (a numpy.ndarray)"""
if isinstance(model, list):
# compute score fusion strategy with several model features (which is implemented in the base class)
return self.score_for_multiple_models(model, probe)
else:
self._check_model(model)
self._check_feature(probe)
# compute score for one model and one probe
probe_record = facerec2010.baseline.pca.FaceRecord(None,None,None)
probe_record.feature = probe
return self.lrpca.similarityMatrix([probe_record], [model])[0,0]
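Models and probes are serialized with the `utils.save_pickle`/`utils.load_pickle` helpers used throughout this package; they are assumed here to be thin wrappers around the standard `pickle` module, roughly:

```python
import os
import pickle
import tempfile

def save_pickle(obj, filename):
    # presumed behavior of bob.bio.csu.utils.save_pickle
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)

def load_pickle(filename):
    # presumed behavior of bob.bio.csu.utils.load_pickle
    with open(filename, 'rb') as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), 'Model.pkl')
save_pickle([1.0, 2.0, 3.0], path)
print(load_pickle(path))  # -> [1.0, 2.0, 3.0]
```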
from .LRPCA import LRPCA
from .LDAIR import LDAIR
import facerec2010
import bob.bio.csu
algorithm = bob.bio.csu.algorithm.LDAIR(
REGION_ARGS = facerec2010.baseline.lda.CohortLDA_REGIONS,
REGION_KEYWORDS = facerec2010.baseline.lda.CohortLDA_KEYWORDS
)
import facerec2010
import bob.bio.csu
algorithm = bob.bio.csu.algorithm.LRPCA(
TUNING = facerec2010.baseline.lrpca.GBU_TUNING
)
import facerec2010
import bob.bio.csu
extractor = bob.bio.csu.extractor.LDAIR(
REGION_ARGS = facerec2010.baseline.lda.CohortLDA_REGIONS,
REGION_KEYWORDS = facerec2010.baseline.lda.CohortLDA_KEYWORDS
)
import facerec2010
import bob.bio.csu
extractor = bob.bio.csu.extractor.LRPCA(
TUNING = facerec2010.baseline.lrpca.GBU_TUNING
)
import facerec2010
import bob.bio.csu
preprocessor = bob.bio.csu.preprocessor.LDAIR(
REGION_ARGS = facerec2010.baseline.lda.CohortLDA_REGIONS,
REGION_KEYWORDS = facerec2010.baseline.lda.CohortLDA_KEYWORDS
)
import facerec2010
import bob.bio.csu
preprocessor = bob.bio.csu.preprocessor.LRPCA(
TUNING = facerec2010.baseline.lrpca.GBU_TUNING
)
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import facerec2010
import pyvision
import PIL
import numpy
import bob.bio.base
import logging
logger = logging.getLogger("bob.bio.csu")
from .. import utils
class LDAIR (bob.bio.base.extractor.Extractor):
"""This class defines a wrapper for the facerec2010.baseline.lda.LRLDA class to be used as a :py:class:`facereclib.feature.Extractor` in the :ref:`FaceRecLib <facereclib>`."""
def __init__(self, REGION_ARGS, REGION_KEYWORDS):
"""Constructor Documentation:
REGION_ARGS
The region arguments as taken from facerec2010.baseline.lda.CohortLDA_REGIONS
REGION_KEYWORDS
The region keywords as taken from facerec2010.baseline.lda.CohortLDA_KEYWORDS
"""
bob.bio.base.extractor.Extractor.__init__(self, requires_training=True, split_training_data_by_client=True, **REGION_KEYWORDS)
self.ldair = facerec2010.baseline.lda.LRLDA(REGION_ARGS, **REGION_KEYWORDS)
self.layers = len(REGION_ARGS)
self.use_cohort = 'cohort_adjust' not in REGION_ARGS[0] or REGION_ARGS[0]['cohort_adjust']
# overwrite the training image list generation from the file selector
# since LDA-IR needs training data to be split up into identities
self.use_training_images_sorted_by_identity = True
def _check_image(self, image):
"""Checks that the input data is in the expected format"""
assert isinstance(image, numpy.ndarray)
assert image.ndim == 3
assert image.dtype == numpy.uint8
def _py_image(self, image):
"""Generates a 4D structure used for LDA-IR feature extraction"""
pil_image = PIL.Image.new("RGB",(image.shape[2], image.shape[1]))
# TODO: Test if there is any faster method to convert the image type
for y in range(image.shape[1]):
for x in range(image.shape[2]):
# copy image content (re-order [y,x] to (x,y) and add the colors as (r,g,b))
pil_image.putpixel((x,y),(image[0,y,x], image[1,y,x], image[2,y,x]))
# convert to pyvision image
py_image = pyvision.Image(pil_image)
# generate some copies of the image
return [py_image.copy() for i in range(self.layers)]
def train(self, image_list, extractor_file):
"""Trains the LDA-IR module with the given image list and saves its result into the given extractor file using the pickle module."""
[self._check_image(image) for client_images in image_list for image in client_images]
train_count = 0
for client_index in range(len(image_list)):
# Initializes an arrayset for the data
for image in image_list[client_index]:
# create a PIL image (since pyvision's implementation differs
# depending on the image type);
# additionally, PIL uses pixels in (x,y) order
pyimage = self._py_image(image)
# append training data to the LDA-IR training
# (the None parameters are due to the fact that preprocessing happened before)
self.ldair.addTraining(str(client_index), pyimage, None, None, None)
train_count += 1
logger.info(" -> Training LDA-IR with %d images", train_count)
self.ldair.train()
if self.use_cohort:
logger.info(" -> Adding cohort images")
# add image cohort for score normalization
for client_images in image_list:
# Initializes an arrayset for the data
for image in client_images:
pyimage = self._py_image(image)
self.ldair.addCohort(pyimage, None, None, None)
# and write the result to file, which in this case simply uses pickle
utils.save_pickle(self.ldair, extractor_file)
# remember the length of the produced feature
# self.feature_length = self.ldair.regions[0][2].lda_vecs.shape[1]
# for r in self.ldair.regions: assert r[2].lda_vecs.shape[1] == self.feature_length
def load(self, extractor_file):
"""Loads the LDA-IR from the given extractor file using the pickle module."""
# read LDA-IR extractor
self.ldair = utils.load_pickle(extractor_file)
# remember the length of the produced feature
# self.feature_length = self.ldair.regions[0][2].lda_vecs.shape[1]
# for r in self.ldair.regions: assert r[2].lda_vecs.shape[1] == self.feature_length
def __call__(self, image):
"""Projects the data using LDA-IR."""
self._check_image(image)
# create a pyvision image
pyimage = self._py_image(image)
# project the data by creating a "FaceRecord"
face_record = self.ldair.getFaceRecord(pyimage, None, None, None, compute_cohort_scores = self.use_cohort)
return face_record
def write_feature(self, feature, feature_file):
"""Saves the projected LDA-IR feature to file using the pickle module."""
# write the feature to a .pkl file
# (since FaceRecord does not have a save method)
utils.save_pickle(feature, feature_file)
def read_feature(self, feature_file):
"""Reads the projected LDA-IR feature from file using the pickle module."""
# read the feature from .pkl file
return utils.load_pickle(feature_file)
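The pixel-by-pixel loop in `_py_image` above carries a TODO about conversion speed. Assuming images arrive as `(3, height, width)` `uint8` arrays, as `_check_image` enforces, `PIL.Image.fromarray` on a reordered copy should produce the same RGB image in a single call:

```python
import numpy
import PIL.Image

image = numpy.random.randint(0, 256, size=(3, 32, 24), dtype=numpy.uint8)

# pixel-by-pixel conversion, as in _py_image above
slow = PIL.Image.new("RGB", (image.shape[2], image.shape[1]))
for y in range(image.shape[1]):
    for x in range(image.shape[2]):
        slow.putpixel((x, y), tuple(int(c) for c in image[:, y, x]))

# vectorized alternative: reorder (color, y, x) -> (y, x, color) and convert at once
fast = PIL.Image.fromarray(numpy.ascontiguousarray(image.transpose(1, 2, 0)), "RGB")

print(list(slow.getdata()) == list(fast.getdata()))  # -> True
```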
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import facerec2010
import pyvision
import PIL
import numpy
import bob.bio.base
from .. import utils
import logging
logger = logging.getLogger("bob.bio.csu")
class LRPCA (bob.bio.base.extractor.Extractor):
"""This class defines a wrapper for the facerec2010.baseline.lrpca.LRPCA class to be used as a :py:class:`facereclib.feature.Extractor` in the :ref:`FaceRecLib <facereclib>`."""
def __init__(self, TUNING):
"""Constructor Documentation:
TUNING
The tuning for the LRPCA algorithm as taken from the facerec2010.baseline.lrpca.GBU_TUNING
"""
bob.bio.base.extractor.Extractor.__init__(self, requires_training=True, split_training_data_by_client=True, **TUNING)
self.lrpca = facerec2010.baseline.lrpca.LRPCA(**TUNING)
def _check_image(self, image):
assert isinstance(image, numpy.ndarray)
assert image.ndim == 2
assert image.dtype == numpy.uint8
def _py_image(self, image):
"""Converts the given image to pyvision images."""
self._check_image(image)
pil_image = PIL.Image.new("L",(image.shape[1],image.shape[0]))
# TODO: Test if there is any faster method to convert the image type
for y in range(image.shape[0]):
for x in range(image.shape[1]):
# copy image content (re-order [y,x] to (x,y))
pil_image.putpixel((x,y),image[y,x])
# convert to pyvision image
py_image = pyvision.Image(pil_image)
return py_image
def train(self, image_list, extractor_file):
"""Trains the LRPCA module with the given image list and saves the result into the given extractor file using the pickle module."""
train_count = 0
for client_index in range(len(image_list)):
for image in image_list[client_index]:
# convert the image into a data type that is understood by FaceRec2010
pyimage = self._py_image(image)
# append training data to the LRPCA training
# (the None parameters are due to the fact that preprocessing happened before)
self.lrpca.addTraining(str(client_index), pyimage, None, None, None)
train_count += 1
logger.info(" -> Training LRPCA with %d images", train_count)
self.lrpca.train()
# and write the result to file, which in this case simply uses pickle
utils.save_pickle(self.lrpca, extractor_file)
def load(self, extractor_file):
"""Loads the trained LRPCA feature extractor from the given extractor file using the pickle module."""
# read LRPCA projector
self.lrpca = utils.load_pickle(extractor_file)
def __call__(self, image):
"""Projects the image data using the LRPCA projector and returns a numpy.ndarray."""
# create a pyvision image
pyimage = self._py_image(image)
# Projects the data by creating a "FaceRecord"
face_record = self.lrpca.getFaceRecord(pyimage, None, None, None)
# return the projected data, which is stored as a numpy.ndarray
return face_record.feature
from .LRPCA import LRPCA
from .LDAIR import LDAIR
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import facerec2010
import pyvision
import numpy
import bob.bio.base
class LDAIR (bob.bio.base.preprocessor.Preprocessor):
"""This class defines a wrapper for the facerec2010.baseline.lda.LRLDA class to be used as an image :py:class:`facereclib.preprocessing.Preprocessor` in the :ref:`FaceRecLib <facereclib>`."""
def __init__(self, REGION_ARGS, REGION_KEYWORDS):
"""Constructor Documentation:
REGION_ARGS
The region arguments as taken from facerec2010.baseline.lda.CohortLDA_REGIONS
REGION_KEYWORDS
The region keywords as taken from facerec2010.baseline.lda.CohortLDA_KEYWORDS
"""
bob.bio.base.preprocessor.Preprocessor.__init__(self, **REGION_KEYWORDS)
self.ldair = facerec2010.baseline.lda.LRLDA(REGION_ARGS, **REGION_KEYWORDS)
self.layers = len(REGION_ARGS)
def __call__(self, image, annotations):
"""Preprocesses the image using the LDA-IR preprocessor facerec2010.baseline.lda.LRLDA.preprocess"""
# assure that the eye positions are in the set of annotations
if annotations is None or 'leye' not in annotations or 'reye' not in annotations:
raise ValueError("The LDA-IR image cropping needs eye positions, but they are not given.")
if isinstance(image, numpy.ndarray):
if len(image.shape) != 3:
raise ValueError("The LDA-IR image cropping needs color images.")
image = pyvision.Image(numpy.transpose(image, (0, 2, 1)).astype(numpy.float64))
assert isinstance(image, pyvision.Image)
# Warning! Left and right eye are mixed up here!
# The ldair preprocess expects left_eye_x < right_eye_x
tiles = self.ldair.preprocess(
image,
leye = pyvision.Point(annotations['reye'][1], annotations['reye'][0]),
reye = pyvision.Point(annotations['leye'][1], annotations['leye'][0])
)
# LDA-IR preprocessing returns a list of image tiles (one per layer),
# where all elements of the list are identical,
# so we just have to keep the first image
assert len(tiles) == self.layers
assert (tiles[0].asMatrix3D() == tiles[1].asMatrix3D()).all()
# Additionally, pyvision uses images in (x,y) order.
# To be consistent with the (y,x) order used elsewhere, we have to transpose
color_image = tiles[0].asMatrix3D()
out_images = numpy.ndarray((color_image.shape[0], color_image.shape[2], color_image.shape[1]), dtype = numpy.uint8)
# iterate over color layers
for j in range(color_image.shape[0]):
out_images[j,:,:] = color_image[j].transpose()[:,:]
# WARNING: unlike the default image preprocessors, this one returns full color information!
return out_images
def read_original_data(self, image_file):
"""Reads the original images using functionality from pyvision."""
# we use pyvision to read the images. Hence, we don't have to struggle with conversion here
return pyvision.Image(str(image_file))
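The per-layer transpose loop in `__call__` above converts pyvision's `(color, x, y)` layout into the `(color, y, x)` layout expected downstream. With numpy, the same conversion is a single axis swap; a small sketch with dummy data standing in for `tiles[0].asMatrix3D()`:

```python
import numpy

# dummy (color, x, y) matrix standing in for tiles[0].asMatrix3D()
color_image = numpy.arange(2 * 3 * 4, dtype=numpy.uint8).reshape((2, 3, 4))

# per-layer transpose loop, as in __call__ above
out_images = numpy.ndarray((2, 4, 3), dtype=numpy.uint8)
for j in range(color_image.shape[0]):
    out_images[j, :, :] = color_image[j].transpose()[:, :]

# equivalent single call: swap the last two axes
assert (out_images == color_image.transpose(0, 2, 1)).all()
print(out_images.shape)  # -> (2, 4, 3)
```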
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Mon Oct 29 09:27:59 CET 2012
#
# Copyright (C) 2011-2012 Idiap Research Institute, Martigny, Switzerland
#