Commit 44e0aad5 authored by André Anjos

Make it up-to-speed

parent 05346f9a
@@ -19,3 +19,5 @@ build
src/
logs/
*.sql3
bob/db/hci_tagging/data/
bob/db/hci_tagging/protocols/
# This build file heavily uses template features from YAML so it is generic
# enough for any Bob project. Don't modify it unless you know what you're
# doing.
# Definition of our build pipeline
stages:
- build
- test
- docs
- wheels
- deploy
# ---------
# Templates
# ---------
# Template for the build stage
# Needs to run on all supported architectures, platforms and python versions
.build_template: &build_job
stage: build
before_script:
- git clean -ffdx
- mkdir _ci
- curl --silent "https://gitlab.idiap.ch/bob/bob.admin/raw/master/gitlab/install.sh" > _ci/install.sh
- chmod 755 _ci/install.sh
- ./_ci/install.sh _ci #updates
- ./_ci/before_build.sh
script:
- ./_ci/build.sh hci_tagging
after_script:
- ./_ci/after_build.sh
artifacts:
expire_in: 1 week
paths:
- _ci/
- dist/
- sphinx/
# Template for the test stage - re-installs from uploaded wheels
# Needs to run on all supported architectures, platforms and python versions
.test_template: &test_job
stage: test
before_script:
- ./_ci/install.sh _ci #updates
- ./_ci/before_test.sh
script:
- ./_ci/test.sh
after_script:
- ./_ci/after_test.sh
# Template for the wheel uploading stage
# Needs to run against one supported architecture, platform and python version
.wheels_template: &wheels_job
stage: wheels
environment: intranet
only:
- master
- /^v\d+\.\d+\.\d+([abc]\d*)?$/ # PEP-440 compliant version (tags)
before_script:
- ./_ci/install.sh _ci #updates
- ./_ci/before_wheels.sh
script:
- ./_ci/wheels.sh
after_script:
- ./_ci/after_wheels.sh
# Template for (latest) documentation upload stage
# Only one real job needs to do this
.docs_template: &docs_job
stage: docs
environment: intranet
only:
- master
before_script:
- ./_ci/install.sh _ci #updates
- ./_ci/before_docs.sh
script:
- ./_ci/docs.sh
after_script:
- ./_ci/after_docs.sh
# Template for the deployment stage - re-installs from uploaded wheels
# Needs to run on a single architecture only
# Will deploy your package to PyPI and other required services
# Only runs for tags
.deploy_template: &deploy_job
stage: deploy
environment: internet
only:
- /^v\d+\.\d+\.\d+([abc]\d*)?$/ # PEP-440 compliant version (tags)
except:
- branches
before_script:
- ./_ci/install.sh _ci #updates
- ./_ci/before_deploy.sh
script:
- ./_ci/deploy.sh
after_script:
- ./_ci/after_deploy.sh
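The `only:` filters in the wheels and deploy templates use a PEP-440-style tag pattern to decide which git refs trigger those stages. A quick sketch of what that regular expression accepts (ref names below are hypothetical):

```python
import re

# The same pattern the `only:` filters use to select release tags:
# vMAJOR.MINOR.PATCH with an optional a/b/c pre-release suffix.
TAG_RE = re.compile(r'^v\d+\.\d+\.\d+([abc]\d*)?$')

def is_release_tag(ref):
    """True if this git ref name would trigger the wheels/deploy stages."""
    return bool(TAG_RE.match(ref))

# branches and incomplete versions are filtered out
matching = [r for r in ('v2.0.1', 'v2.0.1b3', 'v2.0', 'master')
            if is_release_tag(r)]
```

Note that `deploy` additionally carries `except: branches`, so only tags (not branches that happen to match) reach PyPI.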
# -------------
# Build Targets
# -------------
# Linux + Python 2.7: Builds, tests, uploads wheel and deploys (if needed)
build_linux_27:
<<: *build_job
variables: &linux_27_build_variables
PYTHON_VERSION: "2.7"
WHEEL_TAG: "py27"
tags:
- conda-linux
test_linux_27:
<<: *test_job
variables: *linux_27_build_variables
dependencies:
- build_linux_27
tags:
- conda-linux
wheels_linux_27:
<<: *wheels_job
variables: *linux_27_build_variables
dependencies:
- build_linux_27
tags:
- conda-linux
deploy_linux_27:
<<: *deploy_job
variables: *linux_27_build_variables
dependencies:
- build_linux_27
tags:
- conda-linux
# Linux + Python 3.4: Builds and tests
build_linux_34:
<<: *build_job
variables: &linux_34_build_variables
PYTHON_VERSION: "3.4"
WHEEL_TAG: "py3"
tags:
- conda-linux
test_linux_34:
<<: *test_job
variables: *linux_34_build_variables
dependencies:
- build_linux_34
tags:
- conda-linux
# Linux + Python 3.5: Builds, tests and uploads wheel
build_linux_35:
<<: *build_job
variables: &linux_35_build_variables
PYTHON_VERSION: "3.5"
WHEEL_TAG: "py3"
tags:
- conda-linux
test_linux_35:
<<: *test_job
variables: *linux_35_build_variables
dependencies:
- build_linux_35
tags:
- conda-linux
wheels_linux_35:
<<: *wheels_job
variables: *linux_35_build_variables
dependencies:
- build_linux_35
tags:
- conda-linux
docs_linux_35:
<<: *docs_job
variables: *linux_35_build_variables
dependencies:
- build_linux_35
tags:
- conda-linux
# Mac OSX + Python 2.7: Builds and tests
build_macosx_27:
<<: *build_job
variables: &macosx_27_build_variables
PYTHON_VERSION: "2.7"
WHEEL_TAG: "py27"
tags:
- conda-macosx
test_macosx_27:
<<: *test_job
variables: *macosx_27_build_variables
dependencies:
- build_macosx_27
tags:
- conda-macosx
# Mac OSX + Python 3.4: Builds and tests
build_macosx_34:
<<: *build_job
variables: &macosx_34_build_variables
PYTHON_VERSION: "3.4"
WHEEL_TAG: "py3"
tags:
- conda-macosx
test_macosx_34:
<<: *test_job
variables: *macosx_34_build_variables
dependencies:
- build_macosx_34
tags:
- conda-macosx
# Mac OSX + Python 3.5: Builds and tests
build_macosx_35:
<<: *build_job
variables: &macosx_35_build_variables
PYTHON_VERSION: "3.5"
WHEEL_TAG: "py3"
tags:
- conda-macosx
test_macosx_35:
<<: *test_job
variables: *macosx_35_build_variables
dependencies:
- build_macosx_35
tags:
- conda-macosx
Copyright (c) 2013, Andre Anjos - Idiap Research Institute
All rights reserved.
Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/
Written by Andre Anjos <andre.anjos@idiap.ch>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
@@ -21,4 +24,4 @@ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include LICENSE README.rst bootstrap-buildout.py buildout.cfg version.txt
include README.rst bootstrap-buildout.py buildout.cfg develop.cfg LICENSE version.txt requirements.txt
recursive-include bob/db/hci_tagging *.csv *.txt *.hdf5
.. vim: set fileencoding=utf-8 :
.. Andre Anjos <andre.anjos@idiap.ch>
.. Wed 30 Sep 2015 11:03:49 CEST
.. Tue 13 Dec 18:31:43 CET 2016
===============================================
Mahnob HCI-Tagging Database Interface for Bob
===============================================
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
:target: http://pythonhosted.org/bob.db.hci_tagging/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
:target: https://www.idiap.ch/software/bob/docs/latest/bob/bob.db.hci_tagging/master/index.html
.. image:: https://gitlab.idiap.ch/bob/bob.db.hci_tagging/badges/master/build.svg
:target: https://gitlab.idiap.ch/bob/bob.db.hci_tagging/commits/master
.. image:: https://img.shields.io/badge/gitlab-project-0000c0.svg
:target: https://gitlab.idiap.ch/bob/bob.db.hci_tagging
.. image:: http://img.shields.io/pypi/v/bob.db.hci_tagging.png
:target: https://pypi.python.org/pypi/bob.db.hci_tagging
.. image:: http://img.shields.io/pypi/dm/bob.db.hci_tagging.png
:target: https://pypi.python.org/pypi/bob.db.hci_tagging
This package contains an interface for the `Mahnob HCI-Tagging dataset`_.
It is presently used to benchmark and test Remote Photo-Plethysmography
algorithms at Idiap. This package only uses the colored videos (from
Camera 1, in AVI format) and the biological signals saved in BDF_ format.
If you decide to use this package, please consider citing `Bob`_, as a software
development environment and the authors of the dataset::
@article{soleymani-2012,
author={Soleymani, M. and Lichtenauer, J. and Pun, T. and Pantic, M.},
journal={Affective Computing, IEEE Transactions on},
title={A Multimodal Database for Affect Recognition and Implicit Tagging},
year={2012},
volume={3},
number={1},
pages={42-55},
doi={10.1109/T-AFFC.2011.25},
month=jan,
}
This package is part of the signal-processing and machine learning toolbox
Bob_. It contains an interface for the evaluation protocols of the `Mahnob
HCI-Tagging Dataset`_. Note that this package does not contain the raw data files
from this dataset, which need to be obtained through the link above.
Installation
------------
Follow our `installation`_ instructions. Then, using the Python interpreter
provided by the distribution, bootstrap and buildout this package::
$ python bootstrap-buildout.py
$ ./bin/buildout
Dependencies
============
This package makes use of the following important external dependencies:
* bob.ip.facedetect_: For automatically detecting faces using a boosted
  classifier based on LBPs
* mne_: For estimating the heart-rate in beats-per-minute using the
  Pan-Tompkins algorithm
* Python-EDF_ tools: to read physiological sensor information out of BDF
  files

Contact
-------

For questions or to report issues with this software package, contact our
development `mailing list`_.
Usage
-----
You can read videos and sensor information out of the database using the
provided API.
Annotations
===========
This package can, optionally, *automatically* annotate the following key
aspects of the Mahnob HCI-Tagging dataset:
* Average heart-rate in beats-per-minute (BPM), using the Pan-Tompkins
algorithm as implemented by `mne`_.
* Face bounding boxes, as detected by the default detector on
`bob.ip.facedetect`_.
The annotation procedure can be launched with the following command::
$ ./bin/bob_dbmanage.py hci_tagging mkmeta
Each video, which is composed of a significant number of frames (hundreds),
takes about 5 minutes to get completely processed. If you are at Idiap, you can
launch the job on the SGE queue using the following command-line::
$ ./bin/jman sub -q q1d --io-big -t 3490 `pwd`/bin/bob_dbmanage.py hci_tagging mkmeta
.. Place your references here:
.. _bob: https://www.idiap.ch/software/bob
.. _installation: https://www.idiap.ch/software/bob/install
.. _mailing list: https://www.idiap.ch/software/bob/discuss
.. _mahnob hci-tagging dataset: http://mahnob-db.eu/hci-tagging/
.. _bdf: http://www.biosemi.com/faq/file_format.htm
.. _bob.ip.facedetect: https://pypi.python.org/pypi/bob.ip.facedetect
.. _mne: https://pypi.python.org/pypi/mne
.. _python-edf: https://bitbucket.org/cleemesser/python-edf/
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# Andre Anjos <andre.anjos@idiap.ch>
# Wed 30 Sep 2015 12:14:50 CEST
import os
from .models import *
@@ -29,25 +27,30 @@ class Database(object):
Parameters:
protocol (:py:class:`str`, optional): If set, can take the value of
either ``cvpr14`` or ``all``. ``cvpr14`` subselects samples used by Li
et al. in their CVPR'14 paper for heart-rate estimation. If ``all`` is
set, the complete database is selected.
subset (:py:class:`str`, optional): If set, it could be either ``train``,
``dev`` or ``test`` or a combination of them (i.e. a list). If not set
(default), the files from all these sets are retrieved for the ``all``
protocol. Note that for the ``cvpr14`` protocol, this has no effect,
since no training, development and test set have been defined in this
case.
Returns:
list: A list of :py:class:`File` objects.
"""
if protocol in ('cvpr14',):
d = resource_filename(__name__, os.path.join('protocols/cvpr14', 'li_samples_cvpr14.txt'))
with open(d, 'rt') as f: sessions = f.read().split()
return [File(**k) for k in self.metadata if k['basedir'] in sessions]
if protocol in ('all',):
if not subset:
@@ -66,5 +69,5 @@ class Database(object):
d = resource_filename(__name__, os.path.join('protocols/all', 'test.txt'))
with open(d, 'rt') as f: sessions = f.read().split()
files += [File(**k) for k in self.metadata if k['basedir'] in sessions]
return files
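Each protocol branch in ``objects()`` repeats the same pattern: read a session list from a protocol text file, then keep the metadata entries whose ``basedir`` appears in it. A stdlib-only sketch of that filter, with hypothetical metadata entries standing in for the real ``self.metadata``:

```python
# Hypothetical metadata entries, shaped like the dictionaries fed to File(**k)
metadata = [
    {'basedir': 'Sessions/10', 'bdf': 'Part_1_Trial1.bdf'},
    {'basedir': 'Sessions/20', 'bdf': 'Part_2_Trial2.bdf'},
]

# Stand-in for a protocol file's contents after f.read().split()
sessions = ['Sessions/10', 'Sessions/30']

# the filter used for both the 'cvpr14' and 'all' protocol branches
selected = [k for k in metadata if k['basedir'] in sessions]
```

Sessions listed in the protocol file but absent from the metadata are simply ignored, so protocol files may safely reference the full dataset.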
@@ -85,14 +85,13 @@ def create_meta(args):
if bb and hr:
outdir = os.path.dirname(output)
if not os.path.exists(outdir): os.makedirs(outdir)
h5 = bob.io.base.HDF5File(output, 'a')
h5.create_group('face_detector')
h5.cd('face_detector')
h5.set('topleft_x', bb.topleft[1])
h5.set('topleft_y', bb.topleft[0])
h5.set('width', bb.size[1])
h5.set('height', bb.size[0])
h5.cd('..')
h5.set('heartrate', hr)
h5.set_attribute('units', 'beats-per-minute', 'heartrate')
@@ -199,6 +198,79 @@ def checkfiles(args):
return 0
def _files():
filelist = pkg_resources.resource_filename(__name__, 'files.txt')
return [k.strip() for k in open(filelist, 'rt').readlines() if k.strip()]
def upload(arguments):
"""Uploads generated metadata to the Idiap build server"""
target_file = os.path.join(arguments.destination,
arguments.name + ".tar.bz2")
# check all files exist
names = _files()
paths = [pkg_resources.resource_filename(__name__, f) for f in names]
for n,p in zip(names, paths):
if not os.path.exists(p):
raise IOError("Metadata file `%s' (path: %s) is not available. Did you run `mkmeta' before attempting to upload?" % (n, p))
# if you get here, all files are there, ready to package
print("Compressing metadata files to `%s'" % (target_file,))
# compress
import tarfile
f = tarfile.open(target_file, 'w:bz2')
for n,p in zip(names, paths): f.add(p, n)
f.close()
# set permissions for sane Idiap storage
import stat
perms = stat.S_IRUSR|stat.S_IWUSR|stat.S_IRGRP|stat.S_IWGRP|stat.S_IROTH
os.chmod(target_file, perms)
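The packaging step in ``upload()`` can be exercised on throwaway files; the ``stat`` flags combine to mode ``0o664`` (rw-rw-r--), the "sane Idiap storage" permissions the comment refers to. A stdlib sketch with hypothetical file names:

```python
import os
import stat
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as d:
    # hypothetical metadata file, standing in for the files.txt entries
    src = os.path.join(d, 'meta.txt')
    with open(src, 'w') as fp:
        fp.write('42')

    target = os.path.join(d, 'bundle.tar.bz2')
    with tarfile.open(target, 'w:bz2') as f:
        f.add(src, 'meta.txt')  # store under a relative name, as upload() does

    # same permission mask as upload(): user/group read-write, others read
    perms = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH
    os.chmod(target, perms)
    mode = stat.S_IMODE(os.stat(target).st_mode)
```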
def download(arguments):
"""Downloads and uncompresses meta data generated files from Idiap"""
# check all files don't exist
names = _files()
paths = [pkg_resources.resource_filename(__name__, f) for f in names]
for n,p in zip(names, paths):
if os.path.exists(p):
if arguments.force:
os.unlink(p)
else:
raise IOError("Metadata file `%s' (path: %s) is already available. Please remove self-generated files before attempting download or --force" % (n, p))
# if you get here, all files aren't there, unpack
source_url = os.path.join(arguments.source, arguments.name + ".tar.bz2")
# download file from Idiap server, unpack and remove it
import sys, tempfile, tarfile
if sys.version_info[0] <= 2:
import urllib2 as urllib
else:
import urllib.request as urllib
try:
print("Extracting url `%s'" % (source_url,))
u = urllib.urlopen(source_url)
f = tempfile.NamedTemporaryFile(suffix = ".tar.bz2")
open(f.name, 'wb').write(u.read())
t = tarfile.open(fileobj=f, mode='r:bz2')
t.extractall(pkg_resources.resource_filename(__name__, ''))
t.close()
f.close()
return False
except Exception as e:
print("Error while downloading: %s" % e)
return True
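The download/unpack step can be sketched without the network by round-tripping a ``.tar.bz2`` archive through an in-memory buffer (the real code streams the server response into a temporary file first; the archive member name below is hypothetical):

```python
import io
import tarfile

# build a small bz2-compressed tarball in memory
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:bz2') as t:
    data = b'heartrate metadata'  # hypothetical file contents
    info = tarfile.TarInfo('meta/example.hdf5')
    info.size = len(data)
    t.addfile(info, io.BytesIO(data))

# unpack it again, as download() does after fetching the archive
buf.seek(0)
with tarfile.open(fileobj=buf, mode='r:bz2') as t:
    extracted = t.extractfile('meta/example.hdf5').read()
```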
class Interface(BaseInterface):
def name(self):
@@ -263,3 +335,22 @@ class Interface(BaseInterface):
debug_parser.add_argument('--limit', dest="limit", default=0, type=int, help="Limits the number of objects to treat (defaults to '%(default)s')")
debug_parser.add_argument('--self-test', dest="selftest", default=False, action='store_true', help=SUPPRESS)
debug_parser.set_defaults(func=debug) #action
# add upload command
upload_meta = upload.__doc__
upload_parser = subparsers.add_parser('upload', help=upload.__doc__)
upload_parser.add_argument("--destination",
default="/idiap/group/torch5spro/databases/latest")
upload_parser.set_defaults(func=upload)
# add download command
USE_SERVER = os.environ.get('DOCSERVER', 'https://www.idiap.ch')
download_meta = download.__doc__
download_parser = subparsers.add_parser('download',
help=download.__doc__)
download_parser.add_argument("--source",
default="%s/software/bob/databases/latest/" % USE_SERVER)
download_parser.add_argument("--force", action='store_true',
help = "Overwrite existing metadata files?")
download_parser.set_defaults(func=download)
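The subcommand wiring above follows the standard ``argparse`` pattern: each subparser stores its handler with ``set_defaults(func=...)`` and the driver later calls ``args.func(args)``. A self-contained sketch (the handler body and its return value are hypothetical):

```python
import argparse

def download(args):
    # stand-in handler; the real one fetches and unpacks the archive
    return ('download', args.source, args.force)

parser = argparse.ArgumentParser(prog='bob_dbmanage.py hci_tagging')
subparsers = parser.add_subparsers()

download_parser = subparsers.add_parser('download')
download_parser.add_argument('--source',
    default='https://www.idiap.ch/software/bob/databases/latest/')
download_parser.add_argument('--force', action='store_true')
download_parser.set_defaults(func=download)

# dispatch exactly as the db management driver does
args = parser.parse_args(['download', '--force'])
result = args.func(args)
```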
@@ -4,30 +4,28 @@
# Wed 30 Sep 2015 12:13:47 CEST
import os
import collections
import pkg_resources
import bob.db.base
import bob.io.base
import bob.ip.facedetect
from . import utils
# Some utility definitions
Point = collections.namedtuple('Point', 'y,x')
BoundingBox = collections.namedtuple('BoundingBox', 'topleft,size,quality')
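Note the ``(y, x)`` field order in ``Point`` (Bob convention): index 0 is the vertical coordinate. This is why ``create_meta()`` writes ``bb.topleft[1]`` as ``topleft_x`` and ``bb.topleft[0]`` as ``topleft_y``. A quick sketch with hypothetical detection values:

```python
import collections

Point = collections.namedtuple('Point', 'y,x')
BoundingBox = collections.namedtuple('BoundingBox', 'topleft,size,quality')

# hypothetical detection: top-left at (y=12, x=34), 64 pixels tall, 48 wide
bb = BoundingBox(Point(12, 34), Point(64, 48), 7.5)

topleft_x = bb.topleft[1]  # == bb.topleft.x == 34
height = bb.size[0]        # == bb.size.y == 64
```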
class File(bob.db.base.File):
""" Generic file container for HCI-Tagging files
Parameters:
basedir (str): The base directory for the data
bdf (str): The name of the BDF file that accompanies the video file,
containing the physiological signals.
video (str): The name of the video file to be used
duration (int): The time in seconds that corresponds to the estimated
duration of the data (video and physiological signals).
@@ -37,6 +35,7 @@ class File(object):
self.basedir = basedir
self.stem = bdf
self.path = os.path.join(self.basedir, self.stem)
self.video_stem = video
self.duration = int(duration)
@@ -72,13 +71,40 @@ class File(object):
self.stem + (extension or self.default_extension()),
)
def load(self, directory=None, extension='.avi'):
"""Loads the video for this file entry
Parameters:
directory (str): The path to the root of the database installation. This
is the path leading to the directory ``Sessions`` of the database.
Returns:
numpy.ndarray: A 4D array of 8-bit unsigned integers corresponding to the
input video for this file in (frame,channel,y,x) notation (Bob-style).
"""
path = os.path.join(directory, self.basedir, self.video_stem + '.avi')
return bob.io.base.load(path)
def load_video(self, directory):
"""Loads the colored video file associated to this object
Parameters:

directory (str): A directory name that will be prefixed to the returned
result.

Returns:

bob.io.video.reader: Preloaded and ready to be iterated by your code.
"""
@@ -89,19 +115,29 @@ class File(object):
def run_face_detector(self, directory, max_frames=0):
"""Runs bob.ip.facedetect stock detector on the selected frames.
.. warning::
This method is deprecated and serves only as a development basis to
clean up :py:meth:`load_face_detection`, which for now relies on
HDF5 files shipped with this package. Technically, the output of this
method and the detected faces shipped should be the same as of today,
13 December 2016.
Parameters:
directory (str): A directory name that leads to the location the database
is installed on the local disk

max_frames (int): If set, delimits the maximum number of frames to treat
from the associated video file. A value of zero (default) makes the
detector run for all frames.
Returns:
dict: A dictionary where the keys are frame numbers and the values are
instances of :py:class:`bob.ip.facedetect.BoundingBox`.
"""
@@ -110,13 +146,26 @@ class File(object):
if max_frames: data = data[:max_frames]
for k, frame in enumerate(data):
bb, quality = bob.ip.facedetect.detect_single_face(frame)
detections[k] = bb
return detections
def load_face_detection(self):
"""Load bounding boxes for this file
This function loads bounding boxes for each frame of a video sequence.
Bounding boxes are loaded from the package directory and are the ones
provided with it. Bounding boxes generated from
:py:meth:`run_face_detector` (which should be exactly the same) are not
used by this method.