Commit 4cb988a5 authored by André Anjos

Merge branch 'docedit' into 'master'

merge new documentation to master

See merge request !48
parents 7e987c53 835cad09
Pipeline #25469 passed with stages in 41 minutes and 33 seconds
@@ -34,11 +34,70 @@ with a hybrid set of algorithms that execute on different backends. Each
backend can be implemented in a different programming language and contain any
number of (pre-installed) libraries users can call on their algorithms.
The requirements for BEAT when reading/writing data are:
* Ability to manage large and complex data
* Portability to allow the use of heterogeneous environments
Based on our experience and on these requirements, we investigated
the use of HDF5. Unfortunately, HDF5 is not convenient for handling
structures such as arrays of variable-size elements, for instance,
arrays of strings.
Therefore, we decided to rely on our own binary format.
This document describes the binary formats in BEAT and the API required by BEAT to handle multiple backend implementations. The
package `beat.env.python27`_ provides the *reference* Python backend
implementation based on `Python 2.7`_.
Binary Format
-------------
Our binary format does *not* contain information about the format of the data
itself; it is hence necessary to know this format a priori. This means that
the format cannot be inferred from the content of a file.
We rely on the following fundamental C-style formats:
* int8
* int16
* int32
* int64
* uint8
* uint16
* uint32
* uint64
* float32
* float64
* complex64 (first real value, and then imaginary value)
* complex128 (first real value, and then imaginary value)
* bool (written as a byte)
* string
An element of such a basic format is written in the C style, using
little-endian byte ordering.
Besides, dataformats always consist of arrays or dictionaries of such
fundamental formats or compound formats.
An array of elements is saved as follows. First, the shape of the array is
saved using a *uint64* value for each dimension. Next, the elements of the
array are saved in C-style order.
A dictionary of elements is saved as follows. First, the keys are ordered
lexicographically. Then, the values associated with each of these keys are
saved following this ordering.
The platform is data-driven and always processes chunks of data. Therefore,
data are always written in chunks, each chunk being preceded by a text-formatted
header indicating the start and end indices, followed by the size (in bytes) of
the chunk.
Considering the Python backend of the platform, this binary format has been
successfully implemented using the ``struct`` module.
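For instance, a two-dimensional ``float64`` array would be serialized as two
``uint64`` shape values followed by the elements in C-style order, all
little-endian. The sketch below illustrates this with ``struct``; the helper
name is hypothetical and not part of the BEAT codebase:

.. code-block:: python

import struct

def pack_float64_2d(array2d):
    # Hypothetical helper -- not part of the BEAT codebase. Serializes a
    # 2D list of floats as described above: shape first (one little-endian
    # uint64 per dimension), then the elements in C-style (row-major) order.
    rows, cols = len(array2d), len(array2d[0])
    data = struct.pack('<QQ', rows, cols)
    for row in array2d:
        data += struct.pack('<%dd' % cols, *row)
    return data

packed = pack_float64_2d([[1.0, 2.0], [3.0, 4.0]])
assert len(packed) == 2 * 8 + 4 * 8  # two uint64 values + four float64 values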
Filesystem Organization
-----------------------
......
.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.core module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
==========
Databases
==========
A database is a collection of data files, one for each output of the database.
These data are the inputs to BEAT toolchains. It is therefore important to
define evaluation protocols, which describe how a specific system must use the
raw data of a given database.
For instance, a recognition system will typically use a subset of the data to
train a recognition `model`, while another subset of data will be used to
evaluate the performance of this model.
Structure of a database
-----------------------
A database has the following structure on disk::
database_name/
output1_name.data
output2_name.data
...
outputN_name.data
For a given database, the BEAT platform will typically store information
about the root folder containing the raw data, as well as a description of
it.
Evaluation protocols
--------------------
A BEAT evaluation protocol consists of several ``datasets``, each dataset
having several ``outputs`` with well-defined data formats. In practice,
each dataset will typically be used for a different purpose.
For instance, in the case of a simple face recognition protocol, the
database may be split into three datasets: one for training, one for enrolling
client-specific models, and one for testing these models.
The training dataset may have two outputs: grayscale images as two-dimensional
arrays of type `uint8` and client ids as `uint64` integers.
The BEAT platform is data-driven, which means that all the outputs of a given
dataset are synchronized. The way the data is generated by each dataset
is defined in a piece of code called the ``database view``. It is important
that a database view behaves deterministically, for reproducibility
purposes.
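Conceptually, a database view enumerates synchronized output values in a
deterministic order. The sketch below is purely illustrative; it does not
reproduce the actual view interface of the BEAT backend packages, and all
names in it are hypothetical:

.. code-block:: python

import os

# Illustrative sketch only -- not the real BEAT database view interface.
class ExampleView:

    def setup(self, root_folder):
        # Sorting the raw files makes the enumeration deterministic:
        # every run of the view yields the same order.
        self.files = sorted(os.listdir(root_folder))

    def outputs(self):
        # Synchronized outputs: at each index, every output of the
        # dataset produces exactly one value.
        for index, filename in enumerate(self.files):
            yield {'image': filename, 'client_id': index}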
Database set templates
----------------------
In practice, different databases used for the same purpose may have the exact
same datasets with the exact same outputs (and attached data formats). In this
case, it is interesting to abstract the definition of the database sets from
a given database. BEAT defines ``database set templates`` for this purpose.
For instance, the simple face recognition evaluation protocol described above,
which consists of three datasets and a few outputs, may be abstracted into a
database set template. This template defines the datasets, their outputs,
as well as their corresponding data formats. Then, if several databases
implement such a protocol, they may rely on the same `database set template`.
Similarly, evaluation protocols testing different conditions (such as
enrolling on clean and testing on clean data vs. enrolling on clean and
testing on noisy data) may rely on the same database set template.
In practice, this reduces the amount of work needed to integrate new databases
and/or new evaluation protocols into the platform. Besides, at the experiment
level, this allows a toolchain to be re-used on a different database, with
almost no configuration changes for the user.
@@ -20,6 +20,7 @@
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
.. _beat-core-local-development:
===================
Local Development
@@ -30,11 +31,11 @@ Go through the following sequence of commands:
.. code-block:: sh
$ git clone https://gitlab.idiap.ch/bob/bob.admin
$ #install miniconda (version 4.4 or above required)
$ conda activate base
$ cd beat.backend.python #cd into this package's sources
$ ../bob.admin/conda/conda-bootstrap.py --overwrite --python=3.6 beat-core-dev
$ conda activate beat-core-dev
$ #n.b.: docker must be installed on your system (see next section)
$ buildout -c develop.cfg
@@ -67,17 +68,29 @@ execute algorithms or experiments.
We use specific docker images to run user algorithms. Download the following
base images before you try to run tests or experiments on your computer::
$ docker pull docker.idiap.ch/beat/beat.env.system.python:1.3.0
$ docker pull docker.idiap.ch/beat/beat.env.db.examples:1.4.0
$ docker pull docker.idiap.ch/beat/beat.env.client:2.0.0
$ docker pull docker.idiap.ch/beat/beat.env.cxx:2.0.0
Optionally, also download the following images to be able to re-run experiments
downloaded from the BEAT platform (not required for unit testing). These docker
images correspond to the Python environments available on the platform. Keep in
mind that, at the moment, you cannot use a different environment for each block
when using BEAT locally (that is, without the Docker executor)::

$ docker pull docker.idiap.ch/beat/beat.env.python:1.1.0
$ docker pull docker.idiap.ch/beat/beat.env.python:2.0.0
$ docker pull docker.idiap.ch/beat/beat.env.db:1.4.0
Before pulling these images, you should check the registry as there might have
been new releases (i.e., rX versions).
To run an experiment using docker, you should specify the docker image when
defining the experiment, then use the ``--docker`` flag when using
``beat.cmdline``::

$ beat experiment run --docker <experiment name>
You can find more information about running experiments locally using
different executors `here <https://www.idiap.ch/software/beat/docs/beat/docs/master/beat.cmdline/doc/experiments.html#how-to-run-an-experiment>`_.
Documentation
@@ -90,6 +103,7 @@ To build the documentation, just do:
$ ./bin/sphinx-build doc sphinx
Testing
-------
@@ -103,18 +117,18 @@ use ``nose``:
.. note::
   Some of the tests for our command-line toolkit require a running BEAT
   platform web-server, with a compatible ``beat.core`` installed (preferably
   the same). By default, these tests will be skipped. If you want to run
   them, you must set up a development web server and set the environment
   variable ``BEAT_CORE_TEST_PLATFORM`` to point to that address. For example::

     $ export BEAT_CORE_TEST_PLATFORM="http://example.com/platform/"
     $ ./bin/nosetests -sv
.. warning::

   Do **NOT** run tests against a production web server.
If you want to skip slow tests (at least those pulling stuff from our servers)
@@ -131,15 +145,13 @@ To measure the test coverage, do the following::
Our documentation is also interspersed with test units. You can run them using
sphinx::
$ ./bin/sphinx -b doctest doc sphinx

Other Bits
==========

Profiling
---------
In order to profile the test code, try the following::
@@ -154,4 +166,4 @@ This will allow you to dump and print the profiling statistics as you see fit.
.. _docker: https://www.docker.com/
.. include:: links.rst
.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.core module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
.. _beat-core-experiments:
============
Experiments
============
An experiment is the combination of algorithms, datasets, a toolchain and
parameters that allows the platform to schedule and run the prescribed recipe
to produce displayable results. Defining a BEAT experiment can be seen as
configuring the processing blocks of a toolchain, such as selecting which
database, algorithms and algorithm parameters to use.
.. _beat-core-experiments-declaration:
Declaration of an experiment
----------------------------
.. note::

   One only needs to declare an experiment using these specifications when not
   using the web interface (i.e., when doing local development or using the web
   API). The web interface provides a user-friendly way to configure an
   experiment.
An experiment is declared in a JSON file, and must contain at least the following
fields:
.. code-block:: javascript
{
    "datasets": {
    },
    "blocks": {
    },
    "analyzers": {
    },
    "globals": {
    }
}
.. _beat-core-experiments-datasets:
Declaration of the dataset(s)
-----------------------------
The dataset inputs are defined by the toolchain. However, the toolchain does
not describe which data to plug into each dataset input.
This is the role of the `datasets` field of an experiment.
For each dataset, an experiment must specify three attributes as follows:
.. code-block:: javascript
{
    "datasets": {
        "templates": {
            "set": "templates",
            "protocol": "idiap",
            "database": "atnt/1"
        },
        ...
    },
    ...
}
The key of an experiment dataset must correspond to the desired dataset name
from the toolchain. Then, three fields must be given:
* `database`: the database name and version
* `protocol`: the protocol name
* `set`: the dataset name of this database to associate with this toolchain
  dataset
.. _beat-core-experiments-blocks:
Declaration of the block(s)
---------------------------
The blocks are defined by the toolchain. However, the toolchain does not
describe which algorithm to run in each processing block, nor how each of
these algorithms is parametrized.
This is the role of the `blocks` field of an experiment.
For each block, an experiment must specify four attributes as follows:
.. code-block:: javascript
{
    "blocks": {
        "linear_machine_training": {
            "inputs": {
                "image": "image"
            },
            "parameters": {},
            "algorithm": "tutorial/pca/1",
            "outputs": {
                "subspace": "subspace"
            }
        },
        ...
    },
    ...
}
The key of an experiment block must correspond to the desired block from the
toolchain. Then, four fields must be given:
* `algorithm`: the algorithm to use (author_name/algorithm_name/version)
* `inputs`: the list of inputs. The key is the algorithm input, while the
value is the corresponding toolchain input.
* `outputs`: the list of outputs. The key is the algorithm output, while the
value is the corresponding toolchain output.
* `parameters`: the algorithm parameters to use for this processing block
.. note::

   Setting an algorithm in a processing block also sets the data formats of
   the outputs (and inputs) of this block. In particular, this has an impact
   on all the inputs of blocks connected to those outputs, which must have the
   same data formats (or be an extension of these data formats). The platform
   automatically validates that the data formats of consecutive blocks are
   compatible.
.. _beat-core-experiments-analyzers:
Declaration of the analyzer(s)
------------------------------
Analyzers are similar to algorithms, except that they run on toolchain
endpoints. Their configuration is very similar to that of regular blocks,
except that they have no `outputs`:
.. code-block:: javascript
{
    "analyzers": {
        "analysis": {
            "inputs": {
                "scores": "scores"
            },
            "algorithm": "tutorial/postperf/1"
        }
    }
}
Global parameters
-----------------
Each block and analyzer may rely on its own local parameters. However, several
blocks may rely on the exact same parameters. In this case, it is more
convenient to define those globally.
For an experiment, this is achieved using the `globals` field in its JSON
declaration. For instance:
.. code-block:: javascript
{
    "globals": {
        "queue": "Default",
        "environment": {
            "version": "0.0.3",
            "name": "Scientific Python 2.7"
        },
        "tutorial/pca/1": {
            "number-of-components": "5"
        }
    },
    ...
}
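Putting the pieces together, a complete experiment declaration is a single
JSON document combining these four fields. As a sketch, such a declaration
could be assembled and written out from Python as follows, re-using the
illustrative names from the examples above:

.. code-block:: python

import json

# Sketch: a minimal experiment declaration assembling the fields above.
# All dataset, algorithm and parameter names are illustrative only.
experiment = {
    'datasets': {
        'templates': {'set': 'templates', 'protocol': 'idiap',
                      'database': 'atnt/1'},
    },
    'blocks': {
        'linear_machine_training': {
            'algorithm': 'tutorial/pca/1',
            'inputs': {'image': 'image'},
            'outputs': {'subspace': 'subspace'},
            'parameters': {},
        },
    },
    'analyzers': {
        'analysis': {'algorithm': 'tutorial/postperf/1',
                     'inputs': {'scores': 'scores'}},
    },
    'globals': {
        'queue': 'Default',
        'environment': {'name': 'Scientific Python 2.7',
                        'version': '0.0.3'},
        'tutorial/pca/1': {'number-of-components': '5'},
    },
}

with open('experiment.json', 'w') as f:
    json.dump(experiment, f, indent=4)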
@@ -27,20 +27,12 @@
Core BEAT components
======================
This package provides the core components of the BEAT ecosystem. These core
components are the building blocks of BEAT experiments and are used by all
the other BEAT packages.
.. toctree::
introduction
dataformats
algorithms
libraries
toolchains
experiments
databases
io
backend_api
develop
api
......
.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.core module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
.. _developerguide-io:
===============
Inputs/Outputs
===============
.. _developerguide-io-introduction:
Introduction
------------
The requirements for the platform when reading/writing data are:
* Ability to manage large and complex data
* Portability to allow the use of heterogeneous environments
Based on our experience and on these requirements, we investigated
the use of HDF5. Unfortunately, HDF5 is not convenient for handling
structures such as arrays of variable-size elements, for instance,
arrays of strings.
Therefore, we decided to rely on our own binary format.
.. _developerguide-io-strategy:
Binary Format
-------------
Our binary format does *not* contain information about the format of the data
itself; it is hence necessary to know this format a priori. This means that
the format cannot be inferred from the content of a file.
We rely on the following fundamental C-style formats:
* int8
* int16
* int32
* int64
* uint8
* uint16
* uint32
* uint64
* float32
* float64
* complex64 (first real value, and then imaginary value)
* complex128 (first real value, and then imaginary value)
* bool (written as a byte)
* string
An element of such a basic format is written in the C style, using
little-endian byte ordering.
Besides, dataformats always consist of arrays or dictionaries of such
fundamental formats or compound formats.
An array of elements is saved as follows. First, the shape of the array is
saved using a *uint64* value for each dimension. Next, the elements of the
array are saved in C-style order.
A dictionary of elements is saved as follows. First, the keys are ordered
lexicographically. Then, the values associated with each of these keys are
saved following this ordering.
The platform is data-driven and always processes chunks of data. Therefore,
data are always written in chunks, each chunk being preceded by a text-formatted
header indicating the start and end indices, followed by the size (in bytes) of
the chunk.
Considering the Python backend of the platform, this binary format has been
successfully implemented using the ``struct`` module.
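To illustrate the reading side, a one-dimensional ``uint32`` array stored
following these rules could be decoded as sketched below. This is an
illustration based on the rules above, not the actual BEAT implementation:

.. code-block:: python

import struct

def unpack_uint32_1d(data):
    # Sketch: read back a 1D uint32 array -- one little-endian uint64
    # for the shape, then the elements in order.
    (length,) = struct.unpack_from('<Q', data, 0)
    return list(struct.unpack_from('<%dI' % length, data, 8))

# Round-trip against a hand-packed buffer: shape 3, values 10, 20, 30.
buf = struct.pack('<Q3I', 3, 10, 20, 30)
assert unpack_uint32_1d(buf) == [10, 20, 30]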
.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.core module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
.. _beat-core-libraries:
==========
Libraries
==========
Algorithms are fundamental elements in the platform that formally describe how
to process data. In particular, they are always attached to a specific
processing block with a given set of inputs and outputs. When an algorithm
needs to be applied in a slightly different processing block, this may, hence,
lead to a lot of code duplication. Duplicate code is undesirable for a number
of reasons such as high maintenance cost.
To address this problem, the platform defines the concept of **libraries**.
Libraries allow users to put code required by several different algorithms
into a common location. Once done, code from a library may be used by any
algorithm, as long as the algorithm declares its dependency on it in its
JSON declaration. In addition, a library may depend on another library.
Definition
----------
Similarly to algorithms, a library consists of two parts:
* A ``JSON declaration`` indicating:
- The language in which the library is written
- Library dependencies of this library
.. code-block:: javascript
{
    "uses": {
        "otherlib": "user/otherlibrary/1"
    },
    "language": "python"
}
* ``Source code``. For the Python back-end, this may consist of any Python
  functions and classes, as long as dependencies are fulfilled.
.. code-block:: python
def simple_function(array):
    return len([v for v in array if v != 0])


class MyLibraryClass:

    def __init__(self, multiplier=37):
        self.multiplier = multiplier

    def function_from_my_library(self, value):
        return value * self.multiplier
The web client of the BEAT platform provides a graphical editor for algorithms,
which simplifies the definition of their `JSON`_ declarations. It also includes
a simple Python code editor.
Usage
-----
To use a defined library in an algorithm or in another library, it is
sufficient to:
* Add the library dependency to the `JSON`_ declaration of the algorithm
  (or of the library). The name given as a key is the one used to import
  the library, while the corresponding value is the full name, that is
  `author/name/version`, of the library.
.. code-block:: javascript
{
    ...
    "uses": {
        "mylib": "user/mylibrary/1"
    },
    ...
}
* Import the library and use its desired functionalities.
.. code-block:: python
import mylib
...
array = [0, 1, 2, 3]
array_processed = mylib.simple_function(array)
.. include:: links.rst
@@ -15,3 +15,5 @@
.. _python bindings: http://zeromq.org/bindings:python
.. _markdown: http://daringfireball.net/projects/markdown/
.. _restructuredtext: http://docutils.sourceforge.net/rst.html
.. _Getting Started with BEAT: https://www.idiap.ch/software/beat/docs/beat/docs/master/beat/introduction.html
.. _Algorithms: https://www.idiap.ch/software/beat/docs/beat/docs/master/beat/algorithms.html