.. vim: set fileencoding=utf-8 :
.. Copyright (c) 2016 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.core module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..
==========
Databases
==========

A database is a collection of data files, one for each output of the database.
These data are the inputs to BEAT toolchains. It is therefore important to
define evaluation protocols, which describe how a specific system must use the
raw data of a given database.

For instance, a recognition system will typically use a subset of the data to
train a recognition `model`, while another subset of data will be used to
evaluate the performance of this model.
Structure of a database
-----------------------

A database has the following structure on disk::

    database_name/
        output1_name.data
        output2_name.data
        ...
        outputN_name.data

For a given database, the BEAT platform typically stores information about the
root folder containing this raw data, as well as a description of it.
Evaluation protocols
--------------------

A BEAT evaluation protocol consists of several ``datasets``, each dataset
having several ``outputs`` with well-defined data formats. In practice, each
dataset will typically be used for a different purpose.

For instance, in the case of a simple face recognition protocol, the database
may be split into three datasets: one for training, one for enrolling
client-specific models, and one for testing these models.

The training dataset may have two outputs: grayscale images as two-dimensional
arrays of type `uint8` and client ids as `uint64` integers.

The BEAT platform is data-driven, which means that all the outputs of a given
dataset are synchronized. The way the data is generated by each dataset is
defined in a piece of code called the ``database view``. It is important that
a database view behaves deterministically, for reproducibility purposes.
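The determinism requirement can be illustrated with a minimal sketch
(hypothetical code, not part of the BEAT API): a view that enumerates raw data
files from disk should impose a fixed ordering, because filesystem listing
order is arbitrary and may change between runs:

```python
import os


def index_files(root):
    """Return the data files under `root` in a stable, reproducible order."""
    return sorted(
        os.path.join(root, name)
        for name in os.listdir(root)  # os.listdir() order is arbitrary
    )
```

With such an ordering in place, two executions of the same view over the same
raw data produce exactly the same sequence of data blocks.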
Database set templates
----------------------

In practice, different databases used for the same purpose may have the exact
same datasets with the exact same outputs (and attached data formats). In this
case, it is interesting to abstract the definition of the database sets from a
given database. BEAT defines ``database set templates`` for this purpose.

For instance, the simple face recognition evaluation protocol described above,
which consists of three datasets and a few outputs, may be abstracted in a
database set template. This template defines the datasets, their outputs and
their corresponding data formats. Then, if several databases implement such a
protocol, they may rely on the same `database set template`. Similarly,
evaluation protocols testing different conditions (such as enrolling on clean
and testing on clean data vs. enrolling on clean and testing on noisy data)
may rely on the same database set template.

In practice, this reduces the amount of work needed to integrate new databases
and/or new evaluation protocols into the platform. Besides, at the experiment
level, this allows a toolchain to be re-used on a different database, with
almost no configuration changes for the user.
.. _beat-core-dataformats:
=============
Data formats
=============
Data formats formalize the interaction between algorithms and data sets, so
they can communicate data in an orderly manner. All data formats produced or
consumed by these objects must be formally declared. Two algorithms which must
directly communicate data must produce and consume the same type of data
objects.
A data format specifies a list of typed fields. An algorithm or data set
generating a block of data (via one of its outputs) **must** fill all the
fields declared in that data format. An algorithm consuming a block of data
(via one of its inputs) **must not** expect the presence of any other field
than the ones defined by the data format.
This section describes the definition of data formats and their programmatic
use in the Python language bindings.
Definition
----------
A data format is declared as a `JSON`_ object with several fields. For example,
the following declaration could represent the coordinates of a rectangular
region in an image:

.. code-block:: json

   {
       "x": "int32",
       "y": "int32",
       "width": "int32",
       "height": "int32"
   }

.. note::

   We have chosen to define objects inside the BEAT platform using JSON
   declarations, as JSON files can be easily validated, transferred through
   web-based APIs and provide an easy-to-read format for local inspection.

Each field must be named according to typical programming rules for variable
names. For example, these are valid names:

* ``my_field``
* ``_my_field``
* ``number1``

These are invalid field names:

* ``1number``
* ``my field``

The following regular expression is used to validate field names:
``^[a-zA-Z_][a-zA-Z0-9_-]*$``. In short, a field name has to start with a
letter or an underscore character and may contain, after that, any number of
alphanumeric characters, underscores or dashes.
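As a sketch, this rule can be checked directly with Python's :py:mod:`re`
module (``is_valid_field_name`` is a hypothetical helper, not part of
beat.core):

```python
import re

# Validation pattern quoted above: a letter or underscore first, then any
# number of letters, digits, underscores or dashes.
FIELD_NAME_RE = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_-]*$")


def is_valid_field_name(name):
    """Return True if `name` is an acceptable data format field name."""
    return FIELD_NAME_RE.match(name) is not None
```

For example, ``is_valid_field_name("my_field")`` holds while
``is_valid_field_name("1number")`` does not.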
By convention, fields prefixed and suffixed with a double underscore (``__``)
are reserved and should be avoided.

The special field ``#description`` can be used to store a short description of
the declared data format:

.. code-block:: json

   {
       "#description": "A rectangle in a pixeled image",
       "x": "int32",
       "y": "int32",
       "width": "int32",
       "height": "int32"
   }

The ``#description`` field is ignored in practice and only used for
informational purposes.
Each field in a declaration has a well-defined type, which can be one of:

* a primitive, simple type (see :ref:`beat-core-dataformats-simple`)
* a directly nested object (see :ref:`beat-core-dataformats-complex`)
* another data format (see :ref:`beat-core-dataformats-aggregation`)
* an array (see :ref:`beat-core-dataformats-array`)

A data format can also extend another one, as explained further down (see
:ref:`beat-core-dataformats-extension`).
.. _beat-core-dataformats-simple:
Simple types
------------
The following primitive data types are available in the BEAT platform:

* Integers: ``int8``, ``int16``, ``int32``, ``int64``
* Unsigned integers: ``uint8``, ``uint16``, ``uint32``, ``uint64``
* Floating-point numbers: ``float32``, ``float64``
* Complex numbers: ``complex64``, ``complex128``
* ``bool``
* ``string``

.. note::

   All primitive types are implemented using their :py:mod:`numpy`
   counterparts.

When determining if a block of data corresponds to a data format, the platform
will check that the value of each field can safely (without loss of precision)
be converted to the type declared by the data format. An error is generated if
you fail to follow these requirements.
For example, an ``int8`` *can* be converted, without precision loss, to an
``int16``, but a ``float32`` **cannot** be losslessly converted to an
``int32``. In case of doubt, you can test the `NumPy safe-casting rules`_
yourself in order to understand the imposed restrictions. If you wish to allow
for a precision loss in your code, you must do so explicitly (`Zen of
Python`_).
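The check described above can be reproduced with :py:func:`numpy.can_cast`,
whose default ``"safe"`` casting rule answers exactly the question "can this
conversion happen without loss of precision?":

```python
import numpy

# Widening an integer type never loses precision.
assert numpy.can_cast(numpy.int8, numpy.int16)

# An unsigned type fits entirely into a wider signed type.
assert numpy.can_cast(numpy.uint8, numpy.int16)

# A float32 value may not survive conversion to int32 intact.
assert not numpy.can_cast(numpy.float32, numpy.int32)
```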
.. _beat-core-dataformats-complex:
Complex types
-------------

A data format can be composed of complex objects formed by nesting other
types. The coordinates of a rectangular region in an image can be represented
like this:

.. code-block:: json

   {
       "coords": {
           "x": "int32",
           "y": "int32"
       },
       "size": {
           "width": "int32",
           "height": "int32"
       }
   }

.. _beat-core-dataformats-aggregation:
Aggregation
-----------

.. note::

   Data formats are named using 3 values joined by a ``/`` (slash) separator:
   the username of the author of the data format, an identifier and the
   object version (an integer starting from 1). Here are examples of data
   format names:

   * ``user/my_format/1``
   * ``johndoe/integers/37``
   * ``mary_mary/rectangle/2``

A field can use the declaration of another data format instead of specifying
its own declaration. Consider the following data formats, on their first
version, for user ``user``:

.. code-block:: json
   :caption: Two dimensional coordinates (``user/coordinates/1``)

   {
       "x": "int32",
       "y": "int32"
   }

.. code-block:: json
   :caption: Two dimensional size (``user/size/1``)

   {
       "width": "int32",
       "height": "int32"
   }

Now let's aggregate both previous formats in order to declare a new data format
for describing a rectangle:

.. code-block:: json
   :caption: The definition of a rectangle

   {
       "coords": "user/coordinates/1",
       "size": "user/size/1"
   }

.. _beat-core-dataformats-array:
Arrays
------
A field can be a multi-dimensional array of any other type. For instance,
consider the following example:

.. code-block:: json

   {
       "field1": [10, "int32"],
       "field2": [10, 5, "bool"]
   }

Here we declare that ``field1`` is a one-dimensional array of 10 32-bit signed
integers (``int32``), and ``field2`` is a two-dimensional array with 10 rows
and 5 columns of booleans.
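In Python, a block matching this declaration could be built with
:py:mod:`numpy` arrays of the corresponding shapes and element types (an
illustrative sketch, with zero-filled placeholder data):

```python
import numpy

# Shapes and dtypes follow the declaration above:
# "field1": [10, "int32"] and "field2": [10, 5, "bool"]
block = {
    "field1": numpy.zeros((10,), dtype=numpy.int32),
    "field2": numpy.zeros((10, 5), dtype=bool),
}

# The array shapes line up with the declared dimensions
assert block["field1"].shape == (10,)
assert block["field2"].shape == (10, 5)
```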

.. note::

   In the Python language representation of data formats, multi-dimensional
   arrays are implemented as :py:class:`numpy.ndarray` objects.

An array can have as many dimensions as you want. It can also contain objects
(either declared inline, or using another data format):

.. code-block:: json

   {
       "inline": [10, {
           "x": "int32",
           "y": "int32"
       }],
       "imported": [10, "user/coordinates/1"]
   }

It is also possible to declare an array without specifying the number of
elements in some of its dimensions, by using a size of 0 (zero):

.. code-block:: json

   {
       "field1": [0, "int32"],
       "field2": [0, 0, "bool"],
       "field3": [10, 0, "float32"]
   }

Here, ``field1`` is a one-dimensional array of 32-bit signed integers
(``int32``), ``field2`` is a two-dimensional array of booleans, and ``field3``
is a two-dimensional array of floating-point numbers (``float32``) whose first
dimension (the number of rows) is fixed to 10.

Note that the following declaration isn't valid: you can't fix a dimension if
the preceding one isn't fixed too:

.. code-block:: json

   {
       "error": [0, 10, "int32"]
   }

.. note::

   When determining if a block of data corresponds to a data format containing
   an array, the platform automatically checks that:

   * the number of dimensions is correct
   * the size of each declared dimension that isn't 0 is correct
   * the type of each value in the array is correct
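These three checks can be sketched in a few lines of Python (a hypothetical
helper, not the platform's actual validation code), taking the
``[size, ..., type]`` list from the JSON declaration and a
:py:class:`numpy.ndarray` value:

```python
import numpy


def matches_declaration(value, declaration):
    """Sketch of the checks above: does `value` fit `declaration`?

    `declaration` is the [size, ..., type] list from the JSON format,
    `value` is a numpy array.
    """
    sizes, typename = declaration[:-1], declaration[-1]
    if value.ndim != len(sizes):  # check the number of dimensions
        return False
    for size, actual in zip(sizes, value.shape):
        if size != 0 and size != actual:  # check each fixed (non-0) size
            return False
    return value.dtype == numpy.dtype(typename)  # check the element type
```

For instance, a 10-element ``int32`` vector matches ``[10, "int32"]``, while a
5-element one does not.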
.. _beat-core-dataformats-extension:
Extensions
----------
Besides aggregation, it is possible to extend data formats through
inheritance. In practice, inheriting from a data format is the same as pasting
its declaration right on top of the new format.

For example, one might implement a face detector algorithm and may want to
create a data format containing all the information about a face (say its
position, its size and the position of each eye). This could be done by
extending the type ``user/rectangular_area/1`` defined earlier:

.. code-block:: json

   {
       "#extends": "user/rectangular_area/1",
       "left_eye": "user/coordinates/1",
       "right_eye": "user/coordinates/1"
   }

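The effect of ``#extends`` can be illustrated with plain dictionaries (a
sketch of the semantics described above, not the actual implementation): the
parent declaration is pasted first, then the extending fields follow:

```python
# Parent and child declarations as plain dictionaries (field name -> type);
# the format names come from the examples in this section.
parent = {"x": "int32", "y": "int32", "width": "int32", "height": "int32"}
child = {"left_eye": "user/coordinates/1", "right_eye": "user/coordinates/1"}

# Extension behaves like pasting the parent declaration on top of the child:
effective = {**parent, **child}
```

The resulting format therefore exposes six fields: the four inherited ones
plus the two eye positions.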
.. _beat-core-dataformats-usage:
Python API
----------
Data formats are useful descriptions of data blocks that are consumed by
algorithmic code inside the platform. In BEAT, the user never instantiates
data formats directly. Instead, when a new object representing a data format
needs to be created, the user may just create a dictionary in which the keys
are the format field names and the values are instances of the type defined
for each field. If the type is a reference to another format, the user may
nest dictionaries so as to build objects of any complexity. When the
dictionary representing a data format is written to an algorithm output, the
data is properly validated.

This concept will become clearer when you read about algorithms and the way
they receive and produce data. Here is just a simple illustrative example:

.. testsetup:: test-output-write

   import numpy
   from beat.core.dataformat import DataFormat
   from beat.core.test.mocks import MockDataSink
   from beat.core.outputs import Output

   dataformat = DataFormat('/not/needed', {
       "x": "int32",
       "y": "int32",
       "width": "int32",
       "height": "int32"
   })
   assert dataformat.valid

   data_sink = MockDataSink(dataformat)
   output = Output('test', data_sink)

.. testcode:: test-output-write

   # suppose, for this example, `output` is provided to your algorithm
   output.write({
       "x": numpy.int32(10),
       "y": numpy.int32(20),
       "width": numpy.int32(100),
       "height": numpy.int32(100),
   })

.. include:: links.rst
.. _beat-core-experiments:
============
Experiments
============

An experiment is the combination of algorithms, datasets, a toolchain and
parameters that allows the platform to schedule and run the prescribed recipe
to produce displayable results. Defining a BEAT experiment can be seen as
configuring the processing blocks of a toolchain, such as selecting which
database, algorithms and algorithm parameters to use.
.. _beat-core-experiments-declaration:
Declaration of an experiment
----------------------------

.. note::

   One only needs to declare an experiment using these specifications when not
   using the web interface (i.e. when doing local development or using the web
   API). The web interface provides a user-friendly way to configure an
   experiment.

An experiment is declared in a JSON file and must contain at least the
following fields:

.. code-block:: javascript

   {
       "datasets": {
       },
       "blocks": {
       },
       "analyzers": {
       },
       "globals": {
       }
   }

.. _beat-core-experiments-datasets:
Declaration of the dataset(s)
-----------------------------

The dataset inputs are defined by the toolchain. However, the toolchain does
not describe which data to plug into each dataset input. This is the role of
the `datasets` field of an experiment.

For each dataset, an experiment must specify three attributes as follows:

.. code-block:: javascript

   {
       "datasets": {
           "templates": {
               "set": "templates",
               "protocol": "idiap",
               "database": "atnt/1"
           },
           ...
       },
       ...
   }

The key of an experiment dataset must correspond to the desired dataset name
from the toolchain. Then, three fields must be given:

* `database`: the database name and version
* `protocol`: the protocol name
* `set`: the dataset name of this database to associate with this toolchain
  dataset
.. _beat-core-experiments-blocks:
Declaration of the block(s)
---------------------------

The blocks are defined by the toolchain. However, the toolchain does not
describe which algorithm to run in each processing block, nor how each of
these algorithms is parametrized. This is the role of the `blocks` field of an
experiment.

For each block, an experiment must specify four attributes as follows:

.. code-block:: javascript

   {
       "blocks": {
           "linear_machine_training": {
               "inputs": {
                   "image": "image"
               },
               "parameters": {},
               "algorithm": "tutorial/pca/1",
               "outputs": {
                   "subspace": "subspace"
               }
           },
           ...
       },
       ...
   }

The key of an experiment block must correspond to the desired block from the
toolchain. Then, four fields must be given:

* `algorithm`: the algorithm to use (author_name/algorithm_name/version)
* `inputs`: the list of inputs. The key is the algorithm input, while the
  value is the corresponding toolchain input.
* `outputs`: the list of outputs. The key is the algorithm output, while the
  value is the corresponding toolchain output.
* `parameters`: the algorithm parameters to use for this processing block

.. note::

   Setting an algorithm in a processing block also sets the data formats of
   the outputs (and inputs) of this block. In particular, this has an impact
   on all the inputs of blocks connected to those outputs, which must have the
   same data formats (or be an extension of these data formats). The platform
   automatically validates that the data formats of consecutive blocks are
   compatible.

.. _beat-core-experiments-analyzers:
Declaration of the analyzer(s)
------------------------------

Analyzers are similar to algorithms, except that they run on toolchain
endpoints. Their configuration is very similar to that of regular blocks,
except that they have no `outputs`:

.. code-block:: javascript

   {
       "analyzers": {
           "analysis": {
               "inputs": {
                   "scores": "scores"
               },
               "algorithm": "tutorial/postperf/1"
           }
       },
       ...
   }

Global parameters
-----------------
Each block and analyzer may rely on its own local parameters. However, several
blocks may rely on the exact same parameters. In this case, it is more
convenient to define those globally.
For an experiment, this is achieved using the `globals` field in its JSON
declaration. For instance:

.. code-block:: javascript

   {
       "globals": {
           "queue": "Default",
           "environment": {
               "version": "0.0.3",
               "name": "Scientific Python 2.7"
           },
           "tutorial/pca/1": {
               "number-of-components": "5"
           }
       },
       ...
   }