@@ -37,23 +37,6 @@ such as different databases and algorithms. Each experiment has its own
:ref:`toolchains` which cannot be changed after the experiment is created.
Experiments can be shared and forked, to ensure maximum re-usability.
.. note:: **Naming Convention**
Experiments are named using five values joined by a ``/`` (slash)
separator:
* **username**: indicates the author of the experiment
* **toolchain username**: indicates the author of the toolchain used for
that experiment
* **toolchain name**: indicates the name of the toolchain used for that
experiment
* **toolchain version**: indicates the version (integer starting from
``1``) of the toolchain used for the experiment
* **name**: an identifier for the object
Each tuple of these five components defines a *unique* experiment name
inside the platform. For examples, you may browse `publicly available
experiments`_.
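The naming convention above can be sketched as follows. This is a minimal illustration, not part of the BEAT API; the component values used here (``alice``, ``eigenfaces``, and so on) are hypothetical:

```python
# Illustrative sketch: composing and splitting an experiment name from its
# five components, joined by the "/" separator described above.
def make_experiment_name(username, tc_username, tc_name, tc_version, name):
    """Join the five components into a unique experiment name."""
    return "/".join([username, tc_username, tc_name, str(tc_version), name])


def split_experiment_name(full_name):
    """Split a full experiment name back into its five components."""
    username, tc_username, tc_name, tc_version, name = full_name.split("/")
    return username, tc_username, tc_name, int(tc_version), name


# Hypothetical example: alice runs an experiment on bob's toolchain
# "eigenfaces", version 1, and calls it "my-test".
full = make_experiment_name("alice", "bob", "eigenfaces", 1, "my-test")
# full == "alice/bob/eigenfaces/1/my-test"
```

Because all five components participate in the name, two experiments by the same author on the same toolchain version remain distinguishable by their final ``name`` component.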
Displaying an existing experiment
...
...
@@ -111,7 +94,7 @@ These icons represent the following options (from left to right):
* red cross: delete the experiment
* blue tag: rename the experiment
* gold medal: request attestation
* circular arrow: reset the experiment
* circular arrow: reset the experiment (if some of the blocks in the experiment have been run before, the platform will use the cached outputs of those blocks)
* ``fork``: fork a new, editable copy of this experiment
* page: add experiment to report
* blue lens: search for similar experiments
...
...
@@ -193,22 +176,11 @@ toolchain:
results. Options for this block are similar for normal blocks.
.. note:: **Algorithms, Datasets and Blocks**
While configuring the experiment, your objective is to fill in all
containers defined by the toolchain with valid datasets and algorithms or
analyzers. **The platform will check that connected datasets, algorithms and
analyzers produce or consume data in the right format**. It only presents
options which are *compatible* with adjacent blocks.
.. note::
For example, if you choose dataset ``A`` for block ``train`` of your
experiment that outputs objects in the format ``user/format/1``, then the
algorithm running on the block following ``train`` **must** consume
``user/format/1`` on its input. Therefore, the choices of algorithms that
can run after ``train`` become limited the moment you choose dataset
``A``. The configuration system will *dynamically* update to take those
constraints into consideration every time you make a selection, tightening
the global constraints for the experiment.
As mentioned in :ref:`beat-system-experiments-blocks`, BEAT checks that connected datasets, algorithms and
analyzers produce or consume data in the right format. It only presents
options which are *compatible* with adjacent blocks.
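The compatibility filtering described above can be sketched as a simple match between the format produced by the preceding block and the format each candidate algorithm consumes. This is an illustrative model only; the algorithm records and format names below are hypothetical, not the platform's internal representation:

```python
# Illustrative sketch of the compatibility rule: only algorithms whose
# declared input format matches the output format of the preceding block
# (e.g. the dataset chosen for "train") remain selectable.
def compatible_algorithms(upstream_output_format, algorithms):
    """Return the subset of algorithms that can consume the given format."""
    return [a for a in algorithms
            if a["input_format"] == upstream_output_format]


# Hypothetical catalogue of candidate algorithms for the next block.
algorithms = [
    {"name": "user/pca/1", "input_format": "user/format/1"},
    {"name": "user/lda/1", "input_format": "user/other_format/1"},
]

# After choosing a dataset that outputs "user/format/1", only matching
# algorithms are offered.
choices = compatible_algorithms("user/format/1", algorithms)
```

Each new selection narrows the set of valid options for adjacent blocks, which is why the available choices shrink dynamically as the experiment is configured.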
Tip: If you reach a situation where no algorithms are available for a given
block, reset the experiment and try again, making sure the algorithms you'd