Commit bf2ed390 authored by Samuel GAIST, committed by Flavio TARSETTI
[doc][experiments] Remove edition part

same toolchain, analyzer or database are shown:

.. image:: img/similar.*

.. _newexperiment:

Creating a new experiment
=========================

On the main Experiments page, new experiments can be created by clicking on the
``New`` button. You will immediately be prompted to select a ``Toolchain`` for
your new experiment. Once you have specified a toolchain, a page similar to
that in the image below will be displayed:

.. image:: img/new.*

The next step in constructing a new experiment is to define an experiment name
and, finally, configure the contents of each and every block of the selected
toolchain:

* **Datasets**: choose the database, from among the existing databases
  fulfilling the toolchain requirements, and then choose the protocol among
  the ones available in that database. In this "simplified" configuration
  mode, the platform chooses the contents of the input dataset blocks based
  on preset configurations for particular databases and protocols. Use this
  configuration mode to make sure you respect protocol usage for a given
  database.

  You may optionally click on ``Advanced`` to turn on the advanced dataset
  selection mode, in which you can hand-pick the datasets to be used in each
  dataset block. In this mode, you are responsible for selecting the
  appropriate dataset for each relevant block of your toolchain. You can mix
  and match as you like, for example, training with one dataset and testing
  with another.

  You may reset back to the "simplified" selection mode by clicking on
  ``Reset``.

* **Blocks**: assign one algorithm to each block, such as image
  pre-processing, a classifier or a similarity score function. If similar
  blocks exist on the toolchain, selecting an algorithm for a block will make
  the platform *suggest* the same algorithm for the similar blocks. This
  mechanism is in place to ease algorithm selection and avoid common
  mistakes. You may override platform suggestions (marked in orange) at any
  moment by removing the automatically assigned algorithm and choosing
  another one from the available options.

  Make sure that the correct algorithm is selected for each block.
  Configurable parameters, if provided by the selected algorithms, are
  dynamically added to the ``Global Parameters`` panel, to the right of the
  page. Use that panel to set up global values which are effective for all
  instances of the same algorithm in the experiment. You may, optionally,
  override global values locally, by clicking on the algorithm's down-arrow
  icon and selecting which values, from the global parameters, to override
  for that particular block.

  Among the local override options, you will also find handles to change the
  environment, the queue or the number of slots used (if the algorithm is
  splittable) on a per-block basis. Use these options to let the algorithm of
  a specific block run on a special queue (e.g., one that makes more memory
  available), in a special environment (e.g., with a different backend that
  contains a specific library version you need) or with more slots.

* **Analyzer**: the algorithm used to evaluate the performance and generate
  results. The options for this block are similar to those for normal blocks.
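The interplay between the ``Global Parameters`` panel and per-block overrides can be pictured as a simple two-level merge. The sketch below is purely illustrative (the function name and parameter names are hypothetical, not part of the BEAT API): global values apply to every instance of the same algorithm, and any locally overridden values take precedence for that block only.

```python
# Illustrative sketch only: BEAT resolves parameters internally; the names
# used here (resolve_parameters, threshold, iterations) are hypothetical.

def resolve_parameters(global_params, local_overrides):
    """Effective parameters for one block: start from the global values
    set for the algorithm, then let per-block overrides win."""
    effective = dict(global_params)
    effective.update(local_overrides)
    return effective

# Global values, effective for all instances of the same algorithm...
global_params = {"threshold": 0.5, "iterations": 10}

# ...except where a block overrides selected values locally.
block_overrides = {"threshold": 0.7}

print(resolve_parameters(global_params, block_overrides))
# {'threshold': 0.7, 'iterations': 10}
```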

.. note::

   As mentioned in the "Experiments" section of "Getting Started with BEAT"
   in the `BEAT documentation`_, BEAT checks that connected datasets,
   algorithms and analyzers produce or consume data in the right format. It
   only presents options which are *compatible* with adjacent blocks.

   Tip: if you reach a situation where no algorithms are available for a
   given block, reset the experiment and try again, making sure the
   algorithms you'd like to pick have compatible inputs and outputs with
   respect to the adjacent blocks.

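The compatibility rule described in the note above can be sketched as a data-format match between adjacent blocks. This is a simplified model, not platform code: the function and the example format names are assumptions made for illustration.

```python
# Hypothetical sketch of the compatibility check described above: an
# algorithm is only offered for a block if every one of its inputs can be
# fed by an upstream output of the same data format.

def compatible(candidate_inputs, upstream_outputs):
    """True when each input format of the candidate algorithm is produced
    by some output of the adjacent, already-configured blocks."""
    return all(fmt in upstream_outputs.values()
               for fmt in candidate_inputs.values())

upstream = {"image": "system/array_2d_uint8/1"}      # illustrative format name
candidate = {"image": "system/array_2d_uint8/1"}
mismatched = {"signal": "system/array_1d_floats/1"}

print(compatible(candidate, upstream))    # True: formats line up
print(compatible(mismatched, upstream))   # False: no algorithm offered
```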

.. note:: **Queues and Environments**

   For a better understanding of queues and environments, please consult our
   dedicated :ref:`backend` section.

The complete toolchain for the ``Experiment`` can be viewed on the
``Toolchain`` tab (expanded view shown below):

.. image:: img/toolchain.*

After an ``Experiment`` has been completely set up, you can save the
experiment in the |project| platform via the blue ``Save`` button or execute
it immediately by clicking the green ``Go!`` button.

Sharing an experiment
=====================

As with other components of the platform, all the elements created within the
platform are private by nature, meaning that only the user who creates them
has access to the information concerning that particular object. If you
*Share* an experiment, it becomes accessible to other users of the platform.
You can read the sharing properties of an experiment by browsing to the
``Sharing`` tab on the relevant experiment page.

.. note:: **Sharing status**

   The sharing status of an experiment is represented by an icon to the left
   of its name. An experiment can be in one of these three sharing states:

   * **Private** (icon shows a single person): if an experiment is private,
     only you can view its properties, run results, etc.
   * **Shared** (icon shows several people): if an experiment is shared, only
     the people on its sharing list can view its properties and run results.
   * **Public** (icon shows the globe): if an experiment is public, all users
     and platform visitors can view its properties, run results, etc.

Sharing on the |project| platform is an irreversible procedure. For example,
public objects cannot be made private again. If you share an object with a
user or team and change your mind, you can still delete the object, as long
as it is not being used by you or by other colleagues with access (see more
information in our :ref:`faq`).

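The irreversibility of sharing can be modelled as a one-way state machine over the three states listed above. The class below is an illustrative sketch, not platform code: visibility may only widen (private, then shared, then public), never narrow.

```python
# Illustrative model of the one-way sharing states described above
# (not BEAT platform code).

ORDER = ["private", "shared", "public"]  # visibility can only widen

class Experiment:
    def __init__(self):
        self.sharing = "private"

    def share(self, new_state):
        """Move to a wider visibility; narrowing raises an error."""
        if ORDER.index(new_state) < ORDER.index(self.sharing):
            raise ValueError(
                f"sharing is irreversible: cannot go from "
                f"{self.sharing} back to {new_state}")
        self.sharing = new_state

exp = Experiment()
exp.share("shared")      # fine: private -> shared
exp.share("public")      # fine: shared -> public
try:
    exp.share("private")
except ValueError as err:
    print(err)           # prints the irreversibility error
```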
.. include:: ../links.rst