Commit bf2ed390 authored by Samuel GAIST, committed by Flavio TARSETTI

[doc][experiments] Remove edition part

parent 361661c9
@@ -112,96 +112,34 @@ same toolchain, analyzer or database are shown:
 
 .. image:: img/similar.*
 
-.. _newexperiment:
-
-Creating a new experiment
--------------------------
-
-On the main Experiments page, new experiments can be created by clicking on the
-``New`` button. You will immediately be prompted to select a ``Toolchain`` for
-your new experiment. Once you have specified a toolchain, a page similar to
-that in the image below will be displayed:
-
-.. image:: img/new.*
-
-The next step in constructing a new experiment is to define an experiment name
-and, finally, configure the contents of each and every block of the selected
-toolchain:
-
-* **Datasets**: choose the database, from among the existing databases
-  fulfilling the toolchain requirements, and then choose the protocol among
-  the ones available in the database. In this "simplified" configuration
-  mode, the platform chooses the contents of the input dataset blocks based
-  on preset configurations for particular databases and protocols. Use this
-  configuration mode for making sure you respect protocol usage for a given
-  database.
-
-  You may optionally click on ``Advanced`` to turn on advanced dataset
-  selection mode, in which you can hand-pick the datasets to be used in each
-  dataset block. In this mode, you're responsible for selecting the
-  appropriate dataset for each relevant block of your toolchain. You can mix
-  and match as you like. For example, train using a particular dataset, test
-  using another one.
-
-  You may reset back to "simplified" selection mode by clicking on ``Reset``.
-
-* **Blocks**: assign one algorithm to each block, such as image
-  pre-processing, classifier or similarity score function. If similar blocks
-  exist on the toolchain, selecting an algorithm for a block will make the
-  platform *suggest* the same algorithm for similar blocks. This mechanism is
-  in place to ease algorithm selection and avoid common mistakes. You may
-  override platform suggestions (marked in orange) at any moment by removing
-  the automatically assigned algorithm and choosing another one from the
-  list.
-
-  The user should make sure that the correct algorithm is selected for each
-  block. Configurable parameters, if provided by the selected algorithms, are
-  dynamically added to the ``Global Parameters`` panel, to the right of the
-  screen.
-
-  Use that panel to set up global values which are effective for all instances
-  of the same algorithm on the experiment. You may, optionally, override
-  global values locally, by clicking on the algorithm's arrow-down icon and
-  selecting which values, from the global parameters, to override for that
-  particular block.
-
-  Among local override options, you'll also find a handle to change the
-  environment, queue or the used number of slots (if the algorithm is
-  splittable) on a per-block basis. Use these options to allow the algorithm
-  on a specific block to run on a special queue (e.g., one that makes more
-  memory available), a special environment (e.g., with a different backend
-  that contains a specific library version you need) or with more slots.
-
-* **Analyzer**: the algorithm used to evaluate the performance and generate
-  results. Options for this block are similar to those for normal blocks.
-
-.. note::
-
-   As mentioned in the "Experiments" section of "Getting Started with BEAT"
-   in the `BEAT documentation`_, BEAT checks that connected datasets,
-   algorithms and analyzers produce or consume data in the right format. It
-   only presents options which are *compatible* with adjacent blocks.
-
-   Tip: If you reach a situation where no algorithms are available for a
-   given block, reset the experiment and try again, making sure the
-   algorithms you'd like to pick have compatible inputs and outputs
-   respecting the adjacent blocks.
-
-.. note:: **Queues and Environments**
-
-   For a better understanding of queues and environments, please consult our
-   dedicated :ref:`backend` section.
-
-The complete toolchain for the ``Experiment`` can be viewed on the
-``Toolchain`` tab (expanded view shown below):
-
-.. image:: img/toolchain.*
-
-After an ``Experiment`` has been set up completely, you can save the
-experiment in the |project| platform via the blue ``Save`` button or execute
-it immediately by clicking the green ``Go!`` button.
-
 Sharing an experiment
 ---------------------
 
 As with other components within the platform, all the elements that are
 created within the platform are private in nature, so this means that only
 the user that creates them has access to the information concerning that
 particular object. If you *Share* an experiment, it becomes accessible by the
 users of the platform. You can read the sharing properties of an experiment
 by browsing to the ``Sharing`` tab, on the relevant experiment page.
 
 .. note:: **Sharing status**
 
    The sharing status of an experiment is represented to the left of its
    name, in the format of an icon. An experiment can be in one of these three
    sharing states:
 
    * **Private** (icon shows a single person): If an experiment is private,
      you and only you can view its properties, run results, etc.
    * **Shared** (icon shows many persons): If an experiment is shared, only
      people on the sharing list can view its properties and run results.
    * **Public** (icon shows the globe): If an experiment is public, then
      users and platform visitors can view its properties, run results, etc.
 
 Sharing at the |project| platform is an irreversible procedure. For example,
 public objects cannot be made private again. If you share an object with a
 user or team and change your mind, you can still delete the object, for as
 long as it is not being used by you or another colleague with access (see
 more information on our :ref:`faq`).
 
 .. include:: ../links.rst
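The sharing rules kept by this commit (three states, with visibility only ever widening) can be sketched as a small state model. This is an illustration of the rule as documented, not platform code; ``SharingState`` and ``change_sharing`` are hypothetical names:

```python
from enum import IntEnum


class SharingState(IntEnum):
    """Hypothetical model of the three sharing states described above."""

    PRIVATE = 0  # only the creator can see the object
    SHARED = 1   # only people on the sharing list can see it
    PUBLIC = 2   # all platform users and visitors can see it


def change_sharing(current: SharingState, new: SharingState) -> SharingState:
    """Sharing is irreversible: an object may only become *more* visible."""
    if new < current:
        raise ValueError("sharing cannot be reverted (e.g. public -> private)")
    return new


# Widening visibility is allowed...
state = change_sharing(SharingState.PRIVATE, SharingState.SHARED)
state = change_sharing(state, SharingState.PUBLIC)

# ...but narrowing it raises an error.
try:
    change_sharing(SharingState.PUBLIC, SharingState.PRIVATE)
except ValueError as exc:
    print(exc)  # prints: sharing cannot be reverted (e.g. public -> private)
```

Deleting the object (while it is unused by others, as the text notes) is the only way out once it has been shared.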
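The section removed by this commit noted that BEAT only presents algorithms that are *compatible* with adjacent blocks, i.e. whose inputs and outputs match the data formats produced or consumed next to them. A minimal sketch of that filtering rule, with a made-up catalog and illustrative (not necessarily real) data-format names:

```python
def compatible(algorithm: dict, upstream_formats: set) -> bool:
    """An algorithm is offered for a block only if every input format it
    consumes is produced by an adjacent (upstream) block."""
    return set(algorithm["inputs"]) <= upstream_formats


# Hypothetical algorithm catalog; format names are illustrative only.
catalog = [
    {"name": "cropper", "inputs": {"system/array_2d_uint8/1"}},
    {"name": "scorer", "inputs": {"system/float/1"}},
]
upstream = {"system/array_2d_uint8/1"}

# Only algorithms whose inputs match the upstream format are offered.
offered = [a["name"] for a in catalog if compatible(a, upstream)]
print(offered)  # prints ['cropper']
```

If this list comes back empty for a block, the removed text's advice applies: reset the experiment and pick algorithms whose inputs and outputs respect the adjacent blocks.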