diff --git a/doc/user/experiments/guide.rst b/doc/user/experiments/guide.rst
index 3264856e2dae61eea9f9c91d85655bbc8b16b8cd..d2492c8cc7dfbf8acb1e554bfd0a9ca76ea38702 100644
--- a/doc/user/experiments/guide.rst
+++ b/doc/user/experiments/guide.rst
@@ -31,14 +31,11 @@ The main purpose of the |project| platform is to allow researchers to construct
 Experiments. An ``Experiment`` is a specific combination of a dataset, a
 Toolchain, and a set of relevant, and appropriately parameterised, Algorithms.
 Executing an ``Experiment`` produces a set of numerical and graphical results.
-Once an ``Experiment`` has been created, it can be modified by replacing the
-toolchain algorithms, parameters, and the runtime environment each algorithm
-executes in.  (This is explained in :ref:`newexperiment` section.)
 
 Each experiment uses different resources available on the |project| platform
 such as different databases and algorithms. Each experiment has its own
 :ref:`toolchains` which cannot be changed after the experiment is created.
-Experiments can be shared and forked, to ensures maximum re-usability.
+Experiments can be shared and forked to ensure maximum re-usability.
 
 .. note:: **Naming Convention**
 
@@ -101,8 +98,8 @@ On the ``Execution Details`` tab , a graphical representation of the
 each block in the Toolchain, as well as information about the execution of each
 block (queuing time and execution time).
 
-Icons for several options are provided in the top-right region of the
-``Experiment`` page.  The list of icons should be similar to that shown in the
+Icons for several actions are provided in the top-right region of the
+``Experiment`` page. The list of icons should be similar to that shown in the
 image below:
 
 .. image:: img/SS_experiments_icons.*
@@ -116,14 +113,15 @@ These icons represent the following options (from left to right):
   * gold medal: request attestation
   * circular arrow: reset the experiment
   * ``fork``: fork a new, editable copy of this experiment
-  * page: add a report
+  * page: add the experiment to a report
   * blue lens: search for similar experiments
 
 (Placing the mouse over an icon will also display a tool-tip indicating the
 function of the icon.) The exact list of options provided will depend on what
 kind of experiment you are looking at.  For example, the ``gold medal`` will
 appear on the page only if you are permitted to request attestation for this
-particular experiment (i.e., if you are the owner of this experiment).
+particular experiment (i.e., if you are the owner of this experiment and it
+executed successfully).
 
 The button ``Similar experiments`` opens a new tab where experiments using the
 same toolchain, analyzer or database are shown:
@@ -143,28 +141,80 @@ that in the image below will be displayed:
 
 .. image:: img/new.*
 
-The next step in constructing a new experiment is to configure the parameters
-for the experiment.  As shown in the image, the parameters are:
+The next step in constructing a new experiment is to define an experiment name
+and, finally, configure the contents of every block of the selected toolchain:
 
-  * **Name**: name of the experiment, containing a meaningful description
-  * **Toolchain**: already chosen in the first step
   * **Datasets**: choose the database, from among the existing databases
     fulfilling the toolchain requirements, and then choose the protocol among
-    the ones available in the database.
-  * **Blocks**: in this subset of parameters, assign one algorithm to each
-    block, such as image pre-processing, classifier or similarity score
-    function. It should be noted that for each data-set (train, development,
-    test), one algorithm should be specified. By default, the same algorithm
-    will be assigned to similar blocks applied to different subsets of the
-    database.  The user should make sure that the correct algorithm is selected
-    for each block.
+    the ones available in the database. In this "simplified" configuration
+    mode, the platform chooses the contents of the input dataset blocks based
+    on preset configurations for particular databases and protocols. Use this
+    configuration mode to make sure you respect the protocol usage of a given
+    database.
+
+    You may optionally click on ``Advanced`` to turn on the advanced dataset
+    selection mode, in which you can hand-pick the datasets to be used in each
+    dataset block. In this mode, you're responsible for selecting the
+    appropriate dataset for each relevant block of your toolchain. You can mix
+    and match as you like: for example, train using one dataset and test using
+    another.
+
+    You may reset to the "simplified" selection mode by clicking on ``Reset``.
+
+  * **Blocks**: assign one algorithm to each block, such as image
+    pre-processing, classifier or similarity score function. If similar blocks
+    exist in the toolchain, selecting an algorithm for a block will make the
+    platform *suggest* the same algorithm for similar blocks. This mechanism is
+    in place to ease algorithm selection and avoid common mistakes. You may
+    override platform suggestions (marked in orange) at any moment by removing
+    the automatically assigned algorithm and choosing another one from the
+    list.
+
+    The user should make sure that the correct algorithm is selected for each
+    block. Configurable parameters, if provided by the selected algorithms, are
+    dynamically added to the ``Global Parameters`` panel, on the right-hand
+    side of the screen.
+
+    Use that panel to set up global values, which are effective for all
+    instances of the same algorithm in the experiment. You may optionally
+    override global values locally, by clicking on the algorithm's down-arrow
+    icon and selecting which values, from the global parameters, to override
+    for that particular block (a short sketch of this resolution rule follows
+    this list).
+
+    Among the local override options, you'll also find handles to change the
+    environment, the queue, or the number of slots used (if the algorithm is
+    splittable) on a per-block basis. Use these options to let the algorithm on
+    a specific block run on a special queue (e.g., one that makes more memory
+    available), in a special environment (e.g., one with a different backend
+    containing a specific library version you need), or with more slots.
+
   * **Analyzer**: algorithm used to evaluate the performance and generate
-    results.
+    results. Options for this block are similar to those for normal blocks.
+
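+To illustrate how global and local parameter values combine, here is a minimal
+sketch of the resolution rule in Python. This is not platform code: the
+dictionaries and names below are invented, and the sketch assumes that
+per-block (local) overrides take precedence over the ``Global Parameters``
+panel, which in turn takes precedence over any defaults declared by the
+algorithm.
+
+.. code-block:: python
+
+   # Illustrative sketch only -- not the platform's implementation.
+   defaults = {"number-of-components": 5}        # declared by the algorithm
+   global_params = {"number-of-components": 10}  # "Global Parameters" panel
+   local_overrides = {"queue": "large-memory"}   # hypothetical per-block value
+
+   # Later entries win: locals override globals, globals override defaults.
+   effective = {**defaults, **global_params, **local_overrides}
+   assert effective == {"number-of-components": 10, "queue": "large-memory"}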
+
+.. note:: **Algorithms, Datasets and Blocks**
+
+   While configuring the experiment, your objective is to fill in all
+   containers defined by the toolchain with valid datasets and algorithms or
+   analyzers. **The platform checks that connected datasets, algorithms and
+   analyzers produce or consume data in the right format**. It only presents
+   options which are *compatible* with adjacent blocks.
+
+   For example, if you choose a dataset ``A`` that outputs objects in the
+   format ``user/format/1`` for the ``train`` block of your experiment, then
+   the algorithm running on the block following ``train`` **must** consume
+   ``user/format/1`` on its input. Therefore, the choices of algorithms that
+   can run after ``train`` become limited the moment you choose dataset
+   ``A``. The configuration system *dynamically* updates to take those
+   constraints into account every time you make a selection, tightening the
+   global constraints of the experiment.
+
+   Tip: If you reach a situation where no algorithms are available for a given
+   block, reset the experiment and try again, making sure the algorithms you'd
+   like to pick have inputs and outputs compatible with the adjacent blocks.
+   (The sketch after this note makes the compatibility rule concrete.)
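+
+To make the compatibility rule concrete, here is a minimal sketch of the kind
+of filtering described in the note above. It is purely illustrative: the
+catalogue and most format names are invented (only ``user/format/1`` comes
+from the example), and the platform's real checks are richer than a single
+string comparison.
+
+.. code-block:: python
+
+   # Illustrative sketch only -- the platform's real checks are richer.
+   # An algorithm is offered for a block only if it consumes exactly what
+   # the upstream block produces.
+   upstream_output = "user/format/1"  # produced by dataset A above
+
+   catalogue = {  # hypothetical algorithm catalogue
+       "user/pca/1": {"consumes": "user/format/1"},
+       "user/lda/1": {"consumes": "user/format/1"},
+       "user/crop/2": {"consumes": "user/image/1"},
+   }
+
+   compatible = [name for name, algo in catalogue.items()
+                 if algo["consumes"] == upstream_output]
+   assert compatible == ["user/pca/1", "user/lda/1"]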
 
-This right-half of this page consists of two tabs: ``Global Parameters`` and
-``Toolchain``.  In the ``Global Parameters`` tab, set the environment for the
-experiments, the queue and global algorithm parameters, among available
-options.
 
 .. note:: **Queues and Environments**
 
@@ -179,7 +229,7 @@ The complete toolchain for the ``Experiment`` can be viewed on the
 
 
 After an ``Experiment`` has been set up completely, you can save the
-experiment in the |project| platform via the blue ``Save`` button, and execute
-it by clicking the green ``Go!`` button.
+experiment in the |project| platform via the blue ``Save`` button or execute
+it immediately by clicking the green ``Go!`` button.
 
 .. include:: ../links.rst
diff --git a/doc/user/experiments/script.rst b/doc/user/experiments/script.rst
index 7c3b018aeec00b8339f62095d867d7b7bfcf4cac..59c6e6d5b9047f283dc0011cb381746531cd6b59 100644
--- a/doc/user/experiments/script.rst
+++ b/doc/user/experiments/script.rst
@@ -35,7 +35,11 @@ This guide may be shot in the same video sequence or in separate ones.
 
 The user must be logged in to experience the features described below.
 
-The logged in user must have a toolchain 'tutorial/full_eigenface/1', database  'atnt/2', algorithms 'tutorial/cropping_rgb/3', algorithm 'tutorial/pca/2', algorithm 'tutorial/linear_machine_projection/4', algorithm 'tutorial/linear_machines_scoring/4', and algorithm 'tutorial/eerhter_postperf_iso/1' already presaved for the tutorial.
+The logged-in user must already have the toolchain 'tutorial/full_eigenface/1',
+the database 'atnt/2', and the algorithms 'tutorial/cropping_rgb/3',
+'tutorial/pca/2', 'tutorial/linear_machine_projection/4',
+'tutorial/linear_machines_scoring/4', and 'tutorial/eerhter_postperf_iso/1'
+saved for the tutorial.
 
 
 Script
@@ -47,39 +51,48 @@ Script
    * Action: Click on the 'Experiments' tab.
    * Outcome: 'Experiments' tab is highlighted
 
-2. Say: "Then, choose the experiment you like.  For example, we click on 'tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp'.  we will see the toolchain and results.  To repeat the experiment, we click on the 'Fork' button."
-   * Action: Click on 'tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp' and then
-     click on 'Fork' button.
+2. Say: "Then, choose the experiment you like.  For example, we click on
+   'tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp'.  We will see the
+   toolchain and results.  To repeat the experiment, we click on the 'Fork'
+   button."
+
+   * Action: Click on 'tutorial/tutorial/eigenface/1/atnt-eigenfaces-5-comp'
+     and then click on the 'Fork' button.
    * Outcome: an experiment editor will show.
 
-3. Say: "Certainly, we can create an experiment from scratch. To do so, just click 'New' button.
-        The homepage of 'select a toolchain' will show."
+3. Say: "Of course, we can also create an experiment from scratch. To do so,
+   just click the 'New' button. The 'select a toolchain' homepage will show."
 
-   * Action: Click back arrow button two times and then click on the 'New' button.
+   * Action: Click the back arrow button twice and then click on the 'New'
+     button.
    * Outcome: The homepage of 'select a toolchain' will show.
 
-4. Say: "Then we have to select a toolchain.  For example, we choose 'tutorial/full_eigenface/1',  An experiment editor will show."
+4. Say: "Then we have to select a toolchain.  For example, we choose
+   'tutorial/full_eigenface/1'.  An experiment editor will show."
 
    * Action: select 'tutorial/full_eigenface/1'
    * Outcome: An experiment editor will show.
 
-5. Say: "First, we need to name our experiment.  Here, we name it as 'my_exp_eigenface_5'"
+5. Say: "First, we need to name our experiment.  Here, we name it
+   'my_exp_eigenface_5'."
 
   * Action: type 'my_exp_eigenface_5' in the name section
    * Outcome: 'my_exp_eigenface_5' will show.
 
-6. Say: "In Datasets section, we select 'atnt/2' for database and 'idiap_test_eyepos'
-   for protocol"
+6. Say: "In the Datasets section, we select 'atnt/2' for the database and
+   'idiap_test_eyepos' for the protocol."
 
   * Action: select 'atnt/2' for database and 'idiap_test_eyepos' for protocol
    * Outcome: result will show.
 
-7. Say: "In Blocks section, we select 'tutorial/cropping_rgb/3'for cropping_rgb_train, then other related cropping_rgb modules will automatically pick the same algorithm"
+7. Say: "In the Blocks section, we select 'tutorial/cropping_rgb/3' for
+   cropping_rgb_train; the other related cropping_rgb blocks will then
+   automatically pick the same algorithm."
 
   * Action: select 'tutorial/cropping_rgb/3' for cropping_rgb_train
-   * Outcome: cropping_rgb_train, cropping_rgb_dev_templates,  cropping_rgb_dev_probes,
-              cropping_rgb_test_templates,  cropping_rgb_test_probes will automatic
-              select 'tutorial/cropping_rgb/3'.
+   * Outcome: cropping_rgb_train, cropping_rgb_dev_templates,
+     cropping_rgb_dev_probes, cropping_rgb_test_templates and
+     cropping_rgb_test_probes will automatically select
+     'tutorial/cropping_rgb/3'.
 
 8. Say: "In linear_machine_training, we select 'tutorial/pca/2'"
 
@@ -87,32 +100,42 @@ Script
    * Outcome: result will show.
 
 7. Say: "Then we select 'tutorial/linear_machine_projection/4' for
-         template_builder_dev. template_builder_test will automatic pick the same
-         algorithm"
+   template_builder_dev. template_builder_test will automatically pick the
+   same algorithm."
 
-   * Action: select 'tutorial/linear_machine_projection/4' for template_builder_dev
+   * Action: select 'tutorial/linear_machine_projection/4' for
+     template_builder_dev
   * Outcome: template_builder_dev and template_builder_test pick
-              'tutorial/linear_machine_projection/4'.
+     'tutorial/linear_machine_projection/4'.
 
-8. Say: "We also select 'tutorial/linear_machine_projection/4' for probe_builder_dev.
-         probe_builder_test will automatic pick the same algorithm"
+8. Say: "We also select 'tutorial/linear_machine_projection/4' for
+   probe_builder_dev. probe_builder_test will automatic pick the same
+   algorithm"
 
    * Action: select 'tutorial/linear_machine_projection/4' for probe_builder_dev
   * Outcome: probe_builder_dev and probe_builder_test pick
-              'tutorial/linear_machine_projection/4'.
+     'tutorial/linear_machine_projection/4'.
 
-9. Say: "For Scoring_dev, we select 'tutorial/linear_machines_scoring/4', Then the configuration window is popped up and we need to verify whether names in inputs of block are matched to inputs of algorithm.  If they are not matched, we can swrap the inputs of algorithm, otherwise we click close button to close the window"
+9. Say: "For scoring_dev, we select 'tutorial/linear_machines_scoring/4'.  A
+   configuration window then pops up and we need to verify whether the input
+   names of the block match the inputs of the algorithm.  If they do not
+   match, we can swap the inputs of the algorithm; otherwise we click the
+   close button to close the window."
 
-   * Action: select 'tutorial/linear_machines_scoring/4' and then close the window
+   * Action: select 'tutorial/linear_machines_scoring/4' and then close the
+     window
    * Outcome: 'tutorial/linear_machines_scoring/4' is selected for scoring_dev.
 
-9. Say: "we repeat to select 'tutorial/linear_machines_scoring/4' for scoring_test and
-         check whether names in inputs of block are matched to inputs of algorithm in
-         the pop-up window.  If they are not matched, we swrap the inputs of algorithm,
-         otherwise we click close button to close the window"
+9. Say: "We select 'tutorial/linear_machines_scoring/4' again, this time for
+   scoring_test, and check in the pop-up window whether the input names of
+   the block match the inputs of the algorithm.  If they do not match, we
+   swap the inputs of the algorithm; otherwise we click the close button to
+   close the window."
 
-   * Action: select 'tutorial/linear_machines_scoring/4' and then close the window
-   * Outcome: 'tutorial/linear_machines_scoring/4' is selected for scoring_test.
+   * Action: select 'tutorial/linear_machines_scoring/4' and then close the
+     window
+   * Outcome: 'tutorial/linear_machines_scoring/4' is selected for
+     scoring_test.
 
 10. Say: "In Analyzers, we select 'tutorial/eerhter_postperf_iso/1'"
 
@@ -120,13 +143,14 @@ Script
    * Outcome: 'tutorial/eerhter_postperf_iso/1' is selected for analysis.
 
 11. Say: "Some algorithms may have parameters we can define to evaluate the
-    performance.  For example, we can change the 'number-of-components' in  pca from 5
-    to 10."
+    performance.  For example, we can change the 'number-of-components' in pca
+    from 5 to 10."
 
    * Action: set the 'number-of-components' to 10
    * Outcome: 10 is set.
 
-12. Say: "Finally, we click 'Run' to proceed the experiment.  ROC curve, EER, HTER and the time taken in each block will be reported."
+12. Say: "Finally, we click 'Run' to start the experiment.  ROC curve, EER,
+    HTER and the time taken in each block will be reported."
 
    * Action: Click on the Run button
    * Outcome: Results such as ROC, EER, will be displayed.
diff --git a/doc/user/faq.rst b/doc/user/faq.rst
index aa51ca171c24ef049677f0a551d8cf3864a9e880..48746c612d1eb472e846e5633aec7e00218beaec 100644
--- a/doc/user/faq.rst
+++ b/doc/user/faq.rst
@@ -223,8 +223,8 @@ Want to write code in ``<language of choice here>``? Not a problem! The
 |project| platform can handle algorithms in different programming languages
 (for as long as there is a compatible backend installed). An experiment can be
 formed by putting together the best of each world. Feature extractors written
-in Matlab, machine learning code written in Python and running on GPUs - you
-name it.
+in C, machine learning code written in Python and running on GPUs - you name
+it.
 
 That said, it is possible to run the |project| platform on any cloud provider
 and have your algorithms using whatever frameworks you'd like to. Just make
@@ -236,9 +236,25 @@ Can the platform run on a cloud provider?
 -----------------------------------------
 
 Yes, it is possible to make the |project| platform to run on Amazon EC2 (or the
-like) for as long as legal constraints are respected. Finally, it is just a
-virtualization solution. Your datasets can be stored on Amazon S3 as well. The
-only requirement is that the backend can access them transparently.
+like) as long as legal constraints are respected. Services like these are just
+virtualization solutions. Your datasets can be stored on Amazon S3 as well.
+The only requirement is that the backend can access them transparently.
+
+
+Can I install the |project| platform on my premises? Is it free?
+----------------------------------------------------------------
+
+Yes, you can. We distribute the |project| platform code as an open-source
+project, under the `GNU Affero GPL v3 license`_. To get started, download the
+package ``beat.web`` and read the README.rst file located at its root for
+quick-start instructions:
+
+.. code-block:: sh
+
+   git clone https://gitlab.idiap.ch/beat/beat.web
+   cat beat.web/README.rst  # quick-start instructions
+
+In case of issues, please share your questions through our `development mailing
+list`_.
 
 
 .. include:: links.rst
diff --git a/doc/user/links.rst b/doc/user/links.rst
index d5dd1ee50a538bc10133f0aac820a9549054a200..0825525d4221e8d75c4d8e019bf5951f5c728033 100644
--- a/doc/user/links.rst
+++ b/doc/user/links.rst
@@ -47,3 +47,4 @@
 .. _json: https://en.wikipedia.org/wiki/JSON
 .. _numpy: http://www.numpy.org/
 .. _our gitlab repository: https://gitlab.idiap.ch/beat/
+.. _gnu affero gpl v3 license: http://www.gnu.org/licenses/agpl-3.0.en.html