diff --git a/doc/baselines.rst b/doc/baselines.rst
index 42858cf2c3631e18bbd5f845f719da11b3513b68..8ce69a0c0f8d976465660972895bb518360b61a2 100644
--- a/doc/baselines.rst
+++ b/doc/baselines.rst
@@ -8,61 +8,16 @@
 Executing Baseline Algorithms
 =============================
 
-The first thing you might want to do is to execute one of the baseline face recognition algorithms that are implemented in ``bob.bio``.
+.. todo::
+   Here we should:
+     - Briefly explain how to run an experiment
+     - Point to bob.bio.base for further explanation
+     - Show the available baselines
+     - Show the available databases
 
-Setting up your Database
-------------------------
 
-As mentioned in the documentation of :ref:`bob.bio.base <bob.bio.base>`, the image databases are not included in this package, so you have to download them.
-For example, you can easily download the images of the `AT&T database`_, for links to other utilizable image databases please read the :ref:`bob.bio.face.databases` section.
-
-By default, ``bob.bio`` does not know, where the images are located.
-Hence, before running experiments you have to specify the image database directories.
-How this is done is explained in more detail in the :ref:`bob.bio.base.installation`.
-
-
-Running Baseline Experiments
-----------------------------
-
-To run the baseline experiments, you can use the ``bob bio baseline`` script by just going to the console and typing:
-
-.. code-block:: sh
-
-   $ bob bio baseline <baseline> <database>
-
-This script is a simple wrapper for the ``verify.py`` script that is explained in more detail in :ref:`bob.bio.base.experiments`.
-The ``bob bio baseline --help`` option shows you, which other options you have.
-Here is an almost complete extract:
-
-* ``<baseline>``: The recognition algorithms that you want to execute.  
-* ``<database>``: The database and protocol you want to use.
-* ``--temp-directory``: The directory where temporary files of the experiments are put to.
-* ``--result-directory``: The directory where resulting score files of the experiments are put to.
-* ``--verbose``: Increase the verbosity level of the script.
-  By default, only the commands that are executed are printed, and the rest of the calculation runs quietly.
-  You can increase the verbosity by adding the ``--verbose`` parameter repeatedly (up to three times).
-
-Usually it is a good idea to have at least verbose level 2 (i.e., calling ``bob bio baseline --verbose --verbose``, or the short version ``bob bio baseline -vv``).
-
-
-You can find the list of readily available baselines using the ``resources.py``
-command:
-
-.. code-block:: sh
-
-    $ resources.py --types baseline
-
-
-Running in Parallel
-~~~~~~~~~~~~~~~~~~~
-
-To run the experiments in parallel, as usual you can define an SGE grid configuration, or run with parallel threads on the local machine.
-Hence, to run in the SGE grid, you can simply add the ``--grid`` command line option, without parameters.
-Similarly, to run the experiments in parallel on the local machine, simply add a ``--parallel <N>`` option, where ``<N>`` specifies the number of parallel jobs you want to execute.
-
-
-The Algorithms
---------------
+The Baselines
+-------------
 
 The algorithms present an (incomplete) set of state-of-the-art face recognition algorithms. Here is the list of short-cuts:
 
@@ -85,20 +40,6 @@ The algorithms present an (incomplete) set of state-of-the-art face recognition
   - algorithm : :py:class:`bob.bio.face.algorithm.GaborJet`
 
 
-* ``plda``: *Probabilistic LDA* (PLDA) [Pri07]_ is a probabilistic generative version of the LDA, in its scalable formulation of [ESM13]_.
-  Here, we also apply it on pixel-based representations of the image, though also other features should be possible.
-
-  - preprocessor : :py:class:`bob.bio.face.preprocessor.FaceCrop`
-  - feature : :py:class:`bob.bio.base.extractor.Linearize`
-  - algorithm : :py:class:`bob.bio.base.algorithm.PLDA`
-
-* ``bic``: In the *Bayesian Intrapersonal/Extrapersonal Classifier* (BIC) [MWP98]_, a gabor-grid-graph based similarity vector is classified to be intrapersonal (i.e., both images are from the same person) or extrapersonal, as explained in [GW09]_.
-
-  - preprocessor : :py:class:`bob.bio.face.preprocessor.FaceCrop`
-  - feature : :py:class:`bob.bio.face.extractor.GridGraph`
-  - algorithm : :py:class:`bob.bio.base.algorithm.BIC`
-
-
 Further algorithms are available, when the :ref:`bob.bio.gmm <bob.bio.gmm>` package is installed:
 
 * ``gmm``: *Gaussian Mixture Models* (GMM) [MM09]_ are extracted from *Discrete Cosine Transform* (DCT) block features.
@@ -124,92 +65,3 @@ Further algorithms are available, when the :ref:`bob.bio.gmm <bob.bio.gmm>` pack
 
 .. _bob.bio.base.baseline_results:
 
-Baseline Results
-----------------
-
-Let's trigger the ``bob bio baseline`` script to run the baselines on the ATnT dataset:
-
-.. code-block:: sh
-
-  $ bob bio baseline eigenface atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline lda atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline gabor_graph atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline gmm atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline isv atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline plda atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-  $ bob bio baseline bic atnt -vv -T <TEMP_DIR> -R <RESULT_DIR>
-
-
-Then, to evaluate the results, in terms of HTER, the script ``bob bio metrics`` should be executed as the following.
-
-
-.. code-block:: sh
-
-  $ bob bio metrics <RESULT_DIR>/atnt/eigenface/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/lda/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/gabor_graph/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/lgbphs/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/gmm/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/isv/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/plda/Default/nonorm/scores-dev \
-                    <RESULT_DIR>/atnt/bic/Default/nonorm/scores-dev --no-evaluation
-
-
-The aforementioned script will produce in the console the HTERs below for each baseline under the ATnT database:
-
-.. table:: The HTER results of the baseline algorithms on the AT&T database
-
-  +-------------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+
-  |  eigenface  |     lda     |  gaborgraph |    lgbphs   |     gmm     |     isv     |    plda     |     bic     |
-  +=============+=============+=============+=============+=============+=============+=============+=============+
-  |   9.0%      |    12.8%    |   6.0%      |    9.0%     |    1.0%     |    0.1%     |    10.8%    |    4.0%     |
-  +-------------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+
-
-
-Several types of evaluation can be executed, see ``bob bio --help`` for details.
-Particularly, here we can enable ROC curves, DET plots and CMC curves.
-
-.. code-block:: sh
-
-  $ bob bio roc <RESULT_DIR>/atnt/eigenface/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gabor_graph/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lgbphs/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gmm/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/isv/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/plda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/bic/Default/nonorm/scores-dev --no-evaluation \
-                -o ROC.pdf
-                
-  $ bob bio det <RESULT_DIR>/atnt/eigenface/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gabor_graph/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lgbphs/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gmm/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/isv/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/plda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/bic/Default/nonorm/scores-dev --no-evaluation \
-                -o DET.pdf
-
-  $ bob bio cmc <RESULT_DIR>/atnt/eigenface/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gabor_graph/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/lgbphs/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/gmm/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/isv/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/plda/Default/nonorm/scores-dev \
-                <RESULT_DIR>/atnt/bic/Default/nonorm/scores-dev --no-evaluation \
-                -o CMC.pdf
-               
-
-For the `AT&T database`_ the results should be as follows:
-
-.. image:: img/ROC.png
-   :width: 35%
-.. image:: img/DET.png
-   :width: 27%
-.. image:: img/CMC.png
-   :width: 35%
-
-
-.. include:: links.rst
diff --git a/doc/implementation.rst b/doc/implementation.rst
deleted file mode 100644
index 0ab7eb87b6236d76094b30edc3210ef0b1100fe6..0000000000000000000000000000000000000000
--- a/doc/implementation.rst
+++ /dev/null
@@ -1,224 +0,0 @@
-======================
-Implementation Details
-======================
-
-Image preprocessing
--------------------
-
-Image preprocessing is an important stage for face recognition.
-In :ref:`bob.bio.face <bob.bio.face>`, several different algorithms to perform photometric enhancement of facial images are implemented.
-These algorithms rely on facial images, which are aligned according to the eye locations, and scaled to a specific image resolution.
-
-Face cropping
-~~~~~~~~~~~~~
-
-However, for most of the image databases, in the original face images the faces are not aligned, but instead the eye locations are labeled by hand.
-Hence, before the photometric enhancement algorithms can be applied, faces must be aligned according to the hand-labeled eye locations.
-This can be achieved using the :py:class:`bob.bio.face.preprocessor.FaceCrop` class.
-It will take the image and the hand-labeled eye locations and crop the face according to some parameters, which can be defined in its constructor.
-
-So, now we have a preprocessors to perform face cropping, and some preprocessors to perform photometric enhancement.
-However, we might want to have a photometric enhancement *on top of* the aligned faces.
-In theory, there are several ways to achieve this:
-
-1. Copy the face alignment code into all photometric enhancement classes.
-
-   As copying code is generally a bad choice, we drop this option.
-
-
-2. Use the face cropping as a base class and derive the photometric enhancement classes from it.
-
-   This option is worth implementing, and this was the way, the FaceRecLib_ handled preprocessing.
-   However, it required to copy code inside the configuration files.
-   This means that, when we want to run on a different image resolution, we need to change all configuration files.
-   Option 2 dropped.
-
-
-3. Provide the face cropper as parameter to the photometric enhancement classes.
-
-   This option has the advantage that the configuration has to be written only once.
-   Also, we might change the face cropper to something else later, without needing to the the photometric enhancement code later on.
-   Option 3 accepted.
-
-Now, we have a closer look into how the image preprocessing is implemented.
-Let's take the example of the :py:class:`bob.bio.face.preprocessor.TanTriggs`.
-The constructor takes a ``face_cropper`` as parameter.
-This ``face_cropper`` can be ``None``, when the images are already aligned.
-It can also be a :py:class:`bob.bio.face.preprocessor.FaceCrop` object, which is contains the information, how faces are cropped.
-The :py:class:`bob.bio.face.preprocessor.TanTriggs` algorithm will use the ``face_cropper`` to crop the face, by passing the image and the annotations to the :py:meth:`bob.bio.face.preprocessor.FaceCrop.crop_face` function, perform the photometric enhancement on the cropped image, and return the result.
-
-So far, there is no advantage of option 2 over option 3, since the parameters for face cropping still have to be specified in the configuration file.
-But now comes the clue: The third option, how a ``face_cropper`` can be passed to the constructor is as a :ref:`Resource <bob.bio.face.preprocessors>` key, such as ``'face-crop-eyes'``.
-This will load the face cropping configuration from the registered resource, which has to be generated only once.
-So, to generate a TanTriggs preprocessor that performs face cropping, you can create:
-
-.. code-block:: py
-
-   preprocessor = bob.bio.face.preprocessor.TanTriggs(face_cropper = 'face-crop-eyes')
-
-
-Face detection
-~~~~~~~~~~~~~~
-
-Alright.
-Now if you have swallowed that, there comes the next step: face detection.
-Some of the databases do neither provide hand-labeled eye locations, nor are the images pre-cropped.
-However, we want to use the same algorithms on those images as well, so we have to detect the face (and the facial landmarks), crop the face and perform a photometric enhancement.
-So, image preprocessing becomes a three stage algorithm.
-
-How to combine the two stages, image alignment and photometric enhancement, we have seen before.
-The face detector takes as an input a ``face_cropper``, where we can use the same options to select a face cropper, just that we cannot pass ``None``.
-Interestingly, the face detector itself can be used as a ``face_cropper`` inside the photometric enhancement classes.
-Hence, to generate a TanTriggs preprocessor that performs face detection, crops the face and performs photometric enhancement, you can create:
-
-.. code-block:: py
-
-   face_cropper = bob.bio.base.load_resource("face-crop-eyes", "preprocessor")
-   annotator = bob.bio.base.load_resource("facedetect-eye-estimate", "annotator")
-   face_cropper.annotator = annotator
-   preprocessor = bob.bio.face.preprocessor.TanTriggs(face_cropper=face_cropper)
-
-Or simply (using the face detector :ref:`Resource <bob.bio.face.preprocessors>`):
-
-.. code-block:: py
-
-   preprocessor = bob.bio.face.preprocessor.TanTriggs(face_cropper = 'landmark-detect')
-
-
-.. _bob.bio.face.resources:
-
-Registered Resources
---------------------
-
-.. _bob.bio.face.databases:
-
-Databases
-~~~~~~~~~
-
-One important aspect of :ref:`bob.bio.face <bob.bio.face>` is the relatively large list of supported image data sets, including well-defined evaluation protocols.
-All databases rely on the :py:class:`bob.bio.base.database.BioDatabase` interface, which in turn uses the `verification_databases <https://www.idiap.ch/software/bob/packages>`_.
-Please check the link above for information on how to obtain the original data of those data sets.
-
-After downloading and extracting the original data of the data sets, it is necessary that the scripts know, where the data was installed.
-For this purpose, the ``verify.py`` script can read a special file, where those directories are stored, see :ref:`bob.bio.base.installation`.
-By default, this file is located in your home directory, but you can specify another file on command line.
-
-The other option is to change the directories directly inside the configuration files.
-Here is the list of files and replacement strings for all databases that are registered as resource, in alphabetical order:
-
-* The AT&T database of faces: ``'atnt'``
-
-  - Images: ``[YOUR_ATNT_DIRECTORY]``
-
-* AR face: ``'arface'``
-
-  - Images: ``[YOUR_ARFACE_DIRECTORY]``
-
-* BANCA (english): ``'banca'``
-
-  - Images: [YOUR_BANCA_DIRECTORY]
-
-* CAS-PEAL: ``'caspeal'``
-
-  - Images: ``[YOUR_CAS-PEAL_DIRECTORY]``
-
-* Face Recognition Grand Challenge v2 (FRGC): ``'frgc'``
-
-  - Complete directory: ``[YOUR_FRGC_DIRECTORY]``
-
-  .. note::
-     Due to implementation details, there will be a warning, when the FRGC database resource is loaded.
-     To avoid this warning, you have to modify the FRGC database configuration file.
-
-* The Good, the Bad and the Ugly (GBU): ``'gbu'``
-
-  - Images (taken from MBGC-V1): ``[YOUR_MBGC-V1_DIRECTORY]``
-
-* Labeled Faces in the Wild (LFW): ``'lfw-restricted'``, ``'lfw-unrestricted'``
-
-  - Images (aligned with funneling): ``[YOUR_LFW_FUNNELED_DIRECTORY]``
-
-  .. note::
-     In the :ref:`bob.db.lfw <bob.db.lfw>` database interface, we provide automatically detected eye locations, which were detected on the funneled images.
-     Face cropping using these eye locations will only work with the correct images.
-     However, when using the face detector, all types of images will work.
-
-* MOBIO: ``'mobio-image'``, ``'mobio-male'`` ``'mobio-female'``
-
-  - Images (the .png images): ``[YOUR_MOBIO_IMAGE_DIRECTORY]``
-  - Annotations (eyes): ``[YOUR_MOBIO_ANNOTATION_DIRECTORY]``
-
-* Multi-PIE: ``'multipie'``, ``'multipie-pose'``
-
-  - Images: ``[YOUR_MULTI-PIE_IMAGE_DIRECTORY]``
-  - Annotations: ``[YOUR_MULTI-PIE_ANNOTATION_DIRECTORY]``
-
-* Replay Attack ``'replay-img-licit'``, ``'replay-img-spoof'``
-
-  - Complete directory: ``[YOUR_REPLAY_ATTACK_DIRECTORY]``
-
-* Replay Mobile ``'replaymobile-img-licit'``, ``'replaymobile-img-spoof'``
-
-  - Complete directory: ``[YOUR_REPLAY_MOBILE_DIRECTORY]``
-
-* SC face: ``'scface'``
-
-  - Images: ``[YOUR_SC_FACE_DIRECTORY]``
-
-* XM2VTS: ``'xm2vts'``
-
-  - Images: ``[YOUR_XM2VTS_DIRECTORY]``
-
-* FARGO: ``'fargo'``
-
-  - Images: ``[YOUR_FARGO_DIRECTORY]``
-
-You can use the ``databases.py`` script to list, which data directories are correctly set up.
-
-In order to view the annotations inside your database on top of the images, you can use the ``display_face_annotations.py`` script that is provided.
-Please see ``display_face_annotations.py --help`` for more details and a list of options.
-
-
-.. _bob.bio.face.preprocessors:
-
-Preprocessors
-~~~~~~~~~~~~~
-
-Photometric enhancement algorithms are -- by default -- registered without face cropping, as ``'base'`` (no enhancement), ``'histogram'`` (histogram equalization), ``'tan-triggs'``, ``'self-quotient'`` (self quotient image) and ``'inorm-lbp'``.
-These resources should only be used, when original images are already cropped (such as in the `AT&T database`_).
-
-The default face cropping is performed by aligning the eye locations such that the eyes (in subject perspective) are located at: right eye: ``(16, 15)``, left eye: ``(16, 48)``, and the image is cropped to resolution ``(80, 64)`` pixels.
-This cropper is registered under the resource key ``'face-crop-eyes'``.
-Based on this cropping, photometric enhancement resources have a common addition: ``'histogram-crop'``, ``'tan-triggs-crop'``, ``'self-quotient-crop'`` and ``'inorm-lbp-crop'``.
-
-For face detection, two resources are registered.
-The ``'face-detect'`` resource will detect the face and perform ``'face-crop-eyes'``, without detecting the eye locations (fixed locations are taken instead).
-Hence, the in-plane rotation of the face rotation not corrected by ``'face-detect'``.
-On the other hand, in ``'landmark-detect'``, face detection and landmark localization are performed, and the face is aligned using ``'face-crop-eyes'``.
-Photometric enhancement is only registered as resource after landmark localization: ``'histogram-landmark'``, ``'tan-triggs-landmark'``, ``'self-quotient-landmark'`` and ``'inorm-lbp-landmark'``.
-
-
-.. _bob.bio.face.extractors:
-
-Feature extractors
-~~~~~~~~~~~~~~~~~~
-
-Only four types of features are registered as resources here:
-
-* ``'dct-blocks'``: DCT blocks with 12 pixels and full overlap, extracting 35 DCT features per block
-* ``'grid-graph'``: Gabor jets in grid graphs, with 8 pixels distance between nodes
-* ``'lgbphs'``: Local Gabor binary pattern histogram sequences with block-size of 8 and no overlap
-
-.. _bob.bio.face.algorithms:
-
-Face Recognition Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* ``'gabor-jet'``: Compares graphs of Gabor jets with using a dedicated Gabor jet similarity function [GHW12]_
-* ``'histogram'``: Compares histograms using histogram comparison functions
-* ``'bic-jet'``: Uses the :py:class:`bob.bio.base.algorithm.BIC` with vectors of Gabor jet similarities
-
-  .. note:: One particularity of this resource is that the function to compute the feature vectors to be classified in the BIC algorithm is actually implemented *in the configuration file*.
-
-
-.. include:: links.rst
diff --git a/doc/index.rst b/doc/index.rst
index 7a6cb6865d0044602032e7649b0ff0b228ebce2e..23bae2667d36221480784e589799554622cb7168 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -4,28 +4,44 @@
 
 .. _bob.bio.face:
 
-===========================================
- Face Recognition Algorithms and Databases
-===========================================
+=============================
+ Open Source Face Recognition
+=============================
 
-This package is part of the ``bob.bio`` packages, which provide open source tools to run comparable and reproducible biometric recognition experiments.
-In this package, tools for executing face recognition experiments are provided.
+
+This package provides open-source tools to run comparable and reproducible face recognition experiments.
 This includes:
 
 * Preprocessors to detect, align and photometrically enhance face images
 * Feature extractors that extract features from facial images
-* Recognition algorithms that are specialized on facial features, and
 * Facial image databases including their protocols.
+* Scripts that train CNNs
+
+For more detailed information about how this package is structured, please refer to the documentation of :ref:`bob.bio.base <bob.bio.base>`.
+
+
+Get Started
+============
+
+The easiest way to get started is by simply comparing two faces::
+
+   $ bob bio compare-samples -p gabor_graph me.png not_me.png
+
+.. warning::
+   No face detection is carried out with this command.
+
+List all the available face recognition algorithms with::
+
+   $ resources.py --type p
 
-Additionally, a set of baseline algorithms are defined, which integrate well with the two other ``bob.bio`` packages:
 
-* :ref:`bob.bio.gmm <bob.bio.gmm>` defines algorithms based on Gaussian mixture models
-* :ref:`bob.bio.video <bob.bio.video>` uses face recognition algorithms in video frames
+Get Started, Seriously
+======================
 
-For more detailed information about the structure of the ``bob.bio`` packages, please refer to the documentation of :ref:`bob.bio.base <bob.bio.base>`.
-Particularly, the installation of this and other ``bob.bio`` packages, please read the :ref:`bob.bio.base.installation`.
+.. todo::
 
-In the following, we provide more detailed information about the particularities of this package only.
+   Briefing about baselines
+
 
 Users Guide
 ===========
@@ -34,7 +50,7 @@ Users Guide
    :maxdepth: 2
 
    baselines
-   implementation
+   leaderboard
    references
    annotators
 
diff --git a/doc/leaderboard.rst b/doc/leaderboard.rst
new file mode 100644
index 0000000000000000000000000000000000000000..08c1257eff684a8c215099a5b9f39c3cbf734d9b
--- /dev/null
+++ b/doc/leaderboard.rst
@@ -0,0 +1,14 @@
+.. vim: set fileencoding=utf-8 :
+
+.. _bob.bio.face.leaderboard:
+
+=============================
+Leaderboard
+=============================
+
+.. todo::
+   Here we should:
+     - Present a custom leaderboard per database
+     - Present a script that runs at least one experiment of this leaderboard
+
+