Commit 796ea79f authored by Yannick DAYER

[doc] Grammar checks

parent 46af7452
@@ -6,7 +6,7 @@ Annotating biometric databases
It is often required to annotate the biometric samples before running
experiments. This often happens in face biometrics where each face is detected
-and location of landmarks on the face is saved prior to running experiments.
+and the location of landmarks on the face is saved before running experiments.
To facilitate the process of annotating a new database, this package provides
a command-line script:
......
@@ -10,7 +10,7 @@
=====================================
-``bob.bio.base`` provides open source tools to run comparable and reproducible biometric recognition experiments.
+``bob.bio.base`` provides open-source tools to run comparable and reproducible biometric recognition experiments.
It covers the following biometric traits:
* Face Biometrics: `bob.bio.face <http://gitlab.idiap.ch/bob/bob.bio.face>`__
......
@@ -9,10 +9,10 @@
======================================================
The transition to the pipeline concept changed the way data goes from the raw sample to the extracted features, and how the biometric algorithm is applied.
-However a set of tools was implemented to support the older bob implementations (designated as *legacy*) of database, preprocessor, extractor and algorithms.
+However, a set of tools was implemented to support the older bob implementations (designated as *legacy*) of database, preprocessor, extractor, and algorithms.
-This adaptation consists of wrappers classes that take a legacy bob class as input and constructs a :py:class:`Transformer` or :py:class:`BiometricAlgorithm` out of it.
+This adaptation consists of wrapper classes that take a legacy bob class as input and construct a :py:class:`Transformer` or :py:class:`BiometricAlgorithm` out of it.
.. WARNING::
@@ -24,10 +24,10 @@ This adaptation consists of wrappers classes that take a legacy bob class as inp
Legacy FileList Database interface
----------------------------------
-This is a similar database interface to :ref:`the CSV file interface <bob.bio.base.database.csv_file_interface>`, but takes information from a series of two- or three-columns files without header instead of CSV files and returns a legacy database (use a :ref:`Database Connector <bob.bio.base.legacy.database_connector>` to create a database interface).
+This is a similar database interface to :ref:`the CSV file interface <bob.bio.base.database.csv_file_interface>`, but takes information from a series of two- or three-column files without a header instead of CSV files and returns a legacy database (use a :ref:`Database Connector <bob.bio.base.legacy.database_connector>` to create a database interface).
-The files are separated in three sets: ``'world'`` (training; optional), ``'dev'`` (development; required) and ``'eval'`` (evaluation; optional) set to be used by the biometric verification algorithm.
+The files are separated into three sets: ``'world'`` (training; optional), ``'dev'`` (development; required), and ``'eval'`` (evaluation; optional), to be used by the biometric verification algorithm.
The summarized complete structure of the list base directory (here denoted as ``basedir``) containing all the files should be like this:
.. code-block:: text
@@ -86,7 +86,7 @@ The following list files need to be created:
* two *world files*, with default names ``train_optional_world_1.lst`` and ``train_optional_world_2.lst``, in default sub-directory ``norm``.
The format is the same as for the world file.
-These files are not needed for most of biometric recognition algorithms, hence, they need to be specified only if the algorithm uses them.
+These files are not needed for most biometric recognition algorithms; hence, they need to be specified only if the algorithm uses them.
- **For enrollment**:
@@ -102,14 +102,14 @@ The following list files need to be created:
There exist two different ways to implement file lists used for scoring.
* The first (and simpler) variant is to define a file list of probe files, where all probe files will be tested against all models.
-Hence, you need to specify one or two *probe files* for the development (and evaluation) set, with default name ``for_probes.lst`` in the default sub-directories ``dev`` (and ``eval``).
+Hence, you need to specify one (or two) *probe files* for the development (and evaluation) set, with the default name ``for_probes.lst`` in the default sub-directory ``dev`` (and ``eval``).
They are 2-column files with format:
.. code-block:: text
filename client_id
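As an illustration of this 2-column format, a minimal parser could look like the following sketch (``parse_probe_list`` is a hypothetical helper written for this explanation, not part of ``bob.bio.base``):

```python
def parse_probe_list(text):
    """Parse the contents of a 2-column probe list: 'filename client_id' per line.

    Hypothetical helper for illustration only; not part of bob.bio.base.
    """
    probes = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        # Each non-empty line carries exactly two whitespace-separated columns
        filename, client_id = line.split()
        probes.append((filename, client_id))
    return probes
```

With such a list, every parsed probe file would be compared against every enrolled model.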
-* The other option is to specify a detailed list, which probe file should be be compared with which client model, i.e., one or two *score files* for the development (and evaluation) set, with default name ``for_scores.lst`` in the sub-directories ``dev`` (and ``eval``).
+* The other option is to specify a detailed list of which probe files should be compared with which client model, i.e., one (or two) *score files* for the development (and evaluation) set, with the default name ``for_scores.lst`` in the sub-directory ``dev`` (and ``eval``).
These files need to be provided only if the scoring is to be done selectively, meaning by creating a sparse probe/model scoring matrix.
They are 4-column files with format:
@@ -154,7 +154,7 @@ Legacy Database Connector
This *legacy database wrapper* is used to translate an old ``bob.db`` package's functions into a bob pipelines database interface.
-It uses :py:func:`~bob.db.base.objects` to retrieve a list of files for each roles (``world``, ``references`` and ``probes``) and specified groups (``dev`` and ``eval``), and creates the according :py:class:`Sample` and :py:class:`SampleSet` lists.
+It uses :py:func:`~bob.db.base.objects` to retrieve a list of files for each role (``world``, ``references``, and ``probes``) and specified group (``dev`` and ``eval``) and creates the matching :py:class:`Sample` and :py:class:`SampleSet` lists.
This example shows the creation of the Mobio database interface in the bob.pipelines format from the legacy bob.db:
......
@@ -13,13 +13,13 @@ Such CLI command is an entry-point to several pipelines implemented under :py:mo
This tutorial will focus on the pipeline called **vanilla-biometrics**.
In our very first example, we've shown how to compare two samples using the ``bob bio compare-samples`` command, where the "biometric" algorithm is set with the argument ``-p``.
-The ``-p`` points to a so called :py:mod:`~bob.bio.base.pipelines.VanillaBiometricsPipeline`.
+The ``-p`` points to a so-called :py:mod:`~bob.bio.base.pipelines.VanillaBiometricsPipeline`.
Running a biometric experiment with Vanilla Biometrics
------------------------------------------------------
-A set of commands are available to run Vanilla Biometrics experiments from the shell. Those are on the form of::
+A set of commands is available to run Vanilla Biometrics experiments from the shell. Those are in the form of::
$ bob bio pipelines vanilla-biometrics [OPTIONS] -p <pipeline>
@@ -31,15 +31,15 @@ $ bob bio pipelines vanilla-biometrics --help
.. _bob.bio.base.build_pipelines:
-Building you own Vanilla Biometrics pipeline
+Building your own Vanilla Biometrics pipeline
----------------------------------------------
-The Vanilla Biometrics represents **the simplest** biometrics pipeline possible and for this reason is the backbone for any biometric test in this library.
-It's basically composed of:
+The Vanilla Biometrics represents **the simplest** biometrics pipeline possible and, for this reason, is the backbone for any biometric test in this library.
+It's composed of:
-:ref:`Transformers <bob.bio.base.transformer>`: Instances of :py:class:`sklearn.base.BaseEstimator` and :py:class:`sklearn.base.TransformerMixin`. A Transformer can be trained if needed, and applies one or several transformations on an input sample. It must implement the :py:meth:`~Transformer.transform` and a :py:meth:`~Transformer.fit` methods. Multiple transformers can be chained together, each working on the output of the previous one.
+:ref:`Transformers <bob.bio.base.transformer>`: Instances of :py:class:`sklearn.base.BaseEstimator` and :py:class:`sklearn.base.TransformerMixin`. A Transformer can be trained if needed and applies one or several transformations on an input sample. It must implement a :py:meth:`~Transformer.transform` and a :py:meth:`~Transformer.fit` method. Multiple transformers can be chained together, each working on the output of the previous one.
-A :ref:`Biometric Algorithm <bob.bio.base.biometric_algorithm>`: Instance of :py:class:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm` that implements the methods :py:meth:`enroll` and :py:meth:`score` to generate a biometric experiment results.
+A :ref:`Biometric Algorithm <bob.bio.base.biometric_algorithm>`: Instance of :py:class:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm` that implements the methods :py:meth:`enroll` and :py:meth:`score` to generate biometric experiment results.
Running the vanilla-biometrics pipeline will retrieve samples from a dataset and generate score files.
It does not encompass the analysis of those scores (Error rates, ROC, DET). This can be done with other utilities of the ``bob.bio`` packages.
@@ -50,8 +50,8 @@ It does not encompass the analysis of those scores (Error rates, ROC, DET). This
Transformer
^^^^^^^^^^^
-Following the structure of `pipelines of scikit-learn <https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html>`__, a Transformer is a class that must implement a :py:meth:`~Transformer.transform` and a :py:meth:`~Transformer.fit` methods.
-This class represents a simple operation that can be applied on data, like preprocessing of a sample or extraction of a feature vector from data.
+Following the structure of `pipelines of scikit-learn <https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html>`__, a Transformer is a class that must implement a :py:meth:`~Transformer.transform` and a :py:meth:`~Transformer.fit` method.
+This class represents a simple operation that can be applied to data, like preprocessing of a sample or extraction of a feature vector from data.
A :py:class:`Transformer` must implement the following methods:
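As a concrete sketch of this fit/transform contract, here is a minimal scikit-learn-style Transformer (``MeanCenter`` is a made-up example for illustration, not part of ``bob.bio.base``):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class MeanCenter(BaseEstimator, TransformerMixin):
    """Toy Transformer that centers features on the training mean."""

    def fit(self, X, y=None):
        # Learn the per-feature mean from the training data
        self.mean_ = np.asarray(X, dtype=float).mean(axis=0)
        return self

    def transform(self, X):
        # Apply the learned centering to any input samples
        return np.asarray(X, dtype=float) - self.mean_
```

Chained in a :py:class:`sklearn.pipeline.Pipeline`, each such step receives the output of the previous one.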
@@ -94,23 +94,23 @@ Biometric Algorithm
A biometric algorithm represents the enrollment and scoring phase of a biometric experiment.
-A biometric algorithm is a class implementing the methods :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.enroll` that allows to save the identity representation of a subject, and :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score` that computes the score of a subject's sample against a previously enrolled model.
+A biometric algorithm is a class implementing the methods :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.enroll`, which saves the identity representation of a subject, and :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score`, which computes the score of a subject's sample against a previously enrolled model.
A common example of a biometric algorithm class would compute the mean vector of the features of each enrolled subject, and the scoring would be done by measuring the distance between the unknown identity vector and the enrolled mean vector.
.. py:method:: BiometricAlgorithm.enroll(reference_sample)
-The :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.enroll` method takes extracted features (data that went trough transformers) of the *reference* samples as input.
-It should save (on memory or on disk) a representation of the identity of each subject for later comparison with the :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score` method.
+The :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.enroll` method takes extracted features (data that went through transformers) of the *reference* samples as input.
+It should save (in memory or on disk) a representation of the identity of each subject for later comparison with the :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score` method.
.. py:method:: BiometricAlgorithm.score(model, probe_sample)
-The :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score` method also takes extracted features (data that went trough transformers) as input, but coming from the *probe* samples.
+The :py:meth:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm.score` method also takes extracted features (data that went through transformers) as input but coming from the *probe* samples.
It should compare the probe sample to the model and output a similarity score.
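The enroll/score contract just described can be sketched in plain Python with NumPy (free functions written for illustration only; the actual API is a :py:class:`BioAlgorithm` subclass, and these helpers are not part of ``bob.bio.base``):

```python
import numpy as np


def enroll(reference_features):
    """Build a model as the mean vector of one subject's reference features."""
    return np.mean(np.asarray(reference_features, dtype=float), axis=0)


def score(model, probe_features):
    """Similarity as the inverse of the Euclidean distance to the model.

    A distance of 0 gives the maximum score of 1.0; larger distances
    give scores closer to 0.
    """
    distance = np.linalg.norm(model - np.asarray(probe_features, dtype=float))
    return 1.0 / (1.0 + distance)
```

A real implementation would provide the same logic by overriding the :py:meth:`enroll` and :py:meth:`score` methods of a :py:class:`BioAlgorithm` subclass.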
-Here is a simple example of a custom :py:class:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm` implementation that compute a model with the mean of multiple reference samples, and measures the inverse of the distance as similarity score.
+Here is a simple example of a custom :py:class:`~bob.bio.base.pipelines.vanilla_biometrics.abstract_classes.BioAlgorithm` implementation that computes a model with the mean of multiple reference samples, and measures the inverse of the distance as a similarity score.
.. code-block:: python
@@ -175,7 +175,7 @@ This will create a ``results`` folder with a ``scores-dev`` file in it containin
Minimal example of the vanilla-biometrics pipeline
--------------------------------------------------
-Find below a complete file containing a Transformer, a Biometric Algorithm and the construction of the pipeline:
+Find below a complete file containing a Transformer, a Biometric Algorithm, and the construction of the pipeline:
.. This raw html is used to create a "hidden" code block that can be revealed by clicking on its summary
......