Commit b23c6868 authored by akomaty@idiap.ch's avatar akomaty@idiap.ch

removed unnecessary commands starting with ./bin/

parent 9dbcc4e2
Pipeline #8929 passed with stages
in 27 minutes and 7 seconds
......@@ -103,7 +103,7 @@ def command_line_options(command_line_parameters):
help = 'Use the given variable instead of the "replace" keyword in the configuration file')
parser.add_argument('parameters', nargs = argparse.REMAINDER,
- help = "Parameters directly passed to the verify.py script. Use -- to separate this parameters from the parameters of this script. See './bin/verify.py --help' for a complete list of options.")
+ help = "Parameters directly passed to the verify.py script. Use -- to separate these parameters from the parameters of this script. See 'verify.py --help' for a complete list of options.")
bob.core.log.add_command_line_option(parser)
......
#!/bin/python
- # This file describes an exemplary configuration file that can be used in combination with the bin/parameter_test.py script.
+ # This file describes an exemplary configuration file that can be used in combination with the parameter_test.py script.
# The preprocessor uses two fake parameters, which are called #1 and #4
......
......@@ -500,7 +500,7 @@ def test_grid_search():
def test_scripts():
- # Tests the bin/preprocess.py, bin/extract.py, bin/enroll.py and bin/score.py scripts
+ # Tests the preprocess.py, extract.py, enroll.py and score.py scripts
test_dir = tempfile.mkdtemp(prefix='bobtest_')
data_file = os.path.join(test_dir, "data.hdf5")
annotation_file = os.path.join(test_dir, "annotatations.txt")
......
......@@ -130,7 +130,7 @@ def command_line_parser(description=__doc__, exclude_resources_from=[]):
flag_group.add_argument('-B', '--timer', choices=('real', 'system', 'user'), nargs = '*',
help = 'Measure and report the time required by the execution of the tool chain (only on local machine)')
flag_group.add_argument('-L', '--run-local-scheduler', action='store_true',
- help = 'Starts the local scheduler after submitting the jobs to the local queue (by default, local jobs must be started by hand, e.g., using ./bin/jman --local -vv run-scheduler -x)')
+ help = 'Starts the local scheduler after submitting the jobs to the local queue (by default, local jobs must be started by hand, e.g., using jman --local -vv run-scheduler -x)')
flag_group.add_argument('-N', '--nice', type=int, default=10,
help = 'Runs the local scheduler with the given nice value')
flag_group.add_argument('-D', '--delete-jobs-finished-with-status', choices = ('all', 'failure', 'success'),
......@@ -429,7 +429,7 @@ def write_info(args, command_line_parameters, executable):
If ``None``, the parameters specified by the user on command line are considered.
executable : str
- The name of the executable (such as ``'./bin/verify.py'``) that is used to run the experiments.
+ The name of the executable (such as ``'verify.py'``) that is used to run the experiments.
"""
if command_line_parameters is None:
command_line_parameters = sys.argv[1:]
......
......@@ -49,19 +49,19 @@ In these cases, the according steps are skipped.
Running Experiments (part I)
----------------------------
- To run an experiment, we provide a generic script ``./bin/verify.py``, which is highly parametrizable.
+ To run an experiment, we provide a generic script ``verify.py``, which is highly parametrizable.
To get a complete list of command line options, please run:
.. code-block:: sh
- $ ./bin/verify.py --help
+ $ verify.py --help
Whoops, that's a lot of options.
But, no worries, most of them have proper default values.
.. note::
Sometimes, command line options have a long version starting with ``--`` and a short one starting with a single ``-``.
- In this section, only the long names of the arguments are listed, please refer to ``./bin/verify.py --help`` (or short: ``./bin/faceverify.py -h``) for the abbreviations.
+ In this section, only the long names of the arguments are listed; please refer to ``verify.py --help`` (or short: ``faceverify.py -h``) for the abbreviations.
There are five command line options, which are required and sufficient to define the complete biometric recognition experiment.
These five options are:
......@@ -79,7 +79,7 @@ To get a list of registered resources, please call:
.. code-block:: sh
- $ ./bin/resources.py
+ $ resources.py
Each package in ``bob.bio`` defines its own resources, and the printed list of registered resources differs according to the installed packages.
If only ``bob.bio.base`` is installed, no databases and only one preprocessor will be listed.
......@@ -87,7 +87,7 @@ To see more details about the resources, i.e., the full constructor call for the
.. code-block:: sh
- $ ./bin/resources.py -dt algorithm
+ $ resources.py -dt algorithm
.. note::
......@@ -109,7 +109,7 @@ So, a typical biometric recognition experiment (in this case, face recognition)
.. code-block:: sh
- $ ./bin/verify.py --database mobio-image --preprocessor face-crop-eyes --extractor linearize --algorithm pca --sub-directory pca-experiment -vv
+ $ verify.py --database mobio-image --preprocessor face-crop-eyes --extractor linearize --algorithm pca --sub-directory pca-experiment -vv
.. note::
To be able to run exactly the command line from above, it requires to have :ref:`bob.bio.face <bob.bio.face>` installed.
......@@ -121,7 +121,7 @@ Usually, they will be called something like ``scores-dev``.
By default, you can find them in a sub-directory of the ``result`` directory, but you can change this option using the ``--result-directory`` command line option.
.. note::
- At Idiap_, the default result directory differs, see ``./bin/verify.py --help`` for your directory.
+ At Idiap_, the default result directory differs, see ``verify.py --help`` for your directory.
.. _bob.bio.base.evaluate:
......@@ -131,14 +131,14 @@ Evaluating Experiments
After the experiment has finished successfully, one or more text files containing all the scores are written.
- To evaluate the experiment, you can use the generic ``./bin/evaluate.py`` script, which has properties for all prevalent evaluation types, such as CMC, ROC and DET plots, as well as computing recognition rates, EER/HTER, Cllr and minDCF.
+ To evaluate the experiment, you can use the generic ``evaluate.py`` script, which has options for all prevalent evaluation types, such as CMC, ROC and DET plots, as well as computing recognition rates, EER/HTER, Cllr and minDCF.
Additionally, a combination of different algorithms can be plotted into the same files.
Just specify all the score files that you want to evaluate using the ``--dev-files`` option, and possible legends for the plots (in the same order) using the ``--legends`` option, and the according plots will be generated.
For example, to create a ROC curve for the experiment above, use:
.. code-block:: sh
- $ ./bin/evaluate.py --dev-files results/pca-experiment/male/nonorm/scores-dev --legend MOBIO --roc MOBIO_MALE_ROC.pdf -vv
+ $ evaluate.py --dev-files results/pca-experiment/male/nonorm/scores-dev --legend MOBIO --roc MOBIO_MALE_ROC.pdf -vv
Please note that there exists another file called ``Experiment.info`` inside the result directory.
This file is a pure text file and contains the complete configuration of the experiment.
......@@ -150,24 +150,24 @@ With this configuration it is possible to inspect all default parameters of the
Running in Parallel
-------------------
- One important property of the ``./bin/verify.py`` script is that it can run in parallel, using either several threads on the local machine, or an SGE grid.
+ One important property of the ``verify.py`` script is that it can run in parallel, using either several threads on the local machine, or an SGE grid.
To achieve that, ``bob.bio`` is well-integrated with our SGE grid toolkit GridTK_, which we have selected as a python package in the :ref:`Installation <bob.bio.base.installation>` section.
- The ``./bin/verify.py`` script can submit jobs either to the SGE grid, or to a local scheduler, keeping track of dependencies between the jobs.
+ The ``verify.py`` script can submit jobs either to the SGE grid, or to a local scheduler, keeping track of dependencies between the jobs.
The GridTK_ keeps a list of jobs in a local database, which by default is called ``submitted.sql3``, but which can be overwritten with the ``--gridtk-database-file`` option.
- Please refer to the `GridTK documentation <http://pythonhosted.org/gridtk>`_ for more details on how to use the Job Manager ``./bin/jman``.
+ Please refer to the `GridTK documentation <http://pythonhosted.org/gridtk>`_ for more details on how to use the Job Manager ``jman``.
Two different types of ``grid`` resources are defined, which can be used with the ``--grid`` command line option.
The first type of resources will submit jobs to an SGE grid.
They are mainly designed to run in the Idiap_ SGE grid and might need some adaptations to run on your grid.
- The second type of resources will submit jobs to a local queue, which needs to be run by hand (e.g., using ``./bin/jman --local run-scheduler --parallel 4``), or by using the command line option ``--run-local-scheduler``.
+ The second type of resources will submit jobs to a local queue, which needs to be run by hand (e.g., using ``jman --local run-scheduler --parallel 4``), or by using the command line option ``--run-local-scheduler``.
The difference between the two types of resources is that the local submission usually starts with ``local-``, while the SGE resource does not.
Hence, to run the same experiment as above using four parallel threads on the local machine, re-nicing the jobs to level 10, simply call:
.. code-block:: sh
- $ ./bin/verify.py --database mobio-image --preprocessor face-crop-eyes --extractor linearize --algorithm pca --sub-directory pca-experiment -vv --grid local-p4 --run-local-scheduler --nice 10
+ $ verify.py --database mobio-image --preprocessor face-crop-eyes --extractor linearize --algorithm pca --sub-directory pca-experiment -vv --grid local-p4 --run-local-scheduler --nice 10
.. note::
You might realize that the second execution of the same experiment is much faster than the first one.
......
......@@ -325,12 +325,12 @@ Particularly, we use a specific list of entry points, which are:
* ``bob.bio.config`` to register a Python module that contains the values of
resources and parameters to use for an experiment
- For each of the tools, several resources are defined, which you can list with the ``./bin/resources.py`` command line.
+ For each of the tools, several resources are defined, which you can list with the ``resources.py`` command line tool.
When you want to register your own resource, make sure that your configuration file is importable (usually it is sufficient to have an empty ``__init__.py`` file in the same directory as your configuration file).
Then, you can simply add a line inside the according ``entry_points`` section of the ``setup.py`` file (you might need to create that section, just follow the example of the ``setup.py`` file that you can find online in the base directory of our `bob.bio.base Gitlab page <http://gitlab.idiap.ch/bob/bob.bio.base>`__).
- After re-running ``./bin/buildout``, your new resource should be listed in the output of ``./bin/resources.py``.
+ After re-running ``buildout``, your new resource should be listed in the output of ``resources.py``.
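As an illustration, such an ``entry_points`` section might be sketched as follows. This is only a sketch: the package name ``my_package`` and the resource names are hypothetical placeholders, and only the ``bob.bio.config`` group name is taken from the text above (the other group name is an assumption following the same pattern).

```python
# Hypothetical entry_points fragment for a setup.py (placeholder names).
# Each entry maps a resource name to "importable.module:variable" (or just
# "importable.module" for bob.bio.config modules).
entry_points = {
    # assumed group name, following the pattern of the groups listed above
    'bob.bio.algorithm': [
        'my-algorithm = my_package.config.my_algorithm:algorithm',
    ],
    # group name mentioned in the text above
    'bob.bio.config': [
        'my-experiment = my_package.config.my_experiment',
    ],
}

# This dictionary would be passed to setuptools.setup(..., entry_points=entry_points).
```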
.. include:: links.rst
......@@ -51,10 +51,9 @@ Running the simple command line:
.. code-block:: sh
$ python bootstrap-buildout.py
- $ ./bin/buildout
+ $ buildout
- will the download and install all dependent packages locally (relative to your current working directory), and create a ``./bin`` directory containing all the necessary scripts to run the experiments.
+ will then download and install all dependent packages locally (relative to your current working directory), and add them to your environment.
Databases
......@@ -77,7 +76,7 @@ By default, this file is located in ``~/.bob_bio_databases.txt``, and it contain
If this file does not exist, feel free to create and populate it yourself.
- Please use ``./bin/databases.py`` for a list of known databases, where you can see the raw ``[YOUR_DATABASE_PATH]`` entries for all databases that you haven't updated, and the corrected paths for those you have.
+ Please use ``databases.py`` for a list of known databases, where you can see the raw ``[YOUR_DATABASE_PATH]`` entries for all databases that you haven't updated, and the corrected paths for those you have.
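The substitution mechanism can be pictured with a small sketch. This is an illustrative assumption about the ``key = value`` file format, not the actual ``bob.bio`` implementation, and the placeholder name used below is hypothetical (``databases.py`` lists the real ones):

```python
def load_database_paths(lines):
    """Parse '[PLACEHOLDER] = /real/path' lines into a dictionary."""
    paths = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        key, _, value = line.partition('=')
        paths[key.strip()] = value.strip()
    return paths

# A line as it might appear in ~/.bob_bio_databases.txt:
example = ["[YOUR_MOBIO_IMAGE_DIRECTORY] = /path/to/mobio/images"]
paths = load_database_paths(example)
```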
.. note::
......@@ -92,8 +91,8 @@ To verify your installation, you should run the script running the nose tests fo
.. code-block:: sh
- $ ./bin/nosetests -vs bob.bio.base
- $ ./bin/nosetests -vs bob.bio.gmm
+ $ nosetests -vs bob.bio.base
+ $ nosetests -vs bob.bio.gmm
...
Some of the tests that are run require the images of the `AT&T database`_ database.
......@@ -129,7 +128,7 @@ However, to generate this documentation locally, you call:
.. code-block:: sh
- $ ./bin/sphinx-build doc sphinx
+ $ sphinx-build doc sphinx
Afterward, the documentation is available and you can read it, e.g., by using:
......
......@@ -13,29 +13,29 @@ Now that we have learned the implementation details, we can have a closer look i
Running Experiments (part II)
-----------------------------
- As mentioned before, running biometric recognition experiments can be achieved using the ``./bin/verify.py`` command line.
+ As mentioned before, running biometric recognition experiments can be achieved using the ``verify.py`` command line.
In section :ref:`running_part_1`, we have used registered resources to run an experiment.
- However, the command line options of ``./bin/verify.py`` is more flexible, as you can have three different ways of defining tools:
+ However, the command line options of ``verify.py`` are more flexible, as there are three different ways of defining tools:
- 1. Choose a resource (see ``./bin/resources.py`` or ``./bin/verify.py --help`` for a list of registered resources):
+ 1. Choose a resource (see ``resources.py`` or ``verify.py --help`` for a list of registered resources):
.. code-block:: sh
- $ ./bin/verify.py --algorithm pca
+ $ verify.py --algorithm pca
2. Use a configuration file. Make sure that your configuration file has the correct variable name:
.. code-block:: sh
- $ ./bin/verify.py --algorithm bob/bio/base/config/algorithm/pca.py
+ $ verify.py --algorithm bob/bio/base/config/algorithm/pca.py
3. Instantiate a class on the command line. Usually, quotes ``"..."`` are required, and the ``--imports`` need to be specified:
.. code-block:: sh
- $ ./bin/verify.py --algorithm "bob.bio.base.algorithm.PCA(subspace_dimension = 30, distance_function = scipy.spatial.distance.euclidean, is_distance_function = True)" --imports bob.bio.base scipy.spatial
+ $ verify.py --algorithm "bob.bio.base.algorithm.PCA(subspace_dimension = 30, distance_function = scipy.spatial.distance.euclidean, is_distance_function = True)" --imports bob.bio.base scipy.spatial
All these three ways can be used for any of the five command line options: ``--database``, ``--preprocessor``, ``--extractor``, ``--algorithm`` and ``--grid``.
You can even mix these three types freely in a single command line.
......@@ -46,11 +46,11 @@ Score Level Fusion of Different Algorithms on the same Database
In several of our publications, we have shown that the combination of several biometric recognition algorithms is able to outperform each single algorithm.
This is particularly true, when the algorithms rely on different kind of data, e.g., we have `fused face and speaker recognition system on the MOBIO database <http://publications.idiap.ch/index.php/publications/show/2688>`__.
- As long as several algorithms are executed on the same database, we can simply generate a fusion system by using the ``./bin/fuse_scores.py`` script, generating a new score file:
+ As long as several algorithms are executed on the same database, we can simply generate a fusion system by using the ``fuse_scores.py`` script, generating a new score file:
.. code-block:: sh
- $ ./bin/fuse_scores.py --dev
+ $ fuse_scores.py --dev
This computation is based on the :py:class:`bob.learn.linear.CGLogRegTrainer`, which is trained on the scores of the development set files (``--dev-files``) for the given systems.
Afterwards, the fusion is applied to the ``--dev-files`` and the resulting score file is written to the file specified by ``--fused-dev-file``.
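Conceptually, what the trainer produces is a linear combination of the per-system scores whose weights are fitted by logistic regression on the development scores. The following toy sketch illustrates the idea only: the real tool uses :py:class:`bob.learn.linear.CGLogRegTrainer` with a conjugate-gradient solver, while this stand-in uses plain gradient descent on made-up scores.

```python
# Toy logistic-regression score fusion (illustration only, not the bob API).
import math

def train_fusion(dev_scores, labels, lr=0.5, iterations=2000):
    """Fit fusion weights on development scores (labels: 1 genuine, 0 impostor)."""
    n = len(dev_scores[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(iterations):
        grad_w = [0.0] * n
        grad_b = 0.0
        for x, y in zip(dev_scores, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-activation))  # sigmoid
            err = p - y
            for i in range(n):
                grad_w[i] += err * x[i]
            grad_b += err
        w = [wi - lr * g / len(dev_scores) for wi, g in zip(w, grad_w)]
        b -= lr * grad_b / len(dev_scores)
    return w, b

def fuse(w, b, scores):
    """Fused score: linear combination of the per-system scores."""
    return sum(wi * xi for wi, xi in zip(w, scores)) + b

# Made-up development scores of two systems (higher = more genuine-like).
dev = [(2.0, 1.5), (1.8, 2.2), (-1.0, -0.5), (-1.5, -2.0)]
labels = [1, 1, 0, 0]
w, b = train_fusion(dev, labels)
genuine_fused = fuse(w, b, (2.0, 2.0))
impostor_fused = fuse(w, b, (-2.0, -2.0))
```

The same fitted weights would then be applied unchanged to the evaluation scores, which is why the evaluation files must line up with the development files.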
......@@ -59,7 +59,7 @@ If ``--eval-files`` are specified, the same fusion that is trained on the develo
.. note::
When ``--eval-files`` are specified, they need to be in the same order as the ``dev-files``, otherwise the result is undefined.
- The resulting ``--fused-dev-file`` and ``fused-eval-file`` can then be evaluated normally, e.g., using the ``./bin/evaluate.py`` script.
+ The resulting ``--fused-dev-file`` and ``--fused-eval-file`` can then be evaluated normally, e.g., using the ``evaluate.py`` script.
.. _grid-search:
......@@ -70,13 +70,13 @@ Finding the Optimal Configuration
Sometimes, configurations of tools (preprocessors, extractors or algorithms) are highly dependent on the database or even the employed protocol.
Additionally, configuration parameters depend on each other.
``bob.bio`` provides a relatively simple setup that allows testing different configurations within the same task, and finding the best set of configurations.
- For this, the ``./bin/grid_search.py`` script can be employed.
+ For this, the ``grid_search.py`` script can be employed.
This script executes a configurable series of experiments, which reuse data as far as possible.
- Please check out ``./bin/grid_search.py --help`` for a list of command line options.
+ Please check out ``grid_search.py --help`` for a list of command line options.
The Configuration File
~~~~~~~~~~~~~~~~~~~~~~
- The most important parameter to the ``./bin/grid_search.py`` is the ``--configuration-file``.
+ The most important parameter to ``grid_search.py`` is the ``--configuration-file``.
This configuration file specifies which parameters of which parts of the algorithms will be tested.
An example for a configuration file can be found in the test scripts: ``bob/bio/base/test/dummy/grid_search.py``.
The configuration file is a common python file, which can contain certain variables:
......@@ -91,7 +91,7 @@ The configuration file is a common python file, which can contain certain variab
The variables from 1. to 3. usually contain instantiations for classes of :ref:`bob.bio.base.preprocessors`, :ref:`bob.bio.base.extractors` and :ref:`bob.bio.base.algorithms`, but also registered :ref:`bob.bio.base.resources` can be used.
For any of the parameters of the classes, a *placeholder* can be put.
By default, these place holders start with a # character, followed by a digit or character.
- The variables 1. to 3. can also be overridden by the command line options ``--preprocessor``, ``--extractor`` and ``--algorithm`` of the ``./bin/grid_search.py`` script.
+ The variables 1. to 3. can also be overridden by the command line options ``--preprocessor``, ``--extractor`` and ``--algorithm`` of the ``grid_search.py`` script.
The ``replace`` variable has to be set as a dictionary.
In it, you can define with which values your place holder key should be filled, and in which step of the tool chain execution this should happen.
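Following these conventions, a configuration file might be sketched like this. The class paths, step names (``'preprocess'``, ``'project'``) and sub-directory labels below are illustrative assumptions, not the actual ``bob.bio.base`` API; the real example ships as ``bob/bio/base/test/dummy/grid_search.py``.

```python
# Hypothetical grid_search.py configuration sketch (placeholder names).

# Tool instantiations (as strings) containing the place holders #1 and #2;
# a registered resource name can be used directly as well.
preprocessor = "my_package.preprocessor.MyPreprocessor(size=#1)"
extractor = "linearize"
algorithm = "my_package.algorithm.MyAlgorithm(distance_factor=#2)"

# For each step of the tool chain, map each place holder to the values to
# test; the inner keys label the sub-directories for intermediate files.
replace = {
    'preprocess': {
        '#1': {'32x32': 32, '64x64': 64},
    },
    'project': {
        '#2': {'f05': 0.5, 'f10': 1.0},
    },
}
```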
......@@ -167,10 +167,10 @@ If you, e.g., test, which ``scipy.spatial`` distance function works best for you
Further Command Line Options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- The ``./bin/grid_search.py`` script has a further set of command line options.
+ The ``grid_search.py`` script has a further set of command line options.
- The ``--database`` and the ``--protocol`` define, which database and (optionally) which protocol should be used.
- - The ``--sub-directory`` is similar to the one in the ``./bin/verify.py``.
+ - The ``--sub-directory`` is similar to the one in ``verify.py``.
- ``--result-directory`` and ``--temp-directory`` specify directories to write results and temporary files into. Defaults are ``./results/grid_search`` and ``./temp/grid_search`` in the current directory. Make sure that the ``--temp-directory`` can store sufficient amount of data.
- The ``--preprocessor``, ``--extractor`` and ``--algorithm`` can be used to override the ``preprocessor``, ``extractor`` and ``algorithm`` fields in the configuration file (in which case the configuration file does not need to contain these variables).
- The ``--grid`` option can select the SGE_ configuration.
......@@ -182,7 +182,7 @@ The ``./bin/grid_search.py`` script has a further set of command line options.
- The ``--dry-run`` flag should always be used before the final execution to see if the experiment definition works as expected.
- The ``--skip-when-existent`` flag will only execute the experiments that have not yet finished (i.e., where the resulting score files are not produced yet).
- With the ``--executable`` flag, you might select a different script rather than ``bob.bio.base.script.verify`` to run the experiments (such as ``bob.bio.gmm.script.verify_gmm``).
- - Finally, additional options might be sent to the ``./bin/verify.py`` script directly. These options might be put after a ``--`` separation.
+ - Finally, additional options might be sent to the ``verify.py`` script directly. These options might be put after a ``--`` separation.
Evaluation of Results
......@@ -193,14 +193,14 @@ Simply call:
.. code-block:: sh
- $ ./bin/collect_results.py -vv --directory [result-base-directory] --sort
+ $ collect_results.py -vv --directory [result-base-directory] --sort
This will iterate through all result files found in ``[result-base-directory]`` and sort the results according to the EER on the development set (the sorting criterion can be modified using the ``--criterion`` and the ``--sort-key`` command line options).
Hence, to find the best results of your grid search experiments (with default directories), simply run:
.. code-block:: sh
- $ ./bin/collect_results.py -vv --directory results/grid_search --sort --criterion EER --sort-key nonorm-dev
+ $ collect_results.py -vv --directory results/grid_search --sort --criterion EER --sort-key nonorm-dev
......