removed unnecessary commands starting with ./bin/

Merged Alain KOMATY requested to merge hackathon2017 into master
1 file changed: +14 −14
@@ -144,7 +144,7 @@ As you can see, most of the patches with high quality values overlap.
Using the Command line
======================
-Finally, we have developed a script, namely ``./bin/detect_faces.py``, which integrates most of the above functionality.
+Finally, we have developed a script, namely ``detect_faces.py``, which integrates most of the above functionality.
Given an image, the script will detect one or more faces in it, and display the bounding boxes around them.
When the script is run with default parameters, it will detect only the single face with the highest confidence in the image, just as :py:func:`detect_single_face` would.
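
For reference, a minimal invocation could look like the sketch below; passing the image as a positional argument is an assumption here, so check ``detect_faces.py --help`` for the exact interface:

.. code-block:: sh

   # hypothetical invocation; the positional image argument is an assumption
   $ detect_faces.py my_image.jpg
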
@@ -191,7 +191,7 @@ Training Data
=============
The first thing that the cascade training requires is training data -- the more the better.
-To ease the collection of positive and negative training data, a script ``./bin/collect_training_data.py`` is provided.
+To ease the collection of positive and negative training data, a script ``collect_training_data.py`` is provided.
This script has several options:
- ``--image-directory``: This directory is scanned for images with the given ``--image-extension``, and all found images are considered.
@@ -202,7 +202,7 @@ Positive data is defined by annotations of the images, which can be translated i
E.g., for frontal facial images, bounding boxes can be defined by the eye coordinates (see :py:func:`bounding_box_from_annotation`) or directly by specifying the top-left and bottom-right coordinates.
There are two different ways in which annotations can be read.
One way is to read annotations from an annotation file using the :py:func:`read_annotation_file` function, which can read various types of annotations.
-To use this function, simply specify the command line options for the ``./bin/collect_training_data.py`` script:
+To use this function, simply specify the command line options for the ``collect_training_data.py`` script:
- ``--annotation-directory``: For each image in the ``--image-directory``, an annotation file with the given ``--annotation-extension`` needs to be available in this directory.
- ``--annotation-type``: The way in which annotations are stored in the annotation files (see :py:func:`read_annotation_file`).
@@ -223,9 +223,9 @@ For example, to collect training data from three different databases, you could
.. code-block:: sh
-$ ./bin/collect_training_data.py --image-directory <...>/Yale-B/data --image-extension .pgm --annotation-directory <...>/Yale-B/annotations --annotation-type named --output-file Yale-B.txt
-$ ./bin/collect_training_data.py --database xm2vts --image-directory <...>/xm2vtsdb/images --protocols lp1 lp2 darkened-lp1 darkened-lp2 --groups world dev eval --output-file XM2VTS.txt
-$ ./bin/collect_training_data.py --image-directory <...>/FDHD-background/data --image-extension .jpeg --no-annotations --output-file FDHD.txt
+$ collect_training_data.py --image-directory <...>/Yale-B/data --image-extension .pgm --annotation-directory <...>/Yale-B/annotations --annotation-type named --output-file Yale-B.txt
+$ collect_training_data.py --database xm2vts --image-directory <...>/xm2vtsdb/images --protocols lp1 lp2 darkened-lp1 darkened-lp2 --groups world dev eval --output-file XM2VTS.txt
+$ collect_training_data.py --image-directory <...>/FDHD-background/data --image-extension .jpeg --no-annotations --output-file FDHD.txt
The first command scans the ``Yale-B/data`` directory for ``.pgm`` images and the ``Yale-B/annotations`` directory for annotations of the ``named`` type, the second uses the ``bob.db.xm2vts`` interface to collect images, and the third collects only background ``.jpeg`` data from the ``FDHD-background/data`` directory.
@@ -233,7 +233,7 @@ Training Feature Extraction
===========================
Training the classifier is split into two steps.
-First, the ``./bin/extract_training_features.py`` script can be used to extract training features from a list of database files as generated by the ``./bin/collect_training_data.py`` script.
+First, the ``extract_training_features.py`` script can be used to extract training features from a list of database files as generated by the ``collect_training_data.py`` script.
Again, several options can be selected:
- ``--file-lists``: The file lists to process
@@ -280,21 +280,21 @@ For example, the pre-trained cascade uses the following options:
.. code-block:: sh
-$ ./bin/extract_training_features.py --file-lists Yale-B.txt XM2VTS.txt FDHD.txt ... --lbp-scale 1 --lbp-variant mct
+$ extract_training_features.py --file-lists Yale-B.txt XM2VTS.txt FDHD.txt ... --lbp-scale 1 --lbp-variant mct
Finally, the ``--parallel`` option can be used to run the feature extraction in parallel.
Particularly in combination with `GridTK <https://pypi.python.org/pypi/gridtk>`_, processing can be sped up tremendously:
.. code-block:: sh
-$ ./bin/jman submit --parallel 64 -- ./bin/extract_training_features.py ... --parallel 64
+$ jman submit --parallel 64 -- `which extract_training_features.py` ... --parallel 64
Cascade Training
================
-To finally train the face detector cascade, the ``./bin/train_detector.py`` script is provided.
-This script reads the training features as extracted by the ``./bin/extract_training_features.py`` script and generates a regular boosted cascade of weak classifiers.
+To finally train the face detector cascade, the ``train_detector.py`` script is provided.
+This script reads the training features as extracted by the ``extract_training_features.py`` script and generates a regular boosted cascade of weak classifiers.
Again, the script has several options:
- ``--feature-directory``: Reads all features from the given directory.
@@ -318,7 +318,7 @@ These numbers can be changed using the options:
- ``--classifiers-per-round``: The number of classifiers for each cascade step.
- ``--cascade-threshold``: The threshold, below which patches should be rejected (the same threshold for each cascade step).
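
For example, a training call could look like the sketch below; the feature directory and the numeric values are illustrative placeholders, not the script's documented defaults:

.. code-block:: sh

   # sketch only: directory name and values are placeholders, not defaults
   $ train_detector.py --feature-directory features --classifiers-per-round 25 --cascade-threshold 0
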
-This package also provides a script ``./bin/validate_cascade.py`` to automatically adapt the steps and thresholds of the cascade based on a validation set.
+This package also provides a script ``validate_cascade.py`` to automatically adapt the steps and thresholds of the cascade based on a validation set.
However, the use of this script is not encouraged, since I could not yet come up with a proper default configuration.
The Shipped Cascade
@@ -341,10 +341,10 @@ Feature extraction was performed using a single scale MCT, as:
.. code-block:: sh
-$ ./bin/extract_training_features.py -vv --lbp-scale 1 --lbp-variant mct --negative-examples-every 1 --filelists [ALL of ABOVE]
+$ extract_training_features.py -vv --lbp-scale 1 --lbp-variant mct --negative-examples-every 1 --filelists [ALL of ABOVE]
Finally, the cascade training used default parameters:
.. code-block:: sh
-$ ./bin/train_detector.py -vv
+$ train_detector.py -vv