Commit 26689f96 authored by André Anjos

[doc] Add max. curvature baseline

As the tool runs, you'll see printouts that show how it advances through
preprocessing, feature extraction and matching. On a 4-core machine, using 4
parallel tasks, it takes around 2 hours to process this baseline with the
current code implementation.

To complete the evaluation, run the commands below, which will output the equal
error rate (EER) and plot the detector error trade-off (DET) curve with the
performance:
.. code-block:: sh

   $ ./bin/bob_eval_threshold.py --scores <path-to>/vera/rlt/NOM/nonorm/scores-dev --criterium=eer
   ('Threshold:', 0.32023322499999995)
   FAR : 24.318% (46866/192720)
   FRR : 24.318% (107/440)
   HTER: 24.318%

   $ ./bin/evaluate.py --dev-files <path-to>/vera/rlt/NOM/nonorm/scores-dev --det det.pdf -l "vera-nom-mnm04" -rr
   The Recognition Rate of the development set of 'rlt' is 48.409%
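The ``--criterium=eer`` option picks the threshold at which the false accept
rate (FAR) and the false reject rate (FRR) are as close as possible. A minimal
sketch of that criterion over raw score arrays (illustrative only;
``eer_threshold`` is a made-up helper, not part of the Bob API):

```python
import numpy as np

def eer_threshold(negatives, positives):
    """Pick the threshold where FAR and FRR are closest (EER criterion).

    negatives: impostor scores; positives: genuine scores.
    Illustrative sketch only -- not the Bob implementation.
    """
    best_t, best_gap = None, float("inf")
    for t in np.sort(np.concatenate([negatives, positives])):
        far = np.mean(negatives >= t)  # impostors wrongly accepted
        frr = np.mean(positives < t)   # genuines wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    return best_t

# Tiny example: the rates cross at 0.4, where FAR = FRR = 25%.
print(eer_threshold(np.array([0.1, 0.2, 0.3, 0.4]),
                    np.array([0.35, 0.5, 0.6, 0.7])))
```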
To view the DET curve stored in the output file, do the following (on a Linux
machine):

.. code-block:: sh

   $ xdg-open det.pdf #to view the DET curve


Maximum Curvature with Miura Matching
=====================================

You can find the description of this method in the paper by Miura *et al.*
[MNM05]_.

To run the baseline on the `VERA fingervein`_ database, using the ``NOM``
protocol (called ``Full`` in [TVM14]_), do the following:

.. code-block:: sh

   $ ./bin/verify.py --database=vera --protocol=NOM --preprocessor=nopp --extractor=maximumcurvature --algorithm=match-mc --sub-directory="mc" --verbose --verbose

.. tip::

   If you have more processing cores on your local machine and don't want to
   submit your job for SGE execution, you can run it in parallel (using 4
   parallel tasks) by adding the options ``--parallel=4 --nice=10``.
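The extractor configured above builds on the key observation in [MNM05]_: vein
centre lines appear as local maxima of the curvature of cross-sectional
profiles of the finger image. A minimal 1-D sketch of that idea (illustrative
only, not the package's implementation):

```python
import numpy as np

def curvature(profile):
    """Curvature of a 1-D cross-sectional profile -- a sketch of the idea
    behind maximum-curvature extraction, not bob.bio.vein code."""
    d1 = np.gradient(profile)            # first derivative
    d2 = np.gradient(d1)                 # second derivative
    return d2 / (1.0 + d1 ** 2) ** 1.5   # kappa(z)

z = np.linspace(-3.0, 3.0, 61)
profile = -np.exp(-z ** 2)               # a dark valley, i.e. a "vein"
k = curvature(profile)
print(z[np.argmax(k)])                   # curvature peaks at the valley centre
```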
This command line selects and runs the following implementations for the
toolchain:

* Database: Use the base Bob API for the VERA database implementation,
  protocol variant ``NOM``, which corresponds to the ``Full`` evaluation
  protocol described in [TVM14]_
* Preprocessor: Simple finger cropping, with no extra post-processing, as
  defined in [LLP09]_
* Feature extractor: Maximum curvature, as explained in [MNM05]_
* Matching algorithm: "Miura" matching, as explained in the same paper
* Subdirectory: This is the subdirectory in which the scores and intermediate
  results of this baseline will be stored.
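The "Miura" matching step can be pictured as sliding a cropped region of the
probe's binary vein image over the enrolled template and keeping the best
normalised overlap, so that identical images score 0.5. A rough sketch over
binary numpy arrays (the function name and the ``ch``/``cw`` margins are
illustrative, not the bob.bio.vein API):

```python
import numpy as np

def miura_match(template, probe, ch=8, cw=8):
    """Slide the central region of ``probe`` over ``template`` and return
    the best overlap, normalised so a perfect match scores 0.5.
    Rough sketch only -- not the bob.bio.vein implementation."""
    h, w = probe.shape
    crop = probe[ch:h - ch, cw:w - cw]        # central region of the probe
    best = 0.0
    for dy in range(2 * ch + 1):              # every vertical displacement
        for dx in range(2 * cw + 1):          # every horizontal displacement
            ref = template[dy:dy + h - 2 * ch, dx:dx + w - 2 * cw]
            overlap = np.sum(crop * ref)      # vein pixels matched
            denom = crop.sum() + ref.sum()
            if denom:                         # avoid 0/0 on empty regions
                best = max(best, overlap / denom)
    return best
```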
As the tool runs, you'll see printouts that show how it advances through
preprocessing, feature extraction and matching. On a 4-core machine, using 4
parallel tasks, it takes around 2 hours to process this baseline with the
current code implementation.

To complete the evaluation, run the command below, which will output the equal
error rate (EER) and plot the detector error trade-off (DET) curve with the
performance:
.. code-block:: sh

   $ ./bin/bob_eval_threshold.py --scores <path-to>/vera/mc/NOM/nonorm/scores-dev --criterium=eer
   ('Threshold:', 0.320748535)
   FAR : 26.478% (12757/48180)
   FRR : 26.364% (58/220)
   HTER: 26.421%
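The HTER printed above is simply the average of FAR and FRR at the selected
threshold, which you can reproduce from the counts in the printout:

```python
# Counts taken from the bob_eval_threshold.py printout above.
far = 12757 / 48180        # impostor comparisons above the threshold
frr = 58 / 220             # genuine comparisons below the threshold
hter = (far + frr) / 2     # half total error rate

print(f"FAR : {far:.3%}")   # FAR : 26.478%
print(f"FRR : {frr:.3%}")   # FRR : 26.364%
print(f"HTER: {hter:.3%}")  # HTER: 26.421%
```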
Available Resources
===================