Commit 232f4a21 authored by André Anjos

Merge branch 'issue-17' into 'master'

Using the new bob.bio API

Closes #17

See merge request !43
parents 0693a692 93c487b6
@@ -147,11 +147,18 @@ performance:

 .. code-block:: sh

-   $ bob_eval_threshold.py <path-to>/verafinger/rlt/Nom/nonorm/scores-dev
-   ('Threshold:', 0.31835292)
-   FAR : 23.636% (11388/48180)
-   FRR : 23.636% (52/220)
-   HTER: 23.636%
+   $ bob bio metrics <path-to>/verafinger/rlt/Nom/nonorm/scores-dev --no-evaluation
+   [Min. criterion: EER ] Threshold on Development set `scores-dev`: 0.31835292
+   ====== ========================
+   None   Development scores-dev
+   ====== ========================
+   FtA    0.0%
+   FMR    23.6% (11388/48180)
+   FNMR   23.6% (52/220)
+   FAR    23.6%
+   FRR    23.6%
+   HTER   23.6%
+   ====== ========================

 Maximum Curvature with Miura Matching
@@ -180,11 +187,19 @@ we obtained:

 .. code-block:: sh

-   $ bob_eval_threshold.py <path-to>/verafinger/mc/Nom/nonorm/scores-dev
-   ('Threshold:', 0.0737283)
-   FAR : 4.388% (2114/48180)
-   FRR : 4.545% (10/220)
-   HTER: 4.467%
+   $ bob bio metrics <path-to>/verafinger/mc/Nom/nonorm/scores-dev --no-evaluation
+   [Min. criterion: EER ] Threshold on Development set `scores-dev`: 7.372830e-02
+   ====== ========================
+   None   Development scores-dev
+   ====== ========================
+   FtA    0.0%
+   FMR    4.4% (2116/48180)
+   FNMR   4.5% (10/220)
+   FAR    4.4%
+   FRR    4.5%
+   HTER   4.5%
+   ====== ========================

 Wide Line Detector with Miura Matching
 ======================================
@@ -192,7 +207,7 @@ Wide Line Detector with Miura Matching

 You can find the description of this method on the paper from Huang *et al.*
 [HDLTL10]_.

-To run the baseline on the `VERA fingervein`_ database, using the ``NOM``
+To run the baseline on the `VERA fingervein`_ database, using the ``Nom``
 protocol like above, do the following:
@@ -213,11 +228,17 @@ we obtained:

 .. code-block:: sh

-   $ bob_eval_threshold.py <path-to>/verafinger/wld/NOM/nonorm/scores-dev
-   ('Threshold:', 0.240269475)
-   FAR : 9.770% (4707/48180)
-   FRR : 9.545% (21/220)
-   HTER: 9.658%
+   $ bob bio metrics <path-to>/verafinger/wld/Nom/nonorm/scores-dev --no-evaluation
+   [Min. criterion: EER ] Threshold on Development set `scores-dev`: 2.402707e-01
+   ====== ========================
+   None   Development scores-dev
+   ====== ========================
+   FtA    0.0%
+   FMR    9.8% (4726/48180)
+   FNMR   10.0% (22/220)
+   FAR    9.8%
+   FRR    10.0%
+   HTER   9.9%
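The FMR/FNMR/FAR/FRR/HTER figures printed by ``bob bio metrics`` in the hunks above follow the standard biometric error-rate definitions (with a failure-to-acquire rate of zero, FAR equals FMR and FRR equals FNMR, which is why the pairs repeat in the tables). As a rough illustration only, not the bob.bio implementation, and using invented toy scores, the rates derive from development-set scores at a fixed threshold like this:

```python
def metrics_at_threshold(genuine, impostor, threshold):
    """Error rates at a fixed decision threshold (scores >= threshold accept)."""
    fmr = sum(s >= threshold for s in impostor) / len(impostor)  # impostors wrongly matched
    fnmr = sum(s < threshold for s in genuine) / len(genuine)    # genuines wrongly rejected
    hter = (fmr + fnmr) / 2.0                                    # half total error rate
    return fmr, fnmr, hter

# Toy scores, invented for illustration only:
genuine = [0.9, 0.8, 0.7, 0.2]
impostor = [0.1, 0.3, 0.4, 0.6]
fmr, fnmr, hter = metrics_at_threshold(genuine, impostor, 0.5)
print(fmr, fnmr, hter)  # 0.25 0.25 0.25
```

At the EER criterion the threshold is chosen on the development set so that FMR and FNMR are (approximately) equal, which is why the paired percentages in the tables above nearly coincide.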
 Results for other Baselines
@@ -233,7 +254,7 @@ correspond to the equal-error rate on the development set, in percentage):

 Feature Extractor        Full   B      Nom    1vsall nom
 ======================== ====== ====== ====== ====== ======
 Repeated Line Tracking   14.6   13.4   23.6   3.4    1.4
-Wide Line Detector       5.8    5.6    9.7    2.8    1.9
+Wide Line Detector       5.8    5.6    9.9    2.8    1.9
 Maximum Curvature        2.5    1.4    4.5    0.9    0.4
 ======================== ====== ====== ====== ====== ======
@@ -347,11 +368,11 @@ When used to run an experiment,

 :py:class:`bob.bio.vein.preprocessor.WatershedMask` requires you provide a
 *pre-trained* neural network model that presets the markers before
 watershedding takes place. In order to create one, you can run the program
-`markdet.py`:
+`bob_bio_vein_markdet.py`:

 .. code-block:: sh

-   $ markdet.py --hidden=20 --samples=500 fv3d central dev
+   $ bob_bio_vein_markdet.py --hidden=20 --samples=500 fv3d central dev

 You input, as arguments to this application, the database, protocol and subset
 name you wish to use for training the network. The data is loaded observing a
@@ -367,9 +388,9 @@ Region of Interest Goodness of Fit
 ==================================

 Automatic region of interest (RoI) finding and cropping can be evaluated using
-a couple of scripts available in this package. The program ``compare_rois.py``
-compares two sets of ``preprocessed`` images and masks, generated by
-*different* preprocessors (see
+a couple of scripts available in this package. The program
+``bob_bio_vein_compare_rois.py`` compares two sets of ``preprocessed`` images
+and masks, generated by *different* preprocessors (see
 :py:class:`bob.bio.base.preprocessor.Preprocessor`) and calculates a few
 metrics to help you determine how both techniques compare. Normally, the
 program is used to compare the result of automatic RoI to manually annotated
@@ -379,7 +400,7 @@ extracted ones. E.g.:

 .. code-block:: sh

-   $ compare_rois.py ~/verafinger/mc_annot/preprocessed ~/verafinger/mc/preprocessed
+   $ bob_bio_vein_compare_rois.py ~/verafinger/mc_annot/preprocessed ~/verafinger/mc/preprocessed
    Jaccard index: 9.60e-01 +- 5.98e-02
    Intersection ratio (m1): 9.79e-01 +- 5.81e-02
    Intersection ratio of complement (m2): 1.96e-02 +- 1.53e-02
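The Jaccard index in the output above is the standard overlap measure between two masks; m1 and m2 read as directional intersection ratios. The sketch below (NumPy, with invented toy masks) shows one plausible reading of these metrics; the exact formulas ``bob_bio_vein_compare_rois.py`` implements may differ:

```python
import numpy as np

def roi_overlap(ref, test):
    """Overlap metrics between two boolean RoI masks (one possible reading)."""
    inter = np.logical_and(ref, test).sum()
    jaccard = inter / np.logical_or(ref, test).sum()    # |A∩B| / |A∪B|
    m1 = inter / ref.sum()                              # share of reference covered by test
    m2 = np.logical_and(~ref, test).sum() / test.sum()  # share of test outside reference
    return float(jaccard), float(m1), float(m2)

# Two toy 4x4 masks, invented for illustration: 2x2 reference vs. 2x3 detection
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:3] = True
test = np.zeros((4, 4), dtype=bool); test[1:3, 1:4] = True
j, m1, m2 = roi_overlap(ref, test)
```

A perfect match gives a Jaccard index of 1.0; values near 0.96 with small m2, as reported above, indicate the automatic RoI closely tracks the manual annotation.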
@@ -399,12 +420,12 @@ metrics.

 Pipeline Display
 ================

-You can use the program ``view_sample.py`` to display the images after
-full processing using:
+You can use the program ``bob_bio_vein_view_sample.py`` to display the images
+after full processing using:

 .. code-block:: sh

-   $ ./bin/view_sample.py --save=output-dir verafinger /path/to/processed/directory 030-M/030_L_1
+   $ bob_bio_vein_view_sample.py --save=output-dir verafinger /path/to/processed/directory 030-M/030_L_1
    $ # open output-dir

 And you should be able to view images like these (example taken from the Vera
@@ -415,7 +436,7 @@ feature extractor):
    :scale: 50%

    Example RoI overlayed on finger vein image of the Vera fingervein database,
-   as produced by the script ``view_sample.py``.
+   as produced by the script ``bob_bio_vein_view_sample.py``.

 .. figure:: img/binarized.*
@@ -50,11 +50,11 @@ setup(
       ],

       'console_scripts': [
-        'compare_rois.py = bob.bio.vein.script.compare_rois:main',
-        'view_sample.py = bob.bio.vein.script.view_sample:main',
-        'blame.py = bob.bio.vein.script.blame:main',
-        'markdet.py = bob.bio.vein.script.markdet:main',
-        'watershed_mask.py = bob.bio.vein.script.watershed_mask:main',
+        'bob_bio_vein_compare_rois.py = bob.bio.vein.script.compare_rois:main',
+        'bob_bio_vein_view_sample.py = bob.bio.vein.script.view_sample:main',
+        'bob_bio_vein_blame.py = bob.bio.vein.script.blame:main',
+        'bob_bio_vein_markdet.py = bob.bio.vein.script.markdet:main',
+        'bob_bio_vein_watershed_mask.py = bob.bio.vein.script.watershed_mask:main',
       ]
     },
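For context on the hunk above: a ``console_scripts`` entry point of the form ``name = package.module:func`` makes setuptools generate an executable ``name`` that imports ``package.module`` and calls ``func()``. A minimal, hypothetical skeleton of such a target function (not the real ``compare_rois`` implementation) looks like:

```python
import sys

def main(argv=None):
    """Hypothetical body for an entry point such as
    'bob_bio_vein_compare_rois.py = bob.bio.vein.script.compare_rois:main'.
    The real script parses two 'preprocessed' directories and prints
    overlap metrics; here we only sketch the argument handling.
    """
    argv = sys.argv[1:] if argv is None else argv
    if len(argv) != 2:
        print("usage: bob_bio_vein_compare_rois.py <dir1> <dir2>")
        return 1  # non-zero exit status on bad usage
    # ... load masks from both directories and compare them ...
    return 0
```

Prefixing all entry-point names with ``bob_bio_vein_`` avoids collisions with similarly named scripts from other packages installed in the same environment.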