bob / bob.bio.base · Commit dcc72296

Add evaluate command

Authored 6 years ago by Theophile GENTILHOMME; parent 73a7b02e.
Part of 2 merge requests: !146 "Add 4-5-col files related functionalities and add click commands" and !143 "Set of click commands for bio base".

Showing 3 changed files with 243 additions and 5 deletions:

    bob/bio/base/script/commands.py      +177  −3
    bob/bio/base/test/test_commands.py    +64  −0
    setup.py                               +2  −2
bob/bio/base/script/commands.py  (+177 −3)
    @@ -23,7 +23,7 @@ FUNC_CMC = lambda x: load.load_files(x, load.cmc)
     @common_options.far_option()
     @verbosity_option()
     @click.pass_context
    -def metrics(ctx, scores, criter, test, **kargs):
    +def metrics(ctx, scores, test, **kargs):
         """Prints a single output line that contains all info for a given
         criterion (eer or hter).

    @@ -44,7 +44,7 @@ def metrics(ctx, scores, criter, test, **kargs):
             $ bob bio metrics --test {dev,test}-scores1 {dev,test}-scores2
         """
    -    if criter == 'rr':
    +    if 'criter' in ctx.meta and ctx.meta['criter'] == 'rr':
             process = bio_figure.Metrics(ctx, scores, test, FUNC_CMC)
         else:
             process = bio_figure.Metrics(ctx, scores, test, FUNC_SPLIT)

    @@ -189,7 +189,6 @@ def hist(ctx, scores, test, **kwargs):
     @common_options.axes_val_option(dflt=None)
     @common_options.axis_fontsize_option()
     @common_options.x_rotation_option()
    -@common_options.fmr_line_at_option()
     @verbosity_option()
     @click.pass_context
     def cmc(ctx, scores, test, **kargs):

    @@ -263,3 +262,178 @@ def dic(ctx, scores, test, **kargs):
         """
         process = bio_figure.Dic(ctx, scores, test, FUNC_CMC)
         process.run()
    +
    +
    +@click.command()
    +@common_options.scores_argument(nargs=-1)
    +@common_options.output_plot_file_option(default_out='hist.pdf')
    +@common_options.test_option()
    +@common_options.n_bins_option()
    +@common_options.criterion_option()
    +@common_options.axis_fontsize_option()
    +@common_options.thresholds_option()
    +@verbosity_option()
    +@click.pass_context
    +def hist(ctx, scores, test, **kwargs):
    +    """Plots histograms of positives and negatives along with the threshold
    +    criterion.
    +
    +    You need to provide one or more development score file(s) for each
    +    experiment. You can also provide test files along with dev files, but
    +    the flag `--test` is required in that case.
    +
    +    Examples:
    +        $ bob bio hist dev-scores
    +
    +        $ bob bio hist --test dev-scores1 test-scores1 dev-scores2
    +        test-scores2
    +
    +        $ bob bio hist --test --criter hter dev-scores1 test-scores1
    +    """
    +    process = measure_figure.Hist(ctx, scores, test, FUNC_SPLIT)
    +    process.run()
    +
    +
    +@click.command()
    +@common_options.scores_argument(nargs=-1)
    +@common_options.titles_option()
    +@common_options.sep_dev_test_option()
    +@common_options.table_option()
    +@common_options.test_option()
    +@common_options.output_plot_metric_option()
    +@common_options.output_plot_file_option(default_out='eval_plots.pdf')
    +@common_options.points_curve_option()
    +@common_options.fmr_line_at_option()
    +@common_options.cost_option()
    +@common_options.rank_option()
    +@common_options.cmc_option()
    +@common_options.bool_option(
    +    'metrics', 'M', 'If set, computes a table of thresholds with EER, HTER '
    +    '(and FAR, if ``--far-value`` is provided).')
    +@common_options.far_option()
    +@common_options.bool_option(
    +    'cllr', 'x', 'If given, Cllr and minCllr will be computed.')
    +@common_options.bool_option(
    +    'mindcf', 'm', 'If given, minDCF will be computed.')
    +@common_options.bool_option(
    +    'rr', 'r', 'If given, the Recognition Rate will be computed.')
    +@common_options.bool_option(
    +    'hist', 'H', 'If given, score histograms will be generated.')
    +@common_options.bool_option(
    +    'roc', 'R', 'If given, ROC will be generated.')
    +@common_options.bool_option(
    +    'det', 'D', 'If given, DET will be generated.')
    +@common_options.bool_option(
    +    'epc', 'E', 'If given, EPC will be generated.')
    +@common_options.bool_option(
    +    'dic', 'O', 'If given, DIC will be generated.')
    +@verbosity_option()
    +@click.pass_context
    +def evaluate(ctx, scores, test, **kwargs):
    +    '''Evaluates score files, runs error analysis on score sets and plots
    +    curves.
    +
    +    \b
    +    1. Computes the threshold using either the EER, min. HTER or FAR value
    +       criteria on development set scores
    +    2. Applies the above threshold on test set scores to compute the HTER,
    +       if a test-score set is provided
    +    3. Computes Cllr and minCllr, minDCF, and the recognition rate (if cmc
    +       scores are provided)
    +    4. Reports error metrics in the console or in a log file
    +    5. Plots ROC, EPC, DET, score distributions, CMC (if cmc) and DIC (if
    +       cmc) curves to a multi-page PDF file
    +
    +    You need to provide 2 score files for each biometric system, in this
    +    order:
    +
    +    \b
    +    * development scores
    +    * evaluation scores
    +
    +    Examples:
    +        $ bob bio evaluate dev-scores
    +
    +        $ bob bio evaluate -t -l metrics.txt -o my_plots.pdf dev-scores test-scores
    +    '''
    +    log_str = ''
    +    if 'log' in ctx.meta and ctx.meta['log'] is not None:
    +        log_str = ' %s' % ctx.meta['log']
    +
    +    if ctx.meta['metrics']:
    +        # first time: erase the file if it already exists
    +        ctx.meta['open_mode'] = 'w'
    +        click.echo("Computing metrics with EER%s..." % log_str)
    +        ctx.meta['criter'] = 'eer'  # no criterion is passed to evaluate
    +        ctx.invoke(metrics, scores=scores, test=test)
    +        # other times, append the content
    +        ctx.meta['open_mode'] = 'a'
    +        click.echo("Computing metrics with HTER%s..." % log_str)
    +        ctx.meta['criter'] = 'hter'  # no criterion is passed to evaluate
    +        ctx.invoke(metrics, scores=scores, test=test)
    +        if 'far_value' in ctx.meta and ctx.meta['far_value'] is not None:
    +            click.echo("Computing metrics with FAR=%f%s..." %
    +                       (ctx.meta['far_value'], log_str))
    +            ctx.meta['criter'] = 'far'  # no criterion is passed to evaluate
    +            ctx.invoke(metrics, scores=scores, test=test)
    +
    +    if ctx.meta['mindcf']:
    +        click.echo("Computing minDCF%s..." % log_str)
    +        ctx.meta['criter'] = 'mindcf'  # no criterion is passed to evaluate
    +        ctx.invoke(metrics, scores=scores, test=test)
    +
    +    if ctx.meta['cllr']:
    +        click.echo("Computing Cllr and minCllr%s..." % log_str)
    +        ctx.meta['criter'] = 'cllr'  # no criterion is passed to evaluate
    +        ctx.invoke(metrics, scores=scores, test=test)
    +
    +    if ctx.meta['rr']:
    +        click.echo("Computing recognition rate%s..." % log_str)
    +        ctx.meta['criter'] = 'rr'  # no criterion is passed to evaluate
    +        ctx.invoke(metrics, scores=scores, test=test)
    +
    +    # avoid closing the pdf file before all figures are plotted
    +    ctx.meta['closef'] = False
    +
    +    if test:
    +        click.echo("Starting evaluate with dev and test scores...")
    +    else:
    +        click.echo("Starting evaluate with dev scores only...")
    +
    +    if ctx.meta['roc']:
    +        click.echo("Generating ROC in %s..." % ctx.meta['output'])
    +        ctx.forward(roc)  # use the class' default plot settings
    +
    +    if ctx.meta['det']:
    +        click.echo("Generating DET in %s..." % ctx.meta['output'])
    +        ctx.forward(det)  # use the class' default plot settings
    +
    +    if test and ctx.meta['epc']:
    +        click.echo("Generating EPC in %s..." % ctx.meta['output'])
    +        ctx.forward(epc)  # use the class' default plot settings
    +
    +    if ctx.meta['cmc']:
    +        click.echo("Generating CMC in %s..." % ctx.meta['output'])
    +        ctx.forward(cmc)  # use the class' default plot settings
    +
    +    if ctx.meta['dic']:
    +        click.echo("Generating DIC in %s..." % ctx.meta['output'])
    +        ctx.forward(dic)  # use the class' default plot settings
    +
    +    # the last one closes the file
    +    if ctx.meta['hist']:
    +        click.echo("Generating score histograms in %s..." % ctx.meta['output'])
    +        ctx.meta['criter'] = 'hter'  # no criterion is passed to evaluate
    +        ctx.forward(hist)
    +    ctx.meta['closef'] = True
    +
    +    # just to make sure the pdf is closed
    +    if 'PdfPages' in ctx.meta:
    +        ctx.meta['PdfPages'].close()
    +
    +    click.echo("Evaluate successfully completed!")
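
The whole `evaluate` command is glued together through Click's shared context: option values land in `ctx.meta`, and the individual `metrics`, `roc`, `det`, etc. commands are re-run programmatically with `ctx.invoke()` and `ctx.forward()`. A minimal, self-contained sketch of that pattern, with hypothetical command names rather than the repository's actual code:

    import click


    @click.group()
    @click.pass_context
    def cli(ctx):
        # ctx.meta is a plain dict attached to the root context and shared by
        # every command invoked from it
        ctx.meta['criterion'] = None


    @cli.command()
    @click.pass_context
    def metrics(ctx):
        # reads its "criterion" from the shared context instead of an option
        click.echo('metrics with criterion=%s' % ctx.meta.get('criterion'))


    @cli.command()
    @click.pass_context
    def evaluate(ctx):
        # set the shared state, then re-run another command programmatically
        for criterion in ('eer', 'hter'):
            ctx.meta['criterion'] = criterion
            ctx.invoke(metrics)


    if __name__ == '__main__':
        cli()

Running `python sketch.py evaluate` would print one `metrics` line per criterion; this is essentially how the `evaluate` command above reuses `metrics` for EER, HTER, FAR, minDCF, Cllr and the recognition rate.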
bob/bio/base/test/test_commands.py  (+64 −0)
    @@ -238,3 +238,67 @@ def test_dic():
         if result.output:
             click.echo(result.output)
         assert result.exit_code == 0
    +
    +
    +def test_evaluate():
    +    dev1 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                            'data/dev-4col.txt')
    +    dev2 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                            'data/dev-5col.txt')
    +    test1 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                             'data/test-4col.txt')
    +    test2 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                             'data/test-5col.txt')
    +    runner = CliRunner()
    +    with runner.isolated_filesystem():
    +        result = runner.invoke(commands.evaluate,
    +                               ['-l', 'tmp', '-f', 0.03, '-M', '-x', '-m',
    +                                dev1, dev2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate,
    +                               ['-f', 0.02, '-M', '-x', '-m', dev1, dev2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate,
    +                               ['-l', 'tmp', '-f', 0.04, '-M', '-x', '-m',
    +                                '-t', dev1, test1, dev2, test2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate,
    +                               ['-f', 0.01, '-M', '-t', '-x', '-m',
    +                                dev1, test1, dev2, test2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate, [dev1, dev2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate,
    +                               ['-R', '-D', '-H', '-E', '-o', 'PLOTS.pdf',
    +                                dev1, dev2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate,
    +                               ['-t', '-R', '-D', '-H', '-E', '-o',
    +                                'PLOTS.pdf', test1, dev1, test2, dev2])
    +        assert result.exit_code == 0
    +
    +    cmc = pkg_resources.resource_filename('bob.bio.base.test',
    +                                           'data/scores-cmc-4col.txt')
    +    cmc2 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                            'data/scores-cmc-5col.txt')
    +    with runner.isolated_filesystem():
    +        result = runner.invoke(commands.evaluate, ['-r', cmc])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate, ['-r', '-t', cmc, cmc2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate, ['-C', '-t', cmc, cmc2])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate, ['-C', cmc, cmc2])
    +        assert result.exit_code == 0
    +
    +    cmc = pkg_resources.resource_filename('bob.bio.base.test',
    +                                           'data/scores-cmc-4col-open-set.txt')
    +    cmc2 = pkg_resources.resource_filename('bob.bio.base.test',
    +                                            'data/scores-nonorm-openset-dev')
    +    with runner.isolated_filesystem():
    +        result = runner.invoke(commands.evaluate, ['-O', cmc])
    +        assert result.exit_code == 0
    +        result = runner.invoke(commands.evaluate, ['-O', '-t', cmc, cmc2])
    +        assert result.exit_code == 0
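
The test only asserts `result.exit_code == 0`; with `CliRunner`, any exception raised by the command is captured on the result object instead of propagating, so a bare failure gives little information. A small debugging helper, hypothetical and not part of this commit, that prints the captured output and traceback could look like:

    import traceback

    from click.testing import CliRunner


    def invoke_and_report(command, args):
        """Invoke a click command and dump its output/traceback on failure."""
        runner = CliRunner()
        result = runner.invoke(command, args)
        if result.exit_code != 0:
            # CliRunner captures stdout and the raised exception on the result
            print(result.output)
            if result.exc_info is not None:
                traceback.print_exception(*result.exc_info)
        return result

Running the invocations inside `runner.isolated_filesystem()` also matters here: `evaluate` writes PDF and log files, and the temporary working directory keeps the test from littering the source tree.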
setup.py  (+2 −2)
    @@ -138,14 +138,14 @@ setup(
             # bob bio scripts
             'bob.bio.cli': [
                 'annotate = bob.bio.base.script.annotate:annotate',
    -            'evaluate = bob.bio.base.script.evaluate:evaluate',
                 'metrics = bob.bio.base.script.commands:metrics',
                 'roc = bob.bio.base.script.commands:roc',
                 'det = bob.bio.base.script.commands:det',
                 'epc = bob.bio.base.script.commands:epc',
    -            'hist  = bob.bio.base.script.commands:hist',
    +            'hist = bob.bio.base.script.commands:hist',
                 'cmc = bob.bio.base.script.commands:cmc',
                 'dic = bob.bio.base.script.commands:dic',
    +            'evaluate = bob.bio.base.script.commands:evaluate',
             ],
             # annotators
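
The `'bob.bio.cli'` key is an entry-point group: each string maps a subcommand name to the Click command object that implements it, and the top-level `bob bio` command collects whatever is registered in that group at start-up (the aggregation itself lives in the Bob framework, so the exact loader may differ). A sketch of how such a group can be enumerated with `pkg_resources`:

    import pkg_resources

    # hypothetical illustration: list every command registered under the
    # 'bob.bio.cli' entry-point group declared in setup.py
    for entry_point in pkg_resources.iter_entry_points('bob.bio.cli'):
        command = entry_point.load()  # e.g. the click.Command `evaluate`
        print('%s -> %r' % (entry_point.name, command))

With the change above, `bob bio evaluate` resolves to `bob.bio.base.script.commands:evaluate` instead of the old standalone `bob.bio.base.script.evaluate` module.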