bob / bob.pad.base

Commit 5bae9b02
Authored Jun 06, 2018 by Theophile GENTILHOMME

[pad_commands,vuln_commands,test_commands,experiments] Modifications
related to --eval option default change from True to False

Parent: 231f65b7
Pipeline #20854 passed in 42 minutes and 11 seconds
Changes: 4 files
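For context, the `--eval`/`-e` flag comes from the shared `common_options` module. This commit adapts every command, test, and documentation snippet to that flag's default flipping from True to False: evaluation scores must now be requested explicitly. A minimal sketch of what such a click option factory can look like (the body below is an assumption for illustration, not the actual common_options code):

    # Hedged sketch of an option factory like common_options.eval_option().
    # default=False is the behavior this commit adapts to; everything else
    # here is assumed for illustration.
    import click

    def eval_option(**kwargs):
        """Flag stating whether eval-score files are provided."""
        def decorator(func):
            return click.option(
                '-e', '--eval', 'evaluation', is_flag=True,
                default=False, show_default=True,  # previously defaulted to True
                help='If set, evaluation scores are provided.',
                **kwargs)(func)
        return decorator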
bob/pad/base/script/pad_commands.py
@@ -65,17 +65,17 @@ def roc(ctx, scores, evaluation, **kargs):
     computed using :py:func:`bob.measure.roc`.

     You need to provide one or more development score file(s) for each
-    experiment. You can also provide eval files along with dev files. If
-    only dev scores are used, the flag `--no-evaluation` is required in
-    that case. Files must be in 4-column format, see
+    experiment. You can also provide eval files along with dev files. If
+    evaluation scores are used, the flag `--eval` is required in that
+    case. Files must be in 4-column format, see
     :py:func:`bob.bio.base.score.load.four_column`

     Examples:
         $ bob pad roc -v dev-scores

-        $ bob pad roc -v dev-scores1 eval-scores1 dev-scores2
+        $ bob pad roc -e -v dev-scores1 eval-scores1 dev-scores2
         eval-scores2

-        $ bob pad roc -v -o my_roc.pdf dev-scores1 eval-scores1
+        $ bob pad roc -e -v -o my_roc.pdf dev-scores1 eval-scores1
     """
     process = figure.Roc(ctx, scores, evaluation, load.split)
     process.run()
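All plotting commands in this file share one pattern: a figure class receives the click context, the score file list, the evaluation flag, and a score-loading function, then renders everything in run(). A hedged sketch of that shape (only the call figure.Roc(ctx, scores, evaluation, load.split) comes from the diff; the class body is an illustrative assumption):

    # Hedged sketch of the measure-figure pattern used by roc/det/hist.
    class Roc:
        def __init__(self, ctx, scores, evaluation, func_load):
            self.ctx = ctx                # click context carrying plot options
            self.scores = scores          # 4-column score files (dev[, eval])
            self.evaluation = evaluation  # True when eval scores are provided
            self.func_load = func_load    # e.g. bob.bio.base.score.load.split

        def run(self):
            for path in self.scores:
                negatives, positives = self.func_load(path)
                # ... compute ROC points from (negatives, positives) and plot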
@@ -109,15 +109,15 @@ def det(ctx, scores, evaluation, **kargs):
     (false positives on the x-axis and false negatives on the y-axis)

     You need to provide one or more development score file(s) for each
-    experiment. You can also provide eval files along with dev files. If
-    only dev-scores are used, the flag `--no-evaluation` is required in
-    that case. Files must be in 4-column format, see
+    experiment. You can also provide eval files along with dev files. If
+    eval-scores are used, the flag `--eval` is required in that case.
+    Files must be in 4-column format, see
     :py:func:`bob.bio.base.score.load.four_column` for details.

     Examples:
         $ bob pad det -v dev-scores eval-scores

-        $ bob pad det -v scores-{dev,eval}
+        $ bob pad det -e -v scores-{dev,eval}
     """
     process = figure.DetPad(ctx, scores, evaluation, load.split)
     process.run()
@@ -130,6 +130,7 @@ def det(ctx, scores, evaluation, **kargs):
+@common_options.eval_option()
 @common_options.n_bins_option()
 @common_options.criterion_option()
 @common_options.no_line_option()
 @common_options.far_option()
 @common_options.thresholds_option()
 @common_options.const_layout_option()
@@ -147,7 +148,7 @@ def hist(ctx, scores, evaluation, **kwargs):
     You need to provide one or more development score file(s) for each
-    experiment. You can also provide eval files along with dev files. If
-    only dev scores are provided, you must use flag `--no-evaluation`.
+    experiment. You can also provide eval files along with dev files. If
+    evaluation scores are provided, you must use flag `--eval`.

     By default, when eval-scores are given, only eval-scores histograms are
     displayed with threshold line

@@ -156,10 +157,10 @@ def hist(ctx, scores, evaluation, **kwargs):
     Examples:
         $ bob pad hist -v dev-scores

-        $ bob pad hist -v dev-scores1 eval-scores1 dev-scores2
+        $ bob pad hist -e -v dev-scores1 eval-scores1 dev-scores2
         eval-scores2

-        $ bob pad hist -v --criterion min-hter dev-scores1 eval-scores1
+        $ bob pad hist -e -v --criterion min-hter dev-scores1 eval-scores1
     """
     process = figure.HistPad(ctx, scores, evaluation, load.split)
     process.run()
@@ -282,15 +283,17 @@ def evaluate(ctx, scores, evaluation, **kwargs):
     * development scores
     * evaluation scores

+    When evaluation scores are provided, ``--eval`` must be passed.

     Examples:
         $ bob pad evaluate -v dev-scores

-        $ bob pad evaluate -v scores-dev1 scores-eval1 scores-dev2
+        $ bob pad evaluate -e -v scores-dev1 scores-eval1 scores-dev2
         scores-eval2

-        $ bob pad evaluate -v /path/to/sys-{1,2,3}/scores-{dev,eval}
+        $ bob pad evaluate -e -v /path/to/sys-{1,2,3}/scores-{dev,eval}

-        $ bob pad evaluate -v -l metrics.txt -o my_plots.pdf dev-scores eval-scores
+        $ bob pad evaluate -e -v -l metrics.txt -o my_plots.pdf dev-scores eval-scores
     '''
     # first time erase if existing file
     click.echo("Computing metrics...")
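With the new default, the meaning of the positional score files depends entirely on the flag: without -e every file is a dev-score file; with -e the files are read as dev/eval pairs, as the docstring examples above show. A self-contained toy illustration of that interpretation (the helper name and pairing rule are assumptions, not bob.pad.base API):

    # Toy illustration of how the --eval flag changes file interpretation.
    def group_scores(scores, evaluation):
        """Pair score files: (dev, eval) with -e, (dev, None) without."""
        if not evaluation:
            return [(dev, None) for dev in scores]
        if len(scores) % 2:
            raise ValueError('with --eval, provide dev/eval files in pairs')
        return list(zip(scores[0::2], scores[1::2]))

    # Mirrors the docstring examples above:
    assert group_scores(['dev1', 'eval1', 'dev2', 'eval2'], True) == \
        [('dev1', 'eval1'), ('dev2', 'eval2')]
    assert group_scores(['dev1', 'dev2'], False) == \
        [('dev1', None), ('dev2', None)]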
bob/pad/base/script/vuln_commands.py
@@ -310,9 +310,8 @@ def epsc(ctx, scores, criteria, var_param, fixed_param, three_d, sampling,
 @click.command()
-@common_options.scores_argument(nargs=-1, min_arg=2)
+@common_options.scores_argument(nargs=-1, min_arg=2, force_eval=True)
 @common_options.output_plot_file_option(default_out='vuln_hist.pdf')
-@common_options.eval_option()
 @common_options.n_bins_option()
 @common_options.criterion_option()
 @common_options.thresholds_option()
@@ -333,7 +332,7 @@ def epsc(ctx, scores, criteria, var_param, fixed_param, three_d, sampling,
 @common_options.style_option()
 @verbosity_option()
 @click.pass_context
-def hist(ctx, scores, evaluation, **kwargs):
+def hist(ctx, scores, **kwargs):
     '''Vulnerability analysis distributions.

     Plots the histogram of score distributions. You need to provide 4 score
@@ -348,15 +347,10 @@ def hist(ctx, scores, evaluation, **kwargs):
     See :ref:`bob.pad.base.vulnerability` in the documentation for a guide on
     vulnerability analysis.

-    You need to provide one or more development score file(s) for each
-    experiment. You can also provide eval files along with dev files. If
-    only dev-scores are used, setting the flag `--no-evaluation` is
-    required in that case.
-
     By default, when eval-scores are given, only eval-scores histograms are
     displayed with threshold line
-    computed from dev-scores. If you want to display dev-scores distributions
-    as well, use ``--show-dev`` option.
+    computed from dev-scores.

     Examples:
@@ -365,14 +359,13 @@ def hist(ctx, scores, evaluation, **kwargs):
         $ bob vuln vuln_hist -v {licit,spoof}/scores-{dev,eval}
     '''
-    process = figure.HistVuln(ctx, scores, evaluation, load.split)
+    process = figure.HistVuln(ctx, scores, True, load.split)
     process.run()


 @click.command(context_settings=dict(token_normalize_func=lambda x: x.lower()))
 @common_options.scores_argument(min_arg=2, force_eval=True, nargs=-1)
-@common_options.eval_option()
 @common_options.table_option()
 @common_options.criterion_option(lcriteria=['bpcer20', 'eer', 'min-hter'])
 @common_options.thresholds_option()
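The vulnerability commands go one step further: instead of exposing the flag, they mark the scores argument with force_eval=True and pin the evaluation parameter to True (see HistVuln(ctx, scores, True, load.split) above). A hedged sketch of what a force_eval-style argument could enforce (an assumption about common_options, not its actual code):

    # Hedged sketch: a scores argument that makes eval files mandatory, so
    # the command can hard-code evaluation=True. Assumed for illustration.
    import click

    def scores_argument(min_arg=1, force_eval=False, **kwargs):
        def decorator(func):
            def callback(ctx, param, value):
                if len(value) < min_arg:
                    raise click.BadParameter(
                        'provide at least %d score file(s)' % min_arg)
                # With forced evaluation, dev/eval files must come in pairs.
                if force_eval and len(value) % 2:
                    raise click.BadParameter(
                        'dev and eval score files must be provided in pairs')
                return value
            return click.argument('scores', callback=callback, **kwargs)(func)
        return decorator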
bob/pad/base/test/test_commands.py
@@ -13,7 +13,7 @@ def test_det_pad():
                                               'data/licit/scores-eval')
     runner = CliRunner()
     with runner.isolated_filesystem():
-        result = runner.invoke(pad_commands.det, ['--output',
+        result = runner.invoke(pad_commands.det, ['-e', '--output',
                                                   'DET.pdf', licit_dev,
                                                   licit_test])
         assert result.exit_code == 0, (result.exit_code, result.output)
@@ -76,19 +76,20 @@ def test_hist_pad():
                                               'data/spoof/scores-eval')
     runner = CliRunner()
     with runner.isolated_filesystem():
-        result = runner.invoke(pad_commands.hist, ['--no-evaluation',
-                                                   licit_dev])
+        result = runner.invoke(pad_commands.hist, [licit_dev])
         assert result.exit_code == 0, (result.exit_code, result.output)
     with runner.isolated_filesystem():
         result = runner.invoke(pad_commands.hist, ['--criterion', 'min-hter',
                                                    '--output', 'HISTO.pdf',
-                                                   '-b', '30,auto',
-                                                   '--no-evaluation',
+                                                   '-b', '30,auto',
                                                    licit_dev, spoof_dev])
         assert result.exit_code == 0, (result.exit_code, result.output)
     with runner.isolated_filesystem():
-        result = runner.invoke(pad_commands.hist, ['--criterion', 'eer',
-                                                   '--output', 'HISTO.pdf',
+        result = runner.invoke(pad_commands.hist, ['-e', '--criterion', 'eer',
+                                                   '--output', 'HISTO.pdf',
                                                    '-b', '30', licit_dev,
                                                    licit_test, spoof_dev,
                                                    spoof_test])
@@ -145,7 +146,7 @@ def test_metrics_pad():
     with runner.isolated_filesystem():
-        result = runner.invoke(pad_commands.metrics, [licit_dev, licit_test])
+        result = runner.invoke(pad_commands.metrics, ['-e', licit_dev,
+                                                      licit_test])
         assert result.exit_code == 0, (result.exit_code, result.output)
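To reproduce the tests' expectations interactively, the commands can be driven through click's CliRunner exactly as the suite does; a minimal sketch (the score paths are placeholders, so expect a failure exit code unless real 4-column files are supplied):

    from click.testing import CliRunner
    from bob.pad.base.script import pad_commands

    runner = CliRunner()
    with runner.isolated_filesystem():
        # Without -e, every positional file is treated as dev scores.
        dev_only = runner.invoke(pad_commands.metrics, ['my-scores-dev'])
        print(dev_only.exit_code, dev_only.output)

        # With -e, dev and eval files are expected together.
        both = runner.invoke(pad_commands.metrics,
                             ['-e', 'my-scores-dev', 'my-scores-eval'])
        print(both.exit_code, both.output)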
@@ -175,7 +176,6 @@ def test_epc_vuln():
     assert result.exit_code == 0, (result.exit_code, result.output)

-
 def test_epsc_vuln():
     licit_dev = pkg_resources.resource_filename('bob.pad.base.test',
                                                 'data/licit/scores-dev')
doc/experiments.rst
@@ -137,7 +137,7 @@ For example:
 .. code-block:: sh

-    $ bob pad metrics scores-{dev,eval} --legends ExpA
+    $ bob pad metrics -e scores-{dev,eval} --legends ExpA

     Threshold of 11.639561 selected with the bpcer20 criteria
     ====== ======================== ===================
@@ -167,14 +167,14 @@ For example:
     ====== ======================== ===================

 .. note::
-    You can compute analysis on development set(s) only by passing option
-    ``--no-evaluation``. See metrics --help for further options.
+    When evaluation scores are provided, the ``--eval`` option must be
+    passed. See metrics --help for further options.

 Metrics for vulnerability analysis are also available through:

 .. code-block:: sh

-    $ bob vuln metrics .../{licit,spoof}/scores-{dev,test}
+    $ bob vuln metrics -e .../{licit,spoof}/scores-{dev,test}

     ========= ===================
     None      EER (threshold=4)
@@ -234,7 +234,7 @@ For example, to generate an EPC curve from development and evaluation datasets:
 .. code-block:: sh

-    $ bob pad epc -o 'my_epc.pdf' scores-{dev,eval}
+    $ bob pad epc -e -o 'my_epc.pdf' scores-{dev,eval}

 where `my_epc.pdf` will contain EPC curves for all the experiments.
@@ -243,7 +243,7 @@ datasets. For example, to generate an EPSC curve:
 .. code-block:: sh

-    $ bob vuln epsc .../{licit,spoof}/scores-{dev,eval}
+    $ bob vuln epsc -e .../{licit,spoof}/scores-{dev,eval}

 .. note::