bob.measure issues: https://gitlab.idiap.ch/bob/bob.measure/-/issues

Switch to new CI/CD configuration (2022-11-10, Yannick DAYER)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/69

We need to adapt this package to the new CI/CD and package format using citools:
- [x] Modify `pyproject.toml`:
- [x] Add information from `setup.py`,
- [x] Add version from `version.txt`,
- [x] Add requirements from `requirements.txt` and `conda/meta.yaml`,
- [x] Empty `setup.py`:
- Leave the call to `setup()` for compatibility,
- [x] Remove `version.txt`,
- [x] Remove `requirements.txt`,
- [x] Modify `conda/meta.yaml`,
- [x] Import data from `pyproject.toml` (`name`, `version`, ...),
- [x] Add the `source.path` field with value `..`,
- [x] Add the `build.noarch` field with value `python`,
- [x] Edit the `build.script` to only contain `"{{ PYTHON }} -m pip install {{ SRC_DIR }} -vv"`,
- [x] Remove test and documentation commands and comments,
- [x] Modify `.gitlab-ci.yml` to point to citools' `python.yml`,
- Use the fields format instead of the URL,
- [x] Move files to follow the `src` layout:
- [x] the whole `bob` folder to `src/bob/`,
- [x] all the tests in `tests/`,
- [x] the test data files in `tests/data`,
- [x] Edit the tests to load the data correctly, either with `os.path.join(os.path.dirname(__file__), "data/xxx.txt")` or `pkg_resources.resource_filename(__name__, "data/xxx.txt")`,
- [x] Activate the `packages` option in `settings -> general -> visibility` in the Gitlab project,
- [x] Edit the latest doc badges to point to the `sphinx` directory in `doc/[...]/master`:
- [x] in README.md,
- [x] in the GitLab project settings,
- [x] Edit the coverage badges to point to the doc's coverage directory:
- [x] in README.md,
- [x] in the GitLab project settings,
- [x] Ensure the CI pipeline passes.
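For reference, the consolidated `pyproject.toml` could end up looking roughly like this (a sketch only: the package name is real, but the version placeholder and dependency entries are illustrative and must be taken from the existing `setup.py`, `version.txt` and `requirements.txt`):

```toml
[project]
name = "bob.measure"
version = "x.y.z"        # formerly in version.txt
# formerly in requirements.txt / conda/meta.yaml (illustrative entries)
dependencies = [
    "numpy",
    "matplotlib",
]
```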
You can look at [bob.learn.em](https://gitlab.idiap.ch/bob/bob.learn.em) for an example of a ported package.

Milestone: Roadmap to the major version of Bob 12

Partially missing documentation (2021-10-29, Laurent COLBOIS)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/64

Hi, I noticed the docstrings of many `bob.measure` functions disappeared from the doc between Bob 8 and now, e.g.:
Bob 8
![image](/uploads/30049b39551cd51a5015b20548ee9fd3/image.png)
Current
![image](/uploads/a65659b231b22762783138c1fe541b7e/image.png)
I strongly suspect this is linked to the functions that have been wrapped with `@array_jit`; I am guessing the docstring is not carried over when the decorator is applied.
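If that is the cause, preserving the docstring with `functools.wraps` inside the decorator would fix it. A minimal sketch (the `array_jit` below is a hypothetical stand-in, not the real implementation):

```python
import functools

# Hypothetical stand-in for an `array_jit`-style decorator; without
# functools.wraps, the wrapper would hide the wrapped function's docstring.
def array_jit(func):
    @functools.wraps(func)  # copies __doc__, __name__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@array_jit
def farfrr(negatives, positives, threshold):
    """Computes the FAR and FRR for a given threshold."""

print(farfrr.__doc__)  # -> Computes the FAR and FRR for a given threshold.
```

Without the `@functools.wraps` line, `farfrr.__doc__` would be the wrapper's (empty) docstring, which is what Sphinx would then render.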
ping @amohammadi

Assignee: Amir MOHAMMADI

load_scores extremely memory hungry (2021-06-18, Manuel Günther <siebenkopf@googlemail.com>)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/19

The new implementation of score loading is memory hungry, as it stores the whole score file in memory. For large score files that have long `client_id`'s and `label`'s, this might easily be too much for a normal desktop machine.
To split the score file into positives and negatives, most of the information (for example, the `label`s) is completely irrelevant.
I remember that I have had this problem with an older version of `bob.measure`, and this is why I have implemented the score reading using a generator function (i.e., `yield`'ing the file line by line) instead of keeping all information of the score file at the same time.
I will provide a better alternative to the `load_scores` function as a generator function, which does not store the whole score file in memory.

Assignee: Manuel Günther <siebenkopf@googlemail.com>

bob measure pure python (2021-06-10, Tiago de Freitas Pereira)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/63

Hi @bob,
Following our renovation efforts for Bob, shall we make an effort to port this package to be pure python?
The benefits would be:
- Pure Python is more convenient (not platform-dependent) than a compiled extension
- More readable code. Hence, more people would be willing to contribute
- We would get rid of a blitz dependency (that will die at some point)
The drawbacks:
- We would lose the C++ API (does anyone need that?)
- Extra work
Listed below are all the functions that would need to be ported.
- bob::measure::farfrr
- bob::measure::precision_recall
- bob::measure::f_score
- bob::measure::correctlyClassifiedPositives
- bob::measure::correctlyClassifiedNegatives
- bob::measure::minimizingThreshold
- bob::measure::eerThreshold
- bob::measure::eerRocch
- bob::measure::minWeightedErrorRateThreshold
- bob::measure::minHterThreshold
- bob::measure::farThreshold
- bob::measure::frrThreshold
- bob::measure::log_values
- bob::measure::meaningfulThresholds
- bob::measure::roc
- bob::measure::precision_recall_curve
- bob::measure::rocch
- bob::measure::rocch2eer
- bob::measure::roc_for_far
- bob::measure::ppndf
- bob::measure::det
- bob::measure::epc
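As an illustration of the effort involved, the first function could be sketched in pure NumPy roughly as follows (a hypothetical port, not the final API; the exact `>=` / `<` threshold conventions must be checked against the C++ code):

```python
import numpy

def farfrr(negatives, positives, threshold):
    """Sketch of a pure-NumPy port of bob::measure::farfrr.

    FAR: fraction of negative scores at or above the threshold;
    FRR: fraction of positive scores below the threshold.
    """
    negatives = numpy.asarray(negatives, dtype=float)
    positives = numpy.asarray(positives, dtype=float)
    far = float((negatives >= threshold).mean()) if negatives.size else 0.0
    frr = float((positives < threshold).mean()) if positives.size else 0.0
    return far, frr
```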
Thanks

absolute numbers for errors are wrong (2020-02-14, Guillaume HEUSCH)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/61

Hi @amohammadi,
As already pointed out on the mailing list (https://groups.google.com/forum/#!topic/bob-devel/RXsX2kgjs1M), the numbers of misclassified and total examples are wrong. Here's an illustration:
```
============== =============== ================
.. Development Evaluation
============== =============== ================
APCER (attack) 7.7% 13.1%
APCER 7.7% 13.1%
BPCER 1.0% 9.7%
ACER 4.3% 11.4%
FTA 0.7% 0.6%
FPR 7.7% (451/1321) 13.1% (749/1571)
FNR 1.0% (13/5880) 9.7% (153/5729)
HTER 4.3% 11.4%
FAR 7.6% 13.0%
FRR 1.7% 10.3%
PRECISION 0.7 0.7
RECALL 1.0 0.9
F1_SCORE 0.8 0.8
```
When you look at FPR on the Evaluation set, for instance, 749/1571 * 100 = 47.7, which is different from 13.1%. Actually, the total numbers of examples have been swapped, since 749/5729 * 100 = 13.1 and 153/1571 * 100 = 9.7.

Assignee: Amir MOHAMMADI

The evaluation metric "Area Under Curve" (AUC) is missing (2020-02-14, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/2

*Created by: tiagofrepereira2012*
Issue copied from here: https://github.com/idiap/bob/issues/181
###########
One well-known evaluation metric for binary classification problems is the Area Under (ROC) Curve: http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve
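A straightforward implementation would sweep thresholds over the score range, build (FPR, TPR) pairs, and integrate with the trapezoidal rule. A sketch (`roc_auc` is a hypothetical helper, not an existing bob.measure API):

```python
import numpy

def roc_auc(negatives, positives, n_points=1000):
    """Sketch of an AUC estimate: sweep thresholds over the score range,
    build (FPR, TPR) pairs, and integrate with the trapezoidal rule."""
    negatives = numpy.asarray(negatives, dtype=float)
    positives = numpy.asarray(positives, dtype=float)
    lo = min(negatives.min(), positives.min())
    hi = max(negatives.max(), positives.max())
    thresholds = numpy.linspace(lo, hi, n_points)
    fpr = numpy.array([(negatives >= t).mean() for t in thresholds])
    tpr = numpy.array([(positives >= t).mean() for t in thresholds])
    # thresholds ascend, so FPR descends; flip before integrating
    fpr, tpr = fpr[::-1], tpr[::-1]
    return float(numpy.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```

For well-separated score sets the estimate approaches 1.0; a production version would use the sorted scores themselves as thresholds instead of a fixed linspace.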
It would be nice to have an implementation of it in bob.measure.

Assignee: Amir MOHAMMADI

ROC plots are not shown correctly sometimes (2019-08-19, Amir MOHAMMADI)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/60

I have a set of dev and eval scores, and currently plotting an ROC of them looks like this:
![roc-1](/uploads/4277668b4562f846dd1727b362fde519/roc-1.png)
![roc-2](/uploads/30e83f603e102a8fe46bc0b5d98c5848/roc-2.png)
Here is the command I used: `bin/bob measure roc -vvve scores-{dev,eval} --lines-at 1e-2` and here are the score files: [scores-.zip](/uploads/65459456a61267bb175b7beb555f16cd/scores-.zip)
As you can see, the dot in the eval set does not fall on the ROC line.

Assignee: Amir MOHAMMADI

ROC --no-semilogx plots are broken (2019-08-19, Amir MOHAMMADI)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/58

In ROC plots, when --no-semilogx is provided, the y axis values change from `1-FRR` to `FRR` and the plot looks very similar to a DET curve, which is the expected behavior.
However, the dots shown on the plot (activated using --lines-at 1e-3,1e-4,...) are still drawn using the `1-FRR` values. Hence the dots do not fall on the ROC plots.
![roc-1](/uploads/9b4f42423d0d6b5b60abc4034f299411/roc-1.png)
![roc-2](/uploads/47172700f57876cae4185373578d6969/roc-2.png)
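If the diagnosis is right, the fix would be to route the marker y coordinate through the same transform as the curve. A sketch of the idea (a hypothetical helper, not the actual bob.measure plotting code):

```python
# The y coordinate of each --lines-at marker must use the same transform
# as the curve itself, which depends on the semilogx setting.
def dot_y(frr, semilogx):
    """Return the marker y coordinate for a given FRR value."""
    return 1.0 - frr if semilogx else frr

# With --no-semilogx (semilogx=False) the curve plots FRR directly,
# so the marker must too:
print(dot_y(0.1, False))  # -> 0.1
```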
I have observed this behavior in `bob bio roc`, but since it's implemented here, I suspect the bug is here.

ERROR: bob.measure.test_scripts.test_compute_cmc (2019-06-27, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/14

*Created by: 183amir*
```
ERROR: bob.measure.test_scripts.test_compute_cmc
----------------------------------------------------------------------
Traceback (most recent call last):
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/bob/measure/test_scripts.py", line 78, in test_compute_cmc
nose.tools.eq_(main(['--self-test', '--score-file', SCORES_4COL_CMC_OS, '--rank', '1']), 0)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/bob/measure/script/plot_cmc.py", line 104, in main
pp.savefig(fig)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2473, in savefig
figure.savefig(self, format='pdf', **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/figure.py", line 1565, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2232, in print_figure
**kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2536, in print_pdf
self.figure.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/figure.py", line 1159, in draw
func(*args)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2324, in draw
a.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/axis.py", line 1120, in draw
self.label.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/text.py", line 749, in draw
bbox, info, descent = textobj._get_layout(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2[25/3512], in get_text_width_height_descent
renderer=self)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/texmanager.py", line 675, in get_text_width_height_descent
dvifile = self.make_dvi(tex, fontsize)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/texmanager.py", line 422, in make_dvi
report))
RuntimeError: LaTeX was not able to process the following string:
'Detection & Identification Rate in %'
Here is the full report generated by LaTeX:
This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=latex)
restricted \write18 enabled.
entering extended mode
(./3aed1ad7c64b95333ba5fffb3a1d7a66.tex
LaTeX2e <2014/05/01>
Babel <3.9l> and hyphenation patterns for 2 languages loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
(/usr/share/texlive/texmf-dist/tex/latex/type1cm/type1cm.sty)
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/helvet.sty
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty))
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/courier.sty)
(/usr/share/texlive/texmf-dist/tex/latex/base/textcomp.sty
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.def))
(/usr/share/texlive/texmf-dist/tex/latex/geometry/geometry.sty
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifvtex.sty)
(/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)
Package geometry Warning: Over-specification in `h'-direction.
`width' (5058.9pt) is ignored.
Package geometry Warning: Over-specification in `v'-direction.
`height' (5058.9pt) is ignored.
)
No file 3aed1ad7c64b95333ba5fffb3a1d7a66.aux.
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1cmr.fd)
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/ot1pnc.fd)
*geometry* driver: auto-detecting
*geometry* detected driver: dvips
! Misplaced alignment tab character &.
l.12 ...8.000000}{22.500000}{\rmfamily Detection &
Identification Rate in %}
[1] (./3aed1ad7c64b95333ba5fffb3a1d7a66.aux) )
(\end occurred inside a group at level 1)
### simple group (level 1) entered at line 12 ({)
### bottom level
(see the transcript file for additional information)
Output written on 3aed1ad7c64b95333ba5fffb3a1d7a66.dvi (1 page, 276 bytes).
Transcript written on 3aed1ad7c64b95333ba5fffb3a1d7a66.log.
```

Assignee: Tiago de Freitas Pereira

Missing option to make double-histograms larger or higher (2019-06-24, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/50

Currently, when one plots with `--eval`, it results in a situation like this:
![idiap-full](/uploads/dede71ba3bafd45bbdb4e2c3698fb420/idiap-full.png)
While this is nice if you want to squeeze the histograms into a single square block, there should be an option to "enlarge" the canvas, so that the histograms are not "half-size" in the compressed direction. The same goes if the histograms are stacked vertically instead.
Another option that would be cool would be to separate each histogram into a single page, using a multi-page PDF or various output files (if the format is PNG or JPEG). This would allow more plasticity when generating plots for papers.

How to enable a grid on histograms/plots? (2019-06-24, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/51

My histograms and plots now come out like these:
![idiap-full](/uploads/5aee107437a8999bc06f00ac206bd91d/idiap-full.png)
I wonder if there is an option to activate the "grid" on these, like in this one:
![Screen_Shot_2018-06-27_at_09.15.58](/uploads/b35a253fa7c8058a8c0c680f0e1cfe8b/Screen_Shot_2018-06-27_at_09.15.58.png)

Functions in submodule `load` cannot handle identities or labels with spaces in them (2019-06-23, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/36

The following score file, for example, cannot be correctly loaded:
```text
id1 id1 name with spaces 1.0
id1 id2 another name with spaces 0.0
```
Inside the submodule `load.py` in this package, we use `csv.read` to read the file contents. Therefore, I was expecting the following change to work properly:
```text
id1 id1 "name with spaces" 1.0
id1 id2 "another name with spaces" 0.0
```
However, that does not work either, because of the way `_estimate_score_file_format()` works: it uses a different function than `csv.read` to figure out the number of columns.
If you have suggestions on how to correctly handle this, they are welcome!

MinDCF problem in negative sets with outlier scores (2019-02-22, Saeed SARFJOO)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/59

When we have an outlier score in the negative set, the threshold selected by the `bob.measure.min_weighted_error_rate_threshold` function is wrong. For example:
``` python
from bob.measure import min_weighted_error_rate_threshold, farfrr
cost = 0.99
negatives = [-3, -2, -1, -0.5, 4]
positives = [0.5, 3]
th = min_weighted_error_rate_threshold(negatives, positives, cost, True)
print("threshold: " + str(th))
far, frr = farfrr(negatives, positives, th)
mindcf = (cost*far + (1-cost)*frr)*100
print ("minDCF : " + str(mindcf))
```
In this condition the output will be:
```
threshold: 0.0
minDCF : 19.8
```
minDCF cannot be more than 1. In this condition, a threshold higher than the maximum score must be chosen; e.g., with threshold 5, minDCF will be 1.

Assignee: Saeed SARFJOO

Provide an option to change the precision of floating points when they are printed (2019-01-16, Amir MOHAMMADI)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/52

Currently, floats are rounded to 1 decimal place and printed. It would be best to make this an option on the command line.

Assignee: Theophile GENTILHOMME

The test bob.measure.test_script.test_hist_legends stuck the nightlies (2018-10-16, Tiago de Freitas Pereira)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/57

Very often, the test `bob.measure.test_script.test_hist_legends` gets stuck in the nightly builds, no matter the platform.
Check:
- https://gitlab.idiap.ch/bob/bob.nightlies/-/jobs/150561
- https://gitlab.idiap.ch/bob/bob.nightlies/-/jobs/150497
- https://gitlab.idiap.ch/bob/bob.nightlies/-/jobs/150312

This package should only report, by default, generic metrics (2018-10-16, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/46

> This issue was opened from the discussion on #38. In order to make bob more useful for other tasks unrelated to biometrics, it would be beneficial to address this issue.
To reply correctly, I need to know if there are any practical differences between FAR and FMR, and FRR and FNMR. If there aren't, I would report, on a given threshold:
* FPR -> `False Positive Rate` (spell it out so there are no confusions)
* FNR -> `False Negative Rate`
* Precision
* Recall
* F1-Score
And that is it.
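Computing that default report is straightforward; a sketch of the proposed output (`generic_metrics` is a hypothetical helper, not an existing API):

```python
import numpy

def generic_metrics(negatives, positives, threshold):
    """Sketch of the proposed default report: generic binary-classification
    metrics at a given threshold."""
    negatives = numpy.asarray(negatives, dtype=float)
    positives = numpy.asarray(positives, dtype=float)
    fp = int((negatives >= threshold).sum())  # false positives
    fn = int((positives < threshold).sum())   # false negatives
    tp = positives.size - fn                  # true positives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / positives.size
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {
        "False Positive Rate": fp / negatives.size,
        "False Negative Rate": fn / positives.size,
        "Precision": precision,
        "Recall": recall,
        "F1-Score": f1,
    }
```

The keys are spelled out, matching the suggestion above to avoid acronyms in the report itself.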
An option could allow you, for example, to replace the values above by something more digestible for biometrics, say `--biometrics` and then the program prints:
* False Acceptance Rate
* False Rejection Rate
* Half-Total Error Rate
The thresholding should also be configurable if that is not already the case: it should be possible to say "report me all values when FPR is set to 10%" or "report me all values when FAR is set to 0.01%" or "report me all values at the Equal-Error Rate":
* `bob measure metrics dev-1.txt` (as per above, use Equal-Error Rate to calculate the threshold, that should be also reported)
* `bob measure metrics --biometrics dev-1.txt` (as per above)
* `bob measure metrics --far=0.0001 --biometrics dev-1.txt` (reports values for an FAR of 0.0001 (0.01%), no minimisation takes place)
* `bob measure metrics --criterion=minhter dev-1.txt` (reports values using FPR/FNR terminology with threshold calculated by minimizing the HTER on the set)
For obvious reasons, options such as `--criterion` and `--far` should be mutually exclusive. As it is currently coded, it is confusing that you should pass `--criterion=far --far-value=0.0001`. It would be easier to say `--far=0.0001` and that is it. If the user passes both, then an error is raised.
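The mutual exclusion itself is standard CLI machinery, sketched here with `argparse` for illustration (the actual scripts use a different CLI framework; the option names follow the proposal above):

```python
import argparse

# Sketch of the proposed mutually exclusive options.
parser = argparse.ArgumentParser(prog="bob measure metrics")
group = parser.add_mutually_exclusive_group()
group.add_argument("--far", type=float,
                   help="report all values at this FAR; no minimisation")
group.add_argument("--criterion", choices=["eer", "minhter"],
                   help="criterion used to compute the threshold")

# `--far` alone parses fine; passing both raises an error automatically.
args = parser.parse_args(["--far", "0.0001"])
```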
It is important that this program is very clear about metrics being used, so I would avoid any acronyms during the error reporting. It is OK to have acronyms on the option names, but documentation should be explicit.
Does that sound reasonable?

Assignee: Theophile GENTILHOMME

Moving biometrics-related functionality to bob.bio.base (2018-10-16, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/25

As Bob keeps its trajectory to serve more types of systems, it becomes less and less obvious to keep biometrics-related functionality inside this package. I'm proposing we move those into `bob.bio.base`, which is the place they should have been in the first place.
A few things that come to mind:
* All score loading/saving functionality
* OpenBR exchange support
* The scripts, which are tuned for biometrics-style reporting (and can only load biometric score files)
* Not sure about all the identification stuff, maybe generic enough to keep here?
Thanks for your feedback.

Milestone: May 2017 Hackathon
Assignee: Guillaume HEUSCH

Harmonisation of performance reporting (2018-10-16, Sébastien MARCEL)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/38

This is a duplicate of a discussion on the biometrics mailing list in Jan 2017, where Guillaume Heusch @heusch agreed to lead this, with the help of Hannah @hmuckenhirn.
Currently, we are going to get some support from the devel team led by Samuel @samuel.gaist, so it will be good to sync with him as well.
---
I have the impression that we are still a little bit behind with respect to the harmonisation of performance reporting as discussed before the Bob refactoring last year.
We are still reporting error rates and plots with FRR/FAR/SFAR and EER, and inconsistently FMR/FNMR/IAPMR/APCER …
We should converge on a harmonisation following current practices, which increasingly follow ISO.
I know all the elements are in our hands (Tiago for CMCs, Amir for IAPMR and nice scatter plots with decision, …). See some examples attached.
We need a documented package, with examples, showing how anyone can report results from a set of scores produced by our biometric and PAD experiments.
More particularly, we need to use
* FNMR (or GMR = 1-FNMR) vs FMR instead of FAR/FRR when we report biometric performance (authentication task) in tables (FNMR @ FMR=0.1% or smaller), DET and ROC (EPC case to be discussed)
* TPIR/rank when we report biometric performance (identification task) in tables (TPIR @ FPIR=0.1%) and CMC
* nice bar plots of score distributions for biometric recognition (Genuine, Zero-effort Impostor)
* nice bar plots of score distributions for biometric recognition and PA (Genuine, Zero-effort Impostor, PA) with IAPMR
* APCER/BPCER instead of FAR/FRR when we report PAD performance in tables, DET and ROC
* nice bar plot of score distributions for PAD (BonaFide, PA)
* EPSC for biometric recognition and PAD
* scatter plots for bi-modal biometric recognition
* scatter plots for biometric recognition and PAD
Additionally, we would need a routine to compute statistical significance.
A summary of these performance reporting practices is provided in the attached document (section 4), prepared with our SWAN partners, along with references to ISO documents (which can also be found in our biometrics group directory /idiap/group/biometric/standards/ISO-IEC/, e.g. ISO-IEC-19795-1).
[EPSC_HTER_w-cnn-motion-fusion.pdf](/uploads/a7cf4621ebc2f1a683877d6ae1841f92/EPSC_HTER_w-cnn-motion-fusion.pdf)
[EPSC_IAPMR_w-cnn-motion-fusion.pdf](/uploads/961054f80567b26f97952bf33bb85b58/EPSC_IAPMR_w-cnn-motion-fusion.pdf)
[gmm_score_distribution_fixed.pdf](/uploads/6ef5976e51f7e896e24000c668944b73/gmm_score_distribution_fixed.pdf)
[ISV_gaussians.pdf](/uploads/a26d62fc94c0c2718a3f952a3d2f7720/ISV_gaussians.pdf)
[TR1-v3-20160930.pdf](/uploads/a8f4dc3ad96cd68576c901f85017b350/TR1-v3-20160930.pdf)
Also, a nice reference is the NIST FRVT ( https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt ), e.g. http://ws680.nist.gov/publication/get_pdf.cfm?pub_id=915761, and the best-practices document attached.
Interestingly, in ROC/DET plots, when they compare systems, they draw lines between points with the same threshold!
[060405-BestPractices_v2_1.pdf](/uploads/28acaaea9812fa993bd54351acee8cf6/060405-BestPractices_v2_1.pdf)
Sébastien

Assignee: Guillaume HEUSCH

Limitations of matplotlib's constrained layout are not taken into account (2018-08-27, Amir MOHAMMADI)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/56

Please see: https://matplotlib.org/tutorials/intermediate/constrainedlayout_guide.html#limitations
Basically we should not be using `plt.subplot` and should be using `GridSpec`.
I think this was recently added to the documentation so we didn't know before.
Another thing that we did not know is that `plt.subplots_adjust` and `plt.tight_layout` should not be used when constrained_layout is True.
In matplotlib 2.2.3, constrained_layout is now automatically set to False if either of those is used. See: https://github.com/matplotlib/matplotlib/pull/11588/files
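The recommended pattern would then be roughly the following (a sketch, not the actual bob.measure plotting code):

```python
import matplotlib
matplotlib.use("agg")  # non-interactive backend, for this sketch only
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# Create the figure with constrained_layout=True, place axes via GridSpec,
# and never call plt.subplots_adjust or plt.tight_layout afterwards.
fig = plt.figure(constrained_layout=True)
gs = GridSpec(2, 2, figure=fig)
ax_top = fig.add_subplot(gs[0, :])    # spans the whole top row
ax_left = fig.add_subplot(gs[1, 0])
ax_right = fig.add_subplot(gs[1, 1])
```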
I think we should rethink our usage to make sure we don't go outside the supported bounds of matplotlib's constrained layout. Probably then, we can remove a lot of the hacks that we had to do.

Assignee: Amir MOHAMMADI

Documentation issues (2018-07-17, André Anjos)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/54

Currently, we're using FRR/FAR as names for FNR/FPR, which are the de-facto ML standards.
It would be good to scan the documentation of this package, as well as the function/class docs, and change occurrences of FRR to FNR and FAR to FPR.

Assignee: Theophile GENTILHOMME