# bob.measure issues
https://gitlab.idiap.ch/bob/bob.measure/-/issues

## Issue #31: Build Failed #86835
*André Anjos · 2017-09-05 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/31 · assignee: Manuel Günther*

@mguenther: I've tried this build twice, and it seems your changes are still failing on Python 3.6 + Mac OSX. Could you please check? https://gitlab.idiap.ch/bob/bob.measure/-/jobs/86835

@tiago.pereira: since we'll be in release mode from Thursday on, could you please check/help Manuel on this one? He may not have access to a Mac for tests.

## Issue #30: Match scores not split up into "genuine" and "impostor" categories correctly for 1vsall UTFVP database protocol
*Vedrana KRIVOKUCA · 2017-08-23 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/30*

The 1vsall UTFVP database protocol is supposed to compare every fingervein sample to every other sample. The following is an excerpt from the score file:
```
10_1_1 59_4 0059/0059_4_1_120511-152502 0.23330404
```
As can be seen above, the model ID (10_1_1) and probe ID (59_4) are not in the same format. In particular, the model ID contains the client ID plus a session/finger sample ID, while the probe ID only contains the client ID. This is a problem, because the `_split_scores` function in `bob.measure.load`, whose purpose is to divide the match scores into "genuine" and "impostor" categories, compares the model ID to the probe ID directly. This means that the model ID will never match the probe ID, so the function places all match scores into the "impostor" category and we get zero "genuine" scores. Of course, if we then try to do any evaluation/plotting, the result will be wrong.

*Assignee: André Anjos*

## Issue #29: CMC documentation is not consistent with reality
*Tiago de Freitas Pereira · 2017-09-01 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/29*

The documentation of the function `bob.measure.cmc` (http://pythonhosted.org/bob.measure/py_api.html#bob.measure.cmc) says that the output is
> "A 2D float array representing the CMC curve, with the Rank in the first column and the number of correctly classified clients (in this rank) in the second column."
This is not what is happening.
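The actual behaviour is easy to demonstrate with a small pure-NumPy mock-up (an illustrative, tie-free reimplementation of the CMC idea, not the package's real code; `cmc_curve` is a hypothetical name):

```python
import numpy as np

def cmc_curve(probe_scores):
    """Toy CMC: ``probe_scores`` is a list of (negatives, positives)
    pairs, one pair per probe.  Returns a 1D array of identification
    rates, one per rank, in ascending rank order (rank 1 first)."""
    ranks = []
    for negatives, positives in probe_scores:
        best_positive = max(positives)
        # rank = 1 + number of negative scores strictly above the best positive
        ranks.append(1 + sum(1 for n in negatives if n > best_positive))
    max_rank = max(ranks)
    return np.array([np.mean([r <= k for r in ranks])
                     for k in range(1, max_rank + 1)])

# Two probes: the first is recognized at rank 1, the second only at rank 2,
# so the result is the 1D array [0.5, 1.0] -- not a 2D (rank, count) table.
curve = cmc_curve([([0.1, 0.2], [0.5]), ([0.6], [0.4])])
```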
The function actually returns a 1D array representing the CMC curve, with the identification rates sorted by rank in ascending order.

## Issue #28: Updated API in `bob.math` requires update in this package, too
*Manuel Günther · 2017-06-30 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/28*

See bob/bob.math!13

## Issue #27: FAR and FRR thresholds are computed even when there is no data support
*Manuel Günther · 2018-02-19 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/27*

I have lately come across a situation where FAR (and FRR) thresholds were computed, although they should not have been.
Imagine the negative score distribution `[0.5, 0.6, 0.7, 0.8, 0.9, 1., 1., 1., 1., 1.]`. A threshold should now be computed for `FAR=0.1`. Our current implementation of `bob.measure.far_threshold` will return the threshold `1`. However, this threshold does not give us a false acceptance rate of `0.1`, but of `0.5`. In fact, there is no (data-driven) threshold that would provide a false acceptance rate of `0.1`.
A similar issue arises when the number of data points is not sufficient for a given threshold to be computed. From only 10 data points, you cannot provide a (data-driven) threshold for `FAR=0.05`, while our current implementation happily provides one.
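The lack of data support is easy to check numerically (plain NumPy, assuming the convention that a score is accepted when it is greater than or equal to the threshold):

```python
import numpy as np

negatives = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1., 1., 1., 1., 1.])

# For each candidate (data-driven) threshold t, FAR(t) is the fraction
# of negatives accepted, i.e. with score >= t.
candidate_thresholds = np.unique(negatives)
achievable_fars = [float((negatives >= t).mean()) for t in candidate_thresholds]
# achievable_fars == [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]:
# neither FAR=0.1 nor FAR=0.05 can be realized by any data-driven threshold.
```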
There are two possible solutions for this issue.
First, we can simply return a threshold that is *just slightly higher* than the largest negative (or slightly lower than the smallest positive when computing the FRR threshold). This will indeed provide a solution, but it is not justified by the data points and might be arbitrarily wrong, i.e., when applied to other test data.
Instead, we should just return `NaN`, since we really cannot compute a justified threshold for the requested FAR or FRR values.

*Milestone: May 2017 Hackathon · assignee: Amir MOHAMMADI*

## Issue #26: ROC and DET plots are calculated incorrectly sometimes
*Amir MOHAMMADI · 2018-01-16 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/26*

Following the discussion here: https://groups.google.com/forum/#!topic/bob-devel/EIp1nvw5-vQ
Looks like we have a corner case where the scores have a very large peak in their distribution:
![hist_data](/uploads/ee0071522224130f9fa7be7b8508f2e4/hist_data.png)
The scores are also available: [fusion_all_200_datatset.npy](/uploads/fa88359dfdb4d66cdbc54f3e4a677149/fusion_all_200_datatset.npy)
To load them:
```python
>>> import numpy
>>> scores = numpy.load('fusion_all_200_datatset.npy')
>>> positives = scores[0]
>>> negatives = scores[1]
>>> # The negatives are mostly 0
>>> sum(negatives < 9e-16)
51911
>>> sum(negatives < 9e-17)
51525
>>> sum(negatives < 9e-18)
51029
>>> sum(negatives < 9e-20)
49675
>>> sum(negatives < 9e-22)
47543
>>> sum(negatives < 9e-30)
27487
>>> sum(negatives < 9e-60)
0
```
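The shape of that distribution alone may explain the kind of mismatch this issue reports. If the plotting routines sample a fixed grid of `npoints` thresholds between the extreme scores (an assumption about the implementation on my side, not a verified fact), then almost no grid point falls inside the region where the bulk of the negatives lives; a synthetic stand-in for the attached scores shows the effect:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic stand-in: 90% of the negatives sit (numerically) at zero,
# the remaining 10% overlap the positives.
negatives = np.concatenate([np.full(9000, 1e-30), rng.uniform(0.0, 0.5, 1000)])
positives = rng.uniform(0.1, 1.0, 1000)

# A linear grid of 100 thresholds between min and max has a single
# point (the left endpoint) inside the peak holding 90% of the negatives:
grid = np.linspace(negatives.min(), positives.max(), 100)
points_in_peak = int((grid < 1e-6).sum())

# So the FAR collapses from 1.0 to roughly 0.1 within a single grid
# step, and a curve drawn from such a grid can disagree badly with
# threshold-based numbers such as the EER.
far_at_first = float((negatives >= grid[0]).mean())
far_at_second = float((negatives >= grid[1]).mean())
```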
@tiago.pereira may be able to provide a set of smaller scores to debug this.
When I calculate the EER, I get `2.7%`:
```python
>>> import numpy
>>> import bob.measure
>>> import bob.measure.plot
>>> from matplotlib import pyplot
>>> scores = numpy.load('fusion_all_200_datatset.npy')
>>> positives = scores[0]
>>> negatives = scores[1]
>>> threshold = bob.measure.eer_threshold(negatives, positives)
>>> FAR, FRR = bob.measure.farfrr(negatives, positives, threshold)
>>> FAR, FRR
(0.02762483029114497, 0.027626084163186636)
>>> negatives.mean(), negatives.std(), positives.mean(), positives.std()
(3.0290521114232657e-06, 0.00067550514259906739, 0.20795613959959214, 0.32512541269459205)
>>> 100*(FAR+FRR)/2
2.76254572271658
>>> npoints = 100  # number of curve points; the value was not given in the original report
>>> bob.measure.plot.roc(negatives, positives, npoints)
[<matplotlib.lines.Line2D object at 0x7f472a8bed10>]
>>> bob.measure.plot.det(negatives, positives, npoints, color=(0,0,0), linestyle='-', label='test')
>>> bob.measure.plot.det_axis([0.1, 80, 0.1, 80])
[-3.090232246772911, 0.8416212348748217, -3.090232246772911, 0.8416212348748217]
>>> # plot the EER point on the DET curve (the second coordinate must be FRR, not FAR)
>>> pyplot.plot(bob.measure.ppndf(FAR), bob.measure.ppndf(FRR), 'ro')
[<matplotlib.lines.Line2D object at 0x7f472a4dc510>]
```
But when I plot the ROC and DET curves, I get curves with EER of `6%` or more than `20%`:
DET CURVE with around `6%` EER:
![wrong_det](/uploads/6c8c93c1c994f0c256ca1b90087a1484/wrong_det.png)
ROC CURVE with more than `20%` EER:
![wrong_roc](/uploads/4730c633a9c7a6ae0ef0d56227e79e41/wrong_roc.png)

*Milestone: May 2017 Hackathon · assignee: Amir MOHAMMADI*

## Issue #25: Moving biometrics-related functionality to bob.bio.base
*André Anjos · 2018-10-16 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/25*

As Bob keeps its trajectory to serve more types of systems, it becomes less and less obvious to keep biometrics-related functionality inside this package. I'm proposing we move those into `bob.bio.base`, which is the place they should have been in the first place.
A few things that come to mind:
* All score loading/saving functionality
* OpenBR exchange support
* The scripts, which are tuned for biometrics-style reporting (and can only load biometric score files)
* Not sure about all the identification stuff, maybe generic enough to keep here?
Thanks for your feedback.

*Milestone: May 2017 Hackathon · assignee: Guillaume HEUSCH*

## Issue #24: Bug in ROC curves
*Christopher FINELLI · 2016-11-29 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/24*

I obtained a strange ROC curve using the evaluate.py script of Bob. The weird thing is that the curve doesn't cross the upper right corner (that is, the 100%-100% coordinate), and it should. Oleg and I already looked for the source of the issue, and we think the problem comes from the FAR and CAR computation. In our case, we have scores going from 0.0 to some positive value (1.0 being the maximum), and the higher the score, the more likely a match. We didn't check, but it seems that some genuine scores are 0.0 and they are not counted when the threshold is set to 0.0, causing the decay in the CAR when the FAR is 100%. I don't know if there is the same problem at the 0%-0% coordinate, since the plot is in log scale.
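If that suspicion is right, the corner case is reproducible with a toy example (a hypothetical illustration, not the actual bob.measure code), assuming a genuine score counts as correctly accepted only when it is strictly above the threshold:

```python
import numpy as np

positives = np.array([0.0, 0.0, 0.3, 0.9])  # two genuine scores at exactly 0.0
negatives = np.array([0.0, 0.1, 0.2])

threshold = 0.0
# Strict comparison: the 0.0 genuine scores are not counted, so the
# CAR never reaches 100% even when the FAR does.
car_strict = float((positives > threshold).mean())
# Inclusive comparison: the curve reaches the 100%-100% corner.
car_inclusive = float((positives >= threshold).mean())
```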
I could have reported the plot, but I wasn't able to share the score files. Just explain to me how to proceed if needed.
[ROC_curves.pdf](/uploads/3d5898691624bf4757357aad42b626e9/ROC_curves.pdf)

*Assignee: Manuel Günther*

## Issue #23: TypeError: underlying read() should have returned a bytes-like object, not 'str'
*Amir MOHAMMADI · 2016-11-11 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/23*

@mguenther, your recent changes in bob.measure have broken the nightlies. Please investigate.
https://gitlab.idiap.ch/bob/bob.nightlies/builds/28023

*Assignee: Manuel Günther*

## Issue #22: farfrr core dumps if one of the input sets (negatives or positives) is empty
*André Anjos · 2017-02-01 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/22 · assignee: André Anjos*

Just noted this by accident. Is this the supposed behaviour? Maybe we should introduce a check at some point.

## Issue #21: Four and Five column format must be specified by the user, although it could be automatically estimated
*Manuel Günther · 2017-04-09 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/21*

So far, the user has two different functions to load/process score files, depending on the format. On the other hand, @amohammadi has implemented a way of automatically estimating the score file format (https://gitlab.idiap.ch/bob/bob.measure/blob/master/bob/measure/load.py#L315), so that the user does not need to specify the format anymore. I think we should provide generic functions for similar tasks too, i.e., having:
* [ ] `bob.measure.load.scores(filename, ncolumns=None)`
* [ ] `bob.measure.load.split(filename, ncolumns=None)`
* [ ] `bob.measure.load.cmc(filename, ncolumns=None)`
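The shared `ncolumns=None` handling could follow the shape below (a hypothetical sketch; the function name and the detection rule are my assumptions, not the existing `load.py` code):

```python
def estimate_ncolumns(lines):
    """Guess the score file format (4 or 5 columns) from the first
    non-empty, non-comment line; ``lines`` is any iterable of text
    lines, e.g. an open file object."""
    for line in lines:
        fields = line.split()
        if not fields or fields[0].startswith('#'):
            continue  # skip blank lines and comments
        if len(fields) in (4, 5):
            return len(fields)
        raise ValueError('expected 4 or 5 columns, got %d' % len(fields))
    raise ValueError('the score file contains no data lines')
```

Each of the generic functions would call such a helper once when `ncolumns` is `None` and then dispatch to the existing four- and five-column readers.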
These functions should use the same way of handling `ncolumns=None`.

*Assignee: Manuel Günther*

## Issue #20: bin/bob_plot_cmc.py is not working as expected
*Manuel Günther · 2016-11-07 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/20*

The script to compute the open set recognition rate does not work as expected. When passing the `--rank 1` option using a perfectly separable score file, the output of the script is:
```
Recognition rate for score file /mgunther/bob/bob.nightlies/src/bob.measure/bob/measure/data/scores-cmc-4col-open-set.txt is 0.00%
```
I think the reason is that in https://gitlab.idiap.ch/bob/bob.measure/blob/master/bob/measure/script/plot_cmc.py#L93 we pass the `rank` as the second parameter, while in https://gitlab.idiap.ch/bob/bob.measure/blob/master/bob/measure/__init__.py#L129 it is the third parameter. However, just using the right parameter (as a keyword parameter, as in https://gitlab.idiap.ch/bob/bob.measure/blob/master/bob/measure/script/plot_cmc.py#L126) does not work either.
@andre.anjos: I know that you like the `docopt` command line parser. I don't. I believe the `argparse` module is much more powerful, as it automatically converts parameters to the desired data types and automatically sets default options, if desired.

In `docopt` you have to specify all of these by hand. Also, the handling of logging differs between the other scripts (which use `argparse` and our default functions to set up the logger in `bob.core.log`) and yours, where you set the verbosity by hand (and you actually use different stages than in `bob.core.log`).

## Issue #19: load_scores extremely memory hungry
*Manuel Günther · 2021-06-18 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/19*

The new implementation of score loading is memory hungry, as it stores the whole score file in memory. For large score files that have long `client_id`s and `label`s, this might easily be too much for a normal desktop machine.
To split the score file into positives and negatives, most of the information (for example, the `label`s) is completely irrelevant.
I remember that I had this problem with an older version of `bob.measure`, and this is why I implemented the score reading using a generator function (i.e., `yield`ing the file line by line) instead of keeping all the information of the score file in memory at the same time.
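For illustration, such a generator could look like the following sketch (hypothetical code, assuming the 4-column format `claimed_id real_id label score`, where a line is a positive when the first two columns match):

```python
def split_scores_lazily(lines):
    """Yield (is_positive, score) pairs one line at a time, so only the
    current line is ever held in memory.  ``lines`` is any iterable of
    text lines, e.g. an open 4-column score file."""
    for line in lines:
        fields = line.split()
        if not fields or fields[0].startswith('#'):
            continue  # skip blank lines and comments
        claimed_id, real_id, score = fields[0], fields[1], float(fields[-1])
        # the label column is irrelevant for splitting and is never stored
        yield claimed_id == real_id, score
```

The caller then accumulates only the two score arrays, and everything else can be garbage-collected immediately.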
I will provide a better alternative to the `load_scores` function as a generator function, which does not store the whole score file in memory.

*Assignee: Manuel Günther*

## Issue #18: ImportError: src/bob.measure/bob/measure/_library.so: undefined symbol: _ZSt24__throw_out_of_range_fmtPKcz
*Amir MOHAMMADI · 2016-10-05 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/18*

I compiled `bob.measure` using our conda env `2.4.0` on our Idiap machines, and after compilation it does not work.
```
Traceback (most recent call last):
File "bin/bob_compute_perf.py", line 22, in <module>
import bob.measure.script.compute_perf
File "src/bob.measure/bob/measure/__init__.py", line 5, in <module>
from ._library import *
ImportError: src/bob.measure/bob/measure/_library.so: undefined symbol: _ZSt24__throw_out_of_range_fmtPKcz
```
@andre.anjos: is this an ABI error?

*Assignee: Amir MOHAMMADI*

## Issue #17: evaluate.py segfaults with no error
*Amir MOHAMMADI · 2017-02-01 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/17 · assignee: Tiago de Freitas Pereira*

I realized that when one of the score files is empty, the `evaluate.py` script from `bob.bio.base` segfaults with no error. But the segfault probably comes from here.

## Issue #16: The script `bob_compute_perf.py` command line interface could be improved
*André Anjos · 2016-09-29 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/16*

*Created by: anjos*
As it is today, this script requires 2 score sets for running (dev and test).
It is possible to compute the performance using a single set.
Here are some suggestions for improvements:
1. Make the `dev` score set an argument instead of an option. It is strange conceptually to have obligatory "options"
2. Make the `test` score set an optional argument (this implies allowing the analysis to run if only `dev` is provided).

## Issue #15: Adding score distribution plot feature to compute_perf.py
*André Anjos · 2016-09-29 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/15*

*Created by: akomaty*
*Assignee: Alain KOMATY*

## Issue #14: ERROR: bob.measure.test_scripts.test_compute_cmc
*André Anjos · 2019-06-27 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/14*

*Created by: 183amir*
```
ERROR: bob.measure.test_scripts.test_compute_cmc
----------------------------------------------------------------------
Traceback (most recent call last):
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/bob/measure/test_scripts.py", line 78, in test_compute_cmc
nose.tools.eq_(main(['--self-test', '--score-file', SCORES_4COL_CMC_OS, '--rank', '1']), 0)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/bob/measure/script/plot_cmc.py", line 104, in main
pp.savefig(fig)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2473, in savefig
figure.savefig(self, format='pdf', **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/figure.py", line 1565, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2232, in print_figure
**kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2536, in print_pdf
self.figure.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/figure.py", line 1159, in draw
func(*args)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2324, in draw
a.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/axis.py", line 1120, in draw
self.label.draw(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/artist.py", line 61, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/text.py", line 749, in draw
bbox, info, descent = textobj._get_layout(renderer)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2..., in get_text_width_height_descent
renderer=self)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/texmanager.py", line 675, in get_text_width_height_descent
dvifile = self.make_dvi(tex, fontsize)
File "/idiap/group/torch5spro/conda/envs/bob-2.3.4-py27_0/lib/python2.7/site-packages/matplotlib/texmanager.py", line 422, in make_dvi
report))
RuntimeError: LaTeX was not able to process the following string:
'Detection & Identification Rate in %'
Here is the full report generated by LaTeX:
This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=latex)
restricted \write18 enabled.
entering extended mode
(./3aed1ad7c64b95333ba5fffb3a1d7a66.tex
LaTeX2e <2014/05/01>
Babel <3.9l> and hyphenation patterns for 2 languages loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
(/usr/share/texlive/texmf-dist/tex/latex/type1cm/type1cm.sty)
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/helvet.sty
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty))
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/courier.sty)
(/usr/share/texlive/texmf-dist/tex/latex/base/textcomp.sty
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.def))
(/usr/share/texlive/texmf-dist/tex/latex/geometry/geometry.sty
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifvtex.sty)
(/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)
Package geometry Warning: Over-specification in `h'-direction.
`width' (5058.9pt) is ignored.
Package geometry Warning: Over-specification in `v'-direction.
`height' (5058.9pt) is ignored.
)
No file 3aed1ad7c64b95333ba5fffb3a1d7a66.aux.
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1cmr.fd)
(/usr/share/texlive/texmf-dist/tex/latex/psnfss/ot1pnc.fd)
*geometry* driver: auto-detecting
*geometry* detected driver: dvips
! Misplaced alignment tab character &.
l.12 ...8.000000}{22.500000}{\rmfamily Detection &
Identification Rate in %}
[1] (./3aed1ad7c64b95333ba5fffb3a1d7a66.aux) )
(\end occurred inside a group at level 1)
### simple group (level 1) entered at line 12 ({)
### bottom level
(see the transcript file for additional information)
Output written on 3aed1ad7c64b95333ba5fffb3a1d7a66.dvi (1 page, 276 bytes).
Transcript written on 3aed1ad7c64b95333ba5fffb3a1d7a66.log.
```

*Assignee: Tiago de Freitas Pereira*

## Issue #13: Cannot install bob.measure
*André Anjos · 2016-08-04 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/13*

*Created by: omarcr*
I have installed all the dependencies according to the graph presented in https://github.com/idiap/bob/wiki/Dependencies.

However, I cannot install the package; this is the traceback:
```
omar@ubuntuv2:~/bob.measure$ sudo python setup.py
Traceback (most recent call last):
File "setup.py", line 50, in <module>
boost_modules = boost_modules,
File "/usr/local/lib/python2.7/dist-packages/bob/blitz/extension.py", line 52, in __init__
BobExtension.__init__(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 294, in __init__
bob_includes, bob_libraries, bob_library_dirs, bob_macros = get_bob_libraries(self.bob_packages)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 186, in get_bob_libraries
pkg = importlib.import_module(package)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/bob/math/__init__.py", line 6, in <module>
bob.extension.load_bob_library('bob.math', __file__)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 237, in load_bob_library
ctypes.cdll.LoadLibrary(full_libname)
File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary
return self._dlltype(name)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python2.7/dist-packages/bob/math/libbob_math.so: undefined symbol: dsyevd_
```

## Issue #10: Open set tests are failing
*André Anjos · 2016-09-29 · https://gitlab.idiap.ch/bob/bob.measure/-/issues/10*

*Created by: siebenkopf*
Hi Tiago,
I have just pushed a corrected version of the open set recognition rate tests, which had been implemented here (d7f1095a9c8c44207965a65d0a66bcc2e17c9fa7), but in a wrong way.
Unfortunately, the tests are failing now. Could you please check that?