# bob issues
Source: https://gitlab.idiap.ch/groups/bob/-/issues

## Issue #227: Wiki on Dependencies and Build needs a refresh after move to Conda
*Created by André Anjos · https://gitlab.idiap.ch/bob/bob/-/issues/227 · updated 2017-08-07*

It would be great to update our "dependencies" and "build" wikis to remove all unsupported distributions and set up the Conda environment as we do for the `bob-devel-*` ones.

*Milestone: Refactoring 2016 and gitlab migration · Assignee: Amir MOHAMMADI*

## Issue #226: Docker images for Linux builds
*Created by André Anjos · https://gitlab.idiap.ch/bob/bob/-/issues/226 · updated 2018-01-17*

We should migrate our Linux builds to Docker images. This will give us better flexibility, simpler builds, and re-usable wheels across distributions. Everything we need from the system admins is in place. The next big job is to build the base Docker image, which should be based on an old CentOS 5 distribution (for compatibility):
- Base docker container: https://github.com/pypa/manylinux
- Instructions for the runners: http://docs.gitlab.com/ce/ci/docker/using_docker_images.html
On top of that, we should install our Conda development environment as from snippet $6.

*Milestone: Conda-based CI · Assignee: André Anjos*

## Issue #225: Gitlab and CI migration (instructions)
*Created by Amir MOHAMMADI · https://gitlab.idiap.ch/bob/bob/-/issues/225 · updated 2017-08-07*

Let's gather some info here and also see what needs to be done.

*Milestone: Refactoring 2016 and gitlab migration · Assignee: Amir MOHAMMADI*

## Issue #195: Post DCT normalization is not correct with 3D outputs
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/195 · updated 2016-08-04*
As noticed by Elie, the output values differ between 2D and 3D outputs when the post-DCT normalization option is selected for the DCT extraction:
```python
>>> import bob, numpy, numpy.random
>>> a = numpy.random.randn(10,10)
>>> o = bob.ip.DCTFeatures(3,3,1,1,5,True,False,False)
>>> (o(a, False).flatten() == o(a, True).flatten()).all()
True
>>> o = bob.ip.DCTFeatures(3,3,1,1,5,True,True,False)
>>> (o(a, False).flatten() == o(a, True).flatten()).all()
False
```
The Blitz++ reduction seems to be wrong for the 3D case in https://github.com/idiap/bob/blob/1.2/src/ip/cxx/DCTFeatures.cc (last ten lines of code).

*Milestone: v2.0*

## Issue #183: -DWITH_PERFTOOLS option does not work
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/183 · updated 2019-04-19*
It seems that this option does not work anymore on the master branch. I don't know yet if this also affects the 1.2 branch.

The problem seems to be caused by the use of WITH_PERFTOOLS as a C-like preprocessor definition, whereas it is initially a CMake variable.
The easiest solution is to perform the inclusion check at the CMake level rather than in the C preprocessor. A good example is what was done for libsvm.

*Milestone: v2.0*

## Issue #182: Regression using SVM is not supported
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/182 · updated 2016-08-04*
We should either update the documentation and remove options such as NU_SVR, or support the regression task correctly.

*Milestone: v2.0*

## Issue #179: No support of log determinant
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/179 · updated 2014-01-04*
As said [here](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html), determinant computation is subject to underflow/overflow.
When computing the log determinant, this can be avoided by working directly in the log domain. In the Python universe, we may rely on numpy.linalg.slogdet(). We should consider providing such a function in the C++ universe.
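To illustrate the point, a minimal NumPy sketch of the underflow and of the log-domain workaround (the matrix here is synthetic and has nothing to do with PLDAMachine):

```python
import numpy

# Determinant of 0.01 * I (200x200) is 0.01**200 = 1e-400: it underflows to 0.0
a = 0.01 * numpy.eye(200)
print(numpy.linalg.det(a))             # 0.0 (underflow)

# slogdet() works directly in the log domain and stays finite
sign, logdet = numpy.linalg.slogdet(a)
print(sign, logdet)                    # 1.0, 200 * log(0.01) ~ -921.03
```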
Currently, the PLDAMachine class may be subject to underflow/overflow.

*Milestone: v2.0*

## Issue #176: Shift to a non-GPL library for FFT/DCT computation
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/176 · updated 2016-08-04*
Bob (1.2.x) currently relies on FFTW for FFT/DCT computation. FFTW has a GPL license. We are now considering turning the license of Bob from GPL to BSD. This would imply that we should no longer link against GPL libraries; FFTW is the only GPL dependency that we have.

We are hence looking for alternatives to FFTW. There are already naive implementations of DFT/DCT in bob, which are used for testing purposes, but they are really slow for large arrays. We are therefore looking for more optimized source code. I have performed a few tests with two different BSD-like FFT libraries:

1. Kiss FFT (C and C++ implementations): I was not able to make the C implementation work with 'double' instead of the default 'float'; it just produces wrong outputs, and the documentation is quite poor. The C++ implementation works with 'double', but it only supports 1D FFT (no nD FFT or DCT computation). I also had to tweak/fix the code to make it compatible with all the platforms we support.
2. NumPy's FFT implementation (C code based on the former FFTPACK Fortran implementation; note that SciPy also provides a different FFT implementation based on the original FFTPACK Fortran code): NumPy's implementation only provides FFT (not DCT). The code is much larger than that of Kiss FFT (2k lines vs. 200 lines), but it is probably more reliable, since it has been used for several years by this widely deployed library.

For both solutions, we would not add a new dependency; we would just cannibalize the FFT source code into bob's central repository (there are no Ubuntu/OS X packages for Kiss FFT anyway). I have pushed FFT/DCT implementations relying on both libraries (separately) to the master branch, to keep track of all the tests I did. Solution 2 is my favourite so far. If we go for it, I will remove the FFTW and Kiss FFT-based implementations and rename the FFT1DNumpy (2D/DCT, etc.) classes to FFT1D. The documentation should then be carefully updated. As the underlying implementations will be different, this may slightly affect the outputs/features/results generated with FFTW.

*Milestone: v2.0*

## Issue #174: bob.ip.draw_... methods take arguments in wrong order
*Created by siebenkopf · https://gitlab.idiap.ch/bob/bob/-/issues/174 · updated 2013-11-15*
Usually, points in Bob are given in (y,x) order, and functions always take (y,x) as arguments (in this order). Looking at the documentation of the bob.ip.draw_point (and similar) functions, however, they take arguments in (x,y) order.

A fix for this would be nice, to have a consistent order of arguments in Bob.

*Milestone: v2.0*

## Issue #171: SVD is failing on some matrices because of LAPACK dgesdd
*Created by laurentes · https://gitlab.idiap.ch/bob/bob/-/issues/171 · updated 2016-08-04*
The SVD implementation is failing on some matrices, with no convergence occurring. A sample matrix is available [here](http://www.idiap.ch/~ichingo/to_download/data_svd_fail.hdf5).

Similarly to NumPy, bob relies on the LAPACK function dgesdd, as recommended by the Netlib maintainers [here](https://groups.google.com/forum/#!topic/julia-dev/mmgO65i6-fA). Using NumPy leads to the exact same issue, whereas the Matlab implementation seems to work. This is a known problem, previously reported [here](http://mail.scipy.org/pipermail/numpy-discussion/2009-February/040212.html). It seems that the slower alternative LAPACK function dgesvd is not affected by this problem. One possible solution would be to support both functions.
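For reference, later SciPy versions expose both LAPACK drivers through a single interface, which is one way the proposed dual-function support could look (synthetic data below, not the failing matrix from the report):

```python
import numpy
import scipy.linalg

a = numpy.random.default_rng(0).standard_normal((6, 4))

# Default driver: the faster divide-and-conquer dgesdd
s_dd = scipy.linalg.svd(a, compute_uv=False, lapack_driver='gesdd')

# Fallback driver: the slower but reportedly more robust dgesvd
s_vd = scipy.linalg.svd(a, compute_uv=False, lapack_driver='gesvd')

# On a well-behaved matrix, both drivers agree on the singular values
print(numpy.allclose(s_dd, s_vd))
```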
*Milestone: v2.0*

## Issue #170: mincllr calibration code crashing with list index out of range
*Created by khoury · https://gitlab.idiap.ch/bob/bob/-/issues/170 · updated 2019-04-19*
In the file python/bob/measure/calibration.py, when the index `p` into the list `pos` reaches the value `P` (the length of `pos`), the conditional test
```python
if n == N or neg[n] > pos[p]:
```
will crash as follows:
```python
Traceback (most recent call last):
...
min_cllr = bob.measure.calibration.min_cllr(scores_dev[i][0], scores_dev[i][1])
File "/usr/lib/python2.7/site-packages/bob/measure/calibration.py", line 51, in min_cllr
if (n == N or neg[n] > pos[p]):
IndexError: list index out of range
```
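The failure mode can be reproduced in isolation with a hypothetical sweep of the same shape as the one in min_cllr (the variable names mirror the report, not the actual bob code):

```python
neg, pos = [0.1, 0.2], [0.3]            # toy score lists
N, P = len(neg), len(pos)
n = p = 0
while n < N or p < P:
    # Checking `p < P` first short-circuits before pos[p] is evaluated
    # once p has reached P, which is the crash described above
    if p < P and (n == N or neg[n] > pos[p]):
        p += 1
    else:
        n += 1
print(n, p)  # 2 1
```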
A solution seems to be:
```python
if not (p == P) and (n == N or neg[n] > pos[p]):
```

*Milestone: v2.0*

## Issue #169: bob.core.random has many classes for different data types
*Created by siebenkopf · https://gitlab.idiap.ch/bob/bob/-/issues/169 · updated 2016-08-04*
Having a look at the bob.core.random module, I can find several bindings of classes for different data types. Instead of having all these classes, I would suggest two solutions:
1. We have one class for each distribution type and a dtype-like parameter for the constructor.
2. We have only one class *overall*, having the dtype and the distribution type as parameters.
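Solution 1 might look something like this (a purely hypothetical sketch; the class name and constructor signature are invented for illustration and do not exist in bob):

```python
import numpy

class NormalDistribution:
    """One class per distribution; the data type is a constructor parameter."""

    def __init__(self, mean=0.0, sigma=1.0, dtype='float64'):
        self.mean = mean
        self.sigma = sigma
        self.dtype = numpy.dtype(dtype)

    def __call__(self, rng):
        # Draw in double precision, then cast to the requested dtype
        return self.dtype.type(rng.normal(self.mean, self.sigma))

rng = numpy.random.default_rng(0)
sample = NormalDistribution(dtype='float32')(rng)
print(sample.dtype)  # float32
```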
Either of the solutions will break the API, but I think we should avoid these data-type-specific classes and functions. In C++, these classes are templated anyway...

*Milestone: v2.0*

## Issue #167: PLDA machine save and load problem
*Created by zongyuange · https://gitlab.idiap.ch/bob/bob/-/issues/167 · updated 2019-04-19*
Hi Laurent,

It is me again. I have found an issue with the PLDA machine. After a bob.machine.PLDAMachine is trained, I saved it with

```python
save_file = bob.io.HDF5File('/home/test.hdf5', 'w')
plda_machine.save(save_file)
```

When I try to load it again with

```python
file = bob.io.HDF5File('/home/test.hdf5', 'r')
plda = bob.machine.PLDAMachine()
plda.load(file)
```

typing the command `plda.dim_d` causes a segmentation fault. You can try it with any example.

Thanks,
Regards,
ZongYuan
*Milestone: v2.0*

## Issue #163: ML_GMMTrainer and MAP_GMMTrainer documentation do not show defaults
*Created by anjos · https://gitlab.idiap.ch/bob/bob/-/issues/163 · updated 2019-07-16*
This should be an easy fix on the bindings. Please make sure to correctly place the defaults for the input parameters there. As of today, these are not displayed:
http://www.idiap.ch/software/bob/docs/nightlies/last/bob/sphinx/html/trainer/generated/bob.trainer.ML_GMMTrainer.html?highlight=gmmtrainer#bob.trainer.ML_GMMTrainer
http://www.idiap.ch/software/bob/docs/nightlies/last/bob/sphinx/html/trainer/generated/bob.trainer.MAP_GMMTrainer.html?highlight=gmmtrainer#bob.trainer.MAP_GMMTrainer
Furthermore, it would be interesting to document somewhere the defaults of the inherited classes as well, such as, for example, the EMTrainer.max_iterations parameter. Finding that default (which is 10) can be a daunting quest for any person.

*Milestone: v2.0*

## Issue #161: IP Scaling functionality far from optimal
*Created by anjos · https://gitlab.idiap.ch/bob/bob/-/issues/161 · updated 2019-07-16*
The documentation and workflow of `bob.ip.scale` and `bob.ip.scale_as` are pretty bad as they currently stand.
1. If I read the doc of `scale_as`, I get the impression it will scale the input image, but actually it just generates a container for one.
2. The doc of `scale` does not specify the types of inputs it can handle, nor the type it outputs.
3. There is no easy way to scale an image and return a freshly allocated container. I currently have to pass through `scale_as` to get a container and only then can I call `scale`.

*Milestone: v2.0*

## Issue #160: Build should fail if Blitz++ <0.10 is detected
*Created by anjos · https://gitlab.idiap.ch/bob/bob/-/issues/160 · updated 2016-08-04*
There seems to be no guard against an older version of Blitz++ being installed on the system. This can be easily fixed by a pkg-config check.

*Milestone: v2.0*

## Issue #159: PNG codec does not support image with indexed color
*Created by matthiass2 · https://gitlab.idiap.ch/bob/bob/-/issues/159 · updated 2019-04-19*
Given the attached PNG image with indexed color, the bob codec can't read it when using bob 1.2.0:
![image](https://f.cloud.github.com/assets/5189100/960468/362936be-04b8-11e3-8c57-18b69b4ee351.png)
```python
import bob
img=bob.io.load('image.png')
```
The error message is:
"RuntimeError: png codec does not support images with color spaces different than GRAY or RGB"v2.0https://gitlab.idiap.ch/bob/bob/-/issues/158The train() method of MAP_GMMTrainer segfaults when the prior is not set2016-08-04T09:31:53ZAndré AnjosThe train() method of MAP_GMMTrainer segfaults when the prior is not set*Created by: laurentes*
As reported on the mailing list, the train() method of the MAP_GMMTrainer causes a segmentation fault when the prior GMM distribution is not set. We need to add a check and raise a runtime exception if this situation occurs.

*Milestone: v2.0*

## Issue #157: Python 3 support
*Created by anjos · https://gitlab.idiap.ch/bob/bob/-/issues/157 · updated 2016-08-04*
Python 3 is already on version 3.4 (an alpha was just released). It is available on MacPorts. It is *already* the default python in ArchLinux. [It will be the default one in Ubuntu as of 14.04](https://wiki.ubuntu.com/Python/3) (that is April/2014).
We should slowly try to get the code compatible and ported. Unfortunately, Python 3 is not fully backward compatible with Python 2, so let's collect overall porting guidelines in this bug report. The idea is that, as much as possible, we keep the code in such a way that it is **valid in both Python 2 and Python 3**. In cases where that is not possible, we may have to temporarily (until there is only Python 3) introduce `if` switches.
This work will start with Bob, but should soon propagate to the satellite packages. All help is welcome.
[Here is a quick guide of changes to get you started](http://docs.pythonsprints.com/python3_porting/py-porting.html).
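As a small taste of the changes involved, here are a couple of patterns that are valid in both Python 2.7 and Python 3 (illustrative only, not taken from the bob codebase):

```python
from __future__ import print_function, division

# print is a function and / is true division on both 2.7 and 3.x
print(7 / 2)    # 3.5
print(7 // 2)   # 3

# Be explicit about bytes vs. text instead of relying on implicit coercion
data = b"raw bytes"
text = data.decode("utf-8")
print(text)     # raw bytes
```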
I'll create an externals environment and post instructions for compiling against Python 3 at Idiap soon.

*Milestone: v2.0*

## Issue #156: Build should fail if pkg-config is not installed
*Created by anjos · https://gitlab.idiap.ch/bob/bob/-/issues/156 · updated 2019-04-19*
The build should fail if CMake cannot find pkg-config:
```
-- Bob version '1.2.0' (macosx-x86_64-release)
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE
```

*Milestone: v2.0*