# bob issues — https://gitlab.idiap.ch/bob/bob/-/issues

## Issue #34 — inconsistent function naming convention
https://gitlab.idiap.ch/bob/bob/-/issues/34 · 2016-08-04 · André Anjos
*Created by: siebenkopf*
Currently, Python functions are named in inconsistent ways: some use camelCase, whereas others use under_score_names. Since the default Python convention is under_scores, we should rename the camelCase functions in the API.

Milestone: v1.0

## Issue #30 — Order of the eigenvalues with FisherLDA
https://gitlab.idiap.ch/bob/bob/-/issues/30 · 2016-08-04 · André Anjos
*Created by: laurentes*
Following the introduction of the doctest, it seems that the behaviour of our FisherLDA implementation is currently platform dependent: the order of the returned eigenvalues and eigenvectors differs from one platform to another. This might be caused by differing LAPACK implementations. We need to fix the problem ASAP.

Milestone: v1.0

## Issue #29 — bob.ip.Rotate needs a review
https://gitlab.idiap.ch/bob/bob/-/issues/29 · 2016-08-04 · André Anjos
*Created by: siebenkopf*
The bob.ip.Rotate class needs a major revision. First, it expects the rotation angle in degrees (which is rather uncommon), but this is documented nowhere. Second, the resulting rotated image is assumed to be of type float, even when the input image is uint8; I am not sure whether this behavior is intended.
Furthermore, its function bob.ip.Rotate.getOutputShape(image, angle) does not use the angle member variable, but requires the angle to be specified again. There are two solutions for this last problem: either the class is removed and only the two functions bob.ip.rotate(image, image, angle) and bob.ip.getRotatedShape(image, angle) are kept, or the getRotatedShape(image) function uses the angle member variable. I vote for the first solution, since it would mimic the bob.ip.scale(image, image, scale) function.
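For reference, the expected output shape depends only on the input shape and the angle; a minimal sketch (the helper name is hypothetical, this is not the bob.ip API) of what such a function could compute as a bounding box:

```python
import math

def rotated_shape(shape, angle):
    """Expected (height, width) of an image of shape `shape` rotated by
    `angle` degrees (hypothetical helper, not part of bob.ip)."""
    h, w = shape
    rad = math.radians(angle)  # the API takes degrees; the math needs radians
    c, s = abs(math.cos(rad)), abs(math.sin(rad))
    # Bounding box of the rotated rectangle; round() guards against
    # floating-point jitter at exact multiples of 90 degrees.
    return (int(math.ceil(round(h * c + w * s, 6))),
            int(math.ceil(round(w * c + h * s, 6))))

print(rotated_shape((100, 200), 90))  # (200, 100): a 90-degree turn swaps axes
```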
Milestone: v1.0

## Issue #25 — cblas, clapack and LAPACK
https://gitlab.idiap.ch/bob/bob/-/issues/25 · 2016-08-04 · André Anjos
*Created by: laurentes*
In the early years, we got rid of the usage of C functions from cblas and clapack, calling the Fortran functions from LAPACK directly. This is not reflected in external/cblasConfig.cmake, which still requires cblas.h and clapack.h to exist.

Milestone: v1.0

## Issue #18 — ZT Norm returns NaN when the standard deviation of the scores is equal to 0
https://gitlab.idiap.ch/bob/bob/-/issues/18 · 2016-08-04 · André Anjos
*Created by: laurentes*
The current ZT-Norm implementation doesn't check whether the standard deviation of the input scores (the B, C, and D score matrices) is 0 or not. This leads to NaN normalized scores. We need to introduce further checks to deal with the problem.

Milestone: v1.0

## Issue #17 — Database documentation should include expected filesystem mapping
https://gitlab.idiap.ch/bob/bob/-/issues/17 · 2016-08-04 · André Anjos
*Created by: anjos*
In order to make our current DB API and practices useful, we should precisely document the filesystem layout expected by each database in Bob.

Milestone: v1.0

## Issue #75 — IP module consolidation
https://gitlab.idiap.ch/bob/bob/-/issues/75 · 2016-08-04 · André Anjos
*Created by: laurentes*
As discussed on the mailing list previously, there is a need to consolidate some parts of the code. In particular, the Image Processing (IP) module is one of the largest ones and contains many algorithm ports from late torch5spro and other sources.
I've started this consolidation procedure. However, it is quite a large amount of work.
This ticket aims at tracking the consolidation of the IP module, with remarks and TODOs. Feel free to add your feature requests.

Milestone: v1.1

## Issue #72 — Gaussian filter is using Variance/2 instead of Variance
https://gitlab.idiap.ch/bob/bob/-/issues/72 · 2016-08-04 · André Anjos
*Created by: laurentes*
There is an invalid parametrization in the Gaussian filter of the IP module. A factor 1/2 is missing in the argument of the exponential, which means that the variance given as a parameter of the filter actually corresponds to variance/2.
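The mismatch is easy to demonstrate numerically; a small NumPy sketch (not the actual Bob filter code) shows that the buggy kernel with variance v equals the correct kernel with variance v/2:

```python
import numpy as np

def gaussian_kernel(radius, variance, missing_half=False):
    """Normalized 1D Gaussian kernel; missing_half=True drops the 1/2 factor
    in the exponent, reproducing the reported bug (sketch, not Bob's code)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    scale = 1.0 if missing_half else 0.5  # correct exponent: -x**2 / (2 * variance)
    k = np.exp(-scale * x**2 / variance)
    return k / k.sum()

# The buggy kernel with variance 2.0 equals the correct kernel with variance 1.0
print(np.allclose(gaussian_kernel(5, 2.0, missing_half=True),
                  gaussian_kernel(5, 1.0)))  # True
```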
This can be easily fixed, but the main issue is that this class is used by the TanTriggs preprocessing. This means that if the fix is applied, the user SHOULD divide the provided variance by 2 to keep getting the same results. Therefore, I don't know what the most suitable option would be.

Milestone: v1.1

## Issue #54 — LBPTopOperator is untested and seems to be buggy
https://gitlab.idiap.ch/bob/bob/-/issues/54 · 2016-08-04 · André Anjos
*Created by: laurentes*
The current LBPTopOperator implementation is a port of some older torch5spro code. Unfortunately, it suffers from the following problems (non-exhaustive list):
- No unit test!
- Does not seem to work!
- Poor python bindings (no helper functions to get information about the expected size of the output)
- Generic C++ core exceptions rather than specialized ones
- Few checks on the input/output array sizes are performed
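As an example of the missing helper functions, here is a sketch of an output-shape helper (the name, signature, and the exact shrinkage rule are assumptions for illustration, not the existing API):

```python
def lbptop_output_shape(video_shape, rt=1, ry=1, rx=1):
    """Expected shape of each LBP-TOP plane for a (frames, height, width) input.

    Hypothetical helper: rt/ry/rx are the temporal and spatial LBP radii; each
    plane is assumed to shrink by twice the corresponding radius, since the
    border pixels cannot be coded.
    """
    t, h, w = video_shape
    return (t - 2 * rt, h - 2 * ry, w - 2 * rx)

print(lbptop_output_shape((10, 64, 48)))  # (8, 62, 46)
```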
This requires some serious revision.

Milestone: v1.1

## Issue #52 — IP Block - Get output shape - Two different functions for similar purpose
https://gitlab.idiap.ch/bob/bob/-/issues/52 · 2016-08-04 · André Anjos
*Created by: laurentes*
I've just noticed that there are two different functions for a similar purpose in the IP block decomposition code: to get the expected number of blocks given an input array and some parameters, both `getBlockShape()` and `getNBlocks()` can be used.
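For reference, a minimal sketch (hypothetical function, not the existing C++ code) of the computation both functions perform:

```python
def n_blocks(image_shape, block_shape, overlap=(0, 0)):
    """Number of (possibly overlapping) blocks fitting an image, per axis.

    Hypothetical reimplementation of what getBlockShape()/getNBlocks() compute:
    with step = block - overlap, count the full blocks along each axis.
    """
    counts = []
    for size, block, ov in zip(image_shape, block_shape, overlap):
        step = block - ov
        counts.append((size - ov) // step)  # == (size - block) // step + 1
    return tuple(counts)

print(n_blocks((64, 64), (8, 8)))          # (8, 8): non-overlapping tiling
print(n_blocks((64, 64), (8, 8), (4, 4)))  # (15, 15): 50% overlap
```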
For the next release, we should remove one of them and update the bits which rely on these functions accordingly (at the C++ level, `DCTFeatures` and `LBPHSFeatures`).

Milestone: v1.1

## Issue #235 — bob verification databases do not use the `original_directory` and `original_extension` parameters
https://gitlab.idiap.ch/bob/bob/-/issues/235 · 2017-08-07 · Manuel Günther (siebenkopf@googlemail.com)

Sorry that I saw this so late, after the new database packages have been published already.
I think that, during the reimplementation of the databases, something got lost. The old `bob.db.verification.database.Database` interface accepted at least two parameters, `original_directory` and `original_extension`, and there was a method called `original_file_names` which used them.
Now, this functionality seems to be completely lost. For example, `bob.db.mobio` has no way of getting the original file names, i.e., the `original_directory` and `original_extension` are not stored in the database anymore. On the other hand, you can still specify these parameters in the constructor:
https://gitlab.idiap.ch/bob/bob.db.mobio/blob/master/bob/db/mobio/query.py#L40
but they are not used anywhere in the code.
I know that most of this functionality was moved to `bob.bio.base.database.BioDatabase`. Hence, I see two different ways of handling this:
> 1. Leave the implementation in `bob.bio.base` and remove the unused keywords in the `bob.db` Database constructors. In this way, the `bob.db` databases do not have the capability to query their original data files.
> 2. Move the functionality of the old `bob.db.verification.utils.Database` into `bob.db.base` (and remove it from `bob.bio.base`). In this way, the databases themselves know their original data.
In a similar manner, the `annotations` functions inside the databases are handled arbitrarily. When annotations are read from file (for example in `bob.db.mobio`), an implementation is provided in `bob.bio.base.database.BioDatabase`: https://gitlab.idiap.ch/bob/bob.bio.base/blob/master/bob/bio/base/database/database.py#L265, as well as in `bob.db.mobio`: https://gitlab.idiap.ch/bob/bob.db.mobio/blob/master/bob/db/mobio/query.py#L602, both of which use the same basic functionality: https://gitlab.idiap.ch/bob/bob.db.base/blob/master/bob/db/base/annotations.py#L35
Hence, to be consistent with option 1. above, we would probably want to *remove* this functionality from `bob.db.mobio`. In fact, in `bob.bio.face`, the `annotations` functionality inside `bob.db.mobio` is not used at all.
On the other hand, there are databases which store the annotations internally, such as `bob.db.gbu`: https://gitlab.idiap.ch/bob/bob.db.gbu/blob/master/bob/db/gbu/models.py#L51 Hence, for these databases, the `bob.bio.base.database.BioDatabase.annotations` function (https://gitlab.idiap.ch/bob/bob.bio.base/blob/master/bob/bio/base/database/database.py#L265) needs to be overridden in order to use the annotations from those databases. However, I cannot see this happening, e.g., in `bob.bio.face.database.GBUBioDatabase`: https://gitlab.idiap.ch/bob/bob.bio.face/blob/master/bob/bio/face/database/gbu.py#L16
Hence, for these databases there is currently **no way** to obtain the annotations from the original `bob.db` databases. Again, there are two solutions:
> A. Provide a default implementation for these cases in `bob.bio.base.database.BioDatabase.annotations`, i.e., by checking if the low-level database has an `annotations` function.
> B. Provide these implementations in all derived classes from `BioDatabase`, where the low-level database has annotations stored internally.
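Option A could be as small as the following sketch (all names are hypothetical illustrations, not the current `bob.bio.base` code): prefer the low-level database's own `annotations` method when it exists, otherwise fall back to reading annotation files.

```python
def annotations_with_fallback(low_level_db, f, read_annotation_file):
    """Option A as a sketch (hypothetical names, not the bob.bio.base code):
    use the low-level database's own `annotations` method when present,
    otherwise fall back to reading annotation files."""
    low_level = getattr(low_level_db, "annotations", None)
    if callable(low_level):
        return low_level(f)            # e.g. bob.db.gbu stores annotations internally
    return read_annotation_file(f)     # e.g. bob.db.mobio-style annotation files

class FakeGBU:
    # stands in for a low-level database with internally stored annotations
    def annotations(self, f):
        return {"reye": (10, 20), "leye": (10, 40)}

print(annotations_with_fallback(FakeGBU(), "img_001", lambda f: {}))
# {'reye': (10, 20), 'leye': (10, 40)}
```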
I can check which of the `bob.db` databases are affected and open issues there accordingly. But first, we have to decide which way to go. I personally would vote for options `1.` and `A.`, as they would require the least modifications. But I can also see the benefits of options `2.` and `B.`, which require more work: `2.` would add more information to the low-level `bob.db` databases, and `B.` would be cleaner.
@amohammadi @andre.anjos @tiago.pereira @sebastien.marcel What is your opinion? Did I miss something here? Is `bob.db.gbu` (and others) really currently not working?

Milestone: May 2017 Hackathon

## Issue #184 — bob.sp.Quantization has weird border handling
https://gitlab.idiap.ch/bob/bob/-/issues/184 · 2015-08-18 · André Anjos
*Created by: siebenkopf*
By chance, I had a look at the ``bob.sp.Quantization`` class. It seems that this class has several issues, especially in border cases:
1. the ``__call__`` function returns 0 in two cases: when the element is in the first range, **or** when the element is below the lowest threshold
2. the ``__call__`` function returns the highest index in two cases: when the element is in the last range, **or** when the element is above the highest threshold
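The two border cases above can be reproduced with a plain NumPy sketch (not the bob.sp implementation): with only 4 thresholds for 4 ranges, the last range has no upper bound and the ambiguous cases appear.

```python
import numpy as np

# 4 ranges would properly need 5 thresholds; storing only 4 leaves the last
# range unbounded, as described in the list above.
thresholds = np.array([0.0, 1.0, 2.0, 3.0])

def quantize(x):
    # values below the lowest threshold clip to index 0 (case 1); values in
    # the unbounded last range and values "above the highest threshold" both
    # map to the highest index (case 2)
    return int(np.clip(np.digitize(x, thresholds) - 1, 0, len(thresholds) - 1))

print(quantize(-5.0), quantize(0.5))   # 0 0  -> case 1: indistinguishable
print(quantize(3.5), quantize(100.0))  # 3 3  -> case 2: indistinguishable
```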
In fact, the two cases in point (2) cannot even be distinguished in the C++ implementation of the function, since the highest threshold is not even stored in the range of thresholds. Usually, when there are 4 ranges, 5 thresholds are required, but this class holds only 4.

## Issue #183 — -DWITH_PERFTOOLS option does not work
https://gitlab.idiap.ch/bob/bob/-/issues/183 · 2019-04-19 · André Anjos
*Created by: laurentes*
It seems that this option does not work anymore on the master branch. I don't know yet if this also affects the 1.2 branch.
The problem seems to be caused by the use of WITH_PERFTOOLS as a C-like defined variable, whereas it is initially a cmake variable.
The easiest solution is to perform the inclusion check at the cmake level rather than in the C preprocessor. A good example is what was done for libsvm.

Milestone: v2.0

## Issue #179 — No support of log determinant
https://gitlab.idiap.ch/bob/bob/-/issues/179 · 2014-01-04 · André Anjos
*Created by: laurentes*
As said [here](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html), determinant computation is subject to underflow/overflow.
When computing the log determinant, this can be avoided by working directly in the log domain. In the Python universe, we may rely on numpy.linalg.slogdet(). We should consider providing such a function in the C++ universe.
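A quick NumPy illustration of the underflow and the log-domain remedy:

```python
import numpy as np

# det() underflows for matrices with many small eigenvalues, while slogdet()
# works in the log domain: for 0.01 * I_200 the true determinant is 1e-400,
# far below the smallest positive double.
a = 0.01 * np.eye(200)
print(np.linalg.det(a))  # 0.0 -- underflowed
sign, logdet = np.linalg.slogdet(a)
print(sign, logdet)      # 1.0 and 200 * log(0.01), about -921.03
```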
Currently, the PLDAMachine class may be subject to underflow/overflow.

Milestone: v2.0

## Issue #178 — ROC and DET plots have wrong axis
https://gitlab.idiap.ch/bob/bob/-/issues/178 · 2016-08-04 · André Anjos
*Created by: siebenkopf*
I always found the way ROC and DET plots are drawn by ``bob.measure.plot.roc`` and ``bob.measure.plot.det`` wrong. For some reason, someone decided to plot the FRR on the abscissa and the FAR on the ordinate. I have never seen plots like this before; the "Handbook of Biometrics" shows ROC and DET plots with the FAR on the abscissa (page 9), and the Wikipedia pages show DET and ROC plots this way too.
Normally, I would not consider the way the ROC is plotted a bug. But in fact, I was using the function ``bob.measure.det`` and blindly expected to get FAR and FRR in this order. Unfortunately, this was not the case, which led my plots to be swapped (FAR was FRR and vice versa).
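For illustration, a small sketch (not the bob.measure implementation) computing the two rates in the order a caller would naturally expect:

```python
import numpy as np

def far_frr(negatives, positives, threshold):
    """FAR and FRR at a given threshold, returned in (FAR, FRR) order -- the
    order argued for above (a sketch, not the bob.measure implementation)."""
    far = float(np.mean(np.asarray(negatives) >= threshold))  # impostors accepted
    frr = float(np.mean(np.asarray(positives) < threshold))   # genuines rejected
    return far, frr

print(far_frr([0.1, 0.2, 0.6], [0.5, 0.8, 0.9], 0.5))  # (0.3333333333333333, 0.0)
```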
To conform with the god of Biometrics, we should change both:
- the axes of the ROC / DET plots
- the order in which ``bob.measure.roc`` and ``bob.measure.det`` return their results.

## Issue #176 — Shift to a non-GPL library for FFT/DCT computation
https://gitlab.idiap.ch/bob/bob/-/issues/176 · 2016-08-04 · André Anjos
*Created by: laurentes*
Bob (1.2.x) currently relies on FFTW for FFT/DCT computation. FFTW has a GPL license. We are now considering turning the license of Bob from GPL to BSD. This would imply that we should no longer link against GPL libraries; FFTW is the only GPL dependency that we have.
We are hence looking for alternatives to FFTW. There are already naive implementations of DFT/DCT in Bob, which are used for testing purposes, but they are really slow for large arrays. We are therefore looking for more optimized source code. I have performed a few tests relying on two different BSD-like FFT libraries:
1. Kiss FFT (C and C++ implementations): I was not able to make the C implementation work with 'double' instead of the default 'float'; it just produces wrong outputs, and the documentation is quite poor. The C++ implementation works with 'double', but it only supports 1D FFT (no nD FFT or DCT computation). However, I still had to tweak/fix the code to make it compatible with all the platforms we support.
2. NumPy's FFT implementation (C code based on the former FFTPACK Fortran implementation; note that SciPy also provides a different FFT implementation based on the original FFTPACK Fortran code). NumPy's implementation only provides FFT (not DCT). The code is much larger than that of Kiss FFT (2k lines vs 200 lines), but is probably more reliable, since it has been used for several years by this widely deployed library.
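Whichever backend is chosen, a round-trip sanity check of the kind our test suite should keep is easy to express with NumPy's current FFT:

```python
import numpy as np

# Sanity check we would want from any replacement backend: an FFT -> inverse
# FFT round trip must reproduce the input to machine precision.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
roundtrip = np.fft.ifft(np.fft.fft(x)).real
print(np.allclose(roundtrip, x))  # True
```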
For both solutions, we won't add a new dependency; instead we would just cannibalize the FFT source code into Bob's central repository (there are no Ubuntu/OS X packages for Kiss FFT anyway). I have pushed FFT/DCT implementations relying on both libraries (separately) to the master branch, to keep track of all the tests I did. Solution 2 is my favourite so far. If we go for it, I will just remove the FFTW and Kiss FFT-based implementations and rename the FFT1DNumpy (2D/DCT, etc.) classes into FFT1D. The documentation should then be carefully updated. As the underlying implementation will be different, this may slightly affect the outputs/features/results generated with FFTW.

Milestone: v2.0

## Issue #174 — bob.ip.draw_... methods take arguments in wrong order
https://gitlab.idiap.ch/bob/bob/-/issues/174 · 2013-11-15 · André Anjos
*Created by: siebenkopf*
Usually, points in Bob are given in (y, x) order, and functions always take (y, x) as arguments (in this order). Having a look at the documentation of the bob.ip.draw_point (and similar) functions, they take arguments in (x, y) order.
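For comparison, a minimal sketch (hypothetical, not the bob.ip binding) of the consistent (y, x) convention:

```python
import numpy as np

def draw_point(image, y, x, value=255):
    """Set one pixel taking coordinates in Bob's usual (y, x) order
    (a sketch of the desired convention, not the bob.ip implementation)."""
    image[y, x] = value  # NumPy indexing is already (row, column) == (y, x)
    return image

img = draw_point(np.zeros((4, 6), dtype=np.uint8), y=1, x=3)
print(img[1, 3], img[3, 1])  # 255 0
```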
A fix of this would be nice, to get a consistent order of arguments throughout Bob.

Milestone: v2.0

## Issue #169 — bob.core.random has many classes for different data types
https://gitlab.idiap.ch/bob/bob/-/issues/169 · 2016-08-04 · André Anjos
*Created by: siebenkopf*
Having a look at the bob.core.random module, I can find several bindings of classes for different data types. Instead of having all these classes, I would suggest two solutions:
1. We have one class for each distribution type and a dtype-like parameter for the constructor.
2. We have only one class *overall*, having the dtype and the distribution type as parameters.
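Solution 1 could look like the following sketch (hypothetical API modeled on NumPy, not the actual bob.core.random bindings):

```python
import numpy as np

class UniformGenerator:
    """Sketch of solution 1 (hypothetical API, not the actual bob.core.random
    bindings): one class per distribution, with the element type chosen by a
    dtype parameter instead of one bound class per data type."""

    def __init__(self, low, high, dtype=np.float64, seed=None):
        self._rng = np.random.default_rng(seed)
        self._low, self._high = low, high
        self._dtype = np.dtype(dtype)

    def __call__(self, shape=None):
        if np.issubdtype(self._dtype, np.integer):
            # integer dtypes sample from [low, high) directly in that type
            return self._rng.integers(self._low, self._high, size=shape,
                                      dtype=self._dtype)
        vals = np.asarray(self._rng.uniform(self._low, self._high, size=shape))
        return vals.astype(self._dtype)

print(UniformGenerator(0, 10, dtype=np.int32)(shape=5).dtype)  # int32
```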
Either of the solutions will break the API, but I think we should avoid these data-type-specific classes and functions. In C++, these classes are templated anyway...

Milestone: v2.0

## Issue #154 — bob.k and others are fun but unexpected
https://gitlab.idiap.ch/bob/bob/-/issues/154 · 2013-07-23 · André Anjos
*Created by: khoury*
bob.k, bob.core.k, bob.core.random.k, bob.io.k, bob.ip.k, bob.sp.k, bob.measure.k and others are caused by the lines:

    __all__ = [k for k in dir() if not k.startswith('_')]

(in Python 2, the comprehension variable `k` leaks into the module namespace). This should likely be replaced by:

    __all__ = dir()

## Issue #151 — Making python bindings more consistent when using blitz arrays and std::vector of blitz arrays
https://gitlab.idiap.ch/bob/bob/-/issues/151 · 2016-08-04 · André Anjos
*Created by: laurentes*
Our current python bindings that rely on C++ methods/functions taking blitz arrays as arguments are quite heterogeneous. Ideally, we should follow this pattern:
Given a class:
```c++
class Myclass {
public:
void setW(const blitz::Array<double,1>& w) { m_w = w; }
const blitz::Array<double,1>& getW() { return m_w; }
private:
blitz::Array<double,1> m_w;
};
```
The python binding could be done as follows:
```c++
static void py_setW(bob::Myclass& m, bob::python::const_ndarray w) {
  m.setW(w.bz<double,1>());
}
class_<bob::Myclass, boost::shared_ptr<bob::Myclass> >("Myclass",
"This class implements ...", init<>())
  .add_property("w", make_function(&bob::Myclass::getW, return_value_policy<copy_const_reference>()), &py_setW, "Parameters for ...")
```
For the getter, this will make a copy of the array and cast it into a NumPy array.
For the setter, this allows various types (NumPy array, Python list) to be supported, and exceptions are managed by the bz<>() method.
For std::vector of blitz::Array's, the following could be done:
Given a class:
```c++
class Myclass {
public:
void setW(const std::vector<blitz::Array<double,1> >& w) { m_w = ... }
const std::vector<blitz::Array<double,1> >& getW() { return m_w; }
private:
std::vector<blitz::Array<double,1> > m_w;
};
```
The python binding could be done as follows:
```c++
static void py_setW(bob::Myclass& m, object w) {
stl_input_iterator<bob::python::const_ndarray> dbegin(w), dend;
std::vector<bob::python::const_ndarray> wdata(dbegin, dend);
std::vector<blitz::Array<double,1> > wb;
for(size_t i=0; i<wdata.size(); ++i)
wb.push_back(wdata[i].bz<double,1>());
  m.setW(wb);
}
static object py_getW(bob::Myclass& m) {
boost::python::list l;
  const std::vector<blitz::Array<double,1> >& w = m.getW();
for(size_t i=0; i<w.size(); ++i)
l.append(w[i]);
return boost::python::tuple(l);
}
class_<bob::Myclass, boost::shared_ptr<bob::Myclass> >("Myclass",
"This class implements ...", init<>())
  .add_property("w", &py_getW, &py_setW, "Parameters for ...")
```
This way, the setter allows heterogeneous python types (NumPy array, python list) and the getter relies on the copy automagically done from blitz to NumPy.