bob issues (https://gitlab.idiap.ch/groups/bob/-/issues)

bob.pad.face#36: Issue with docs (Anjith GEORGE, 2020-10-08)
https://gitlab.idiap.ch/bob/bob.pad.face/-/issues/36

The docs say there is a script `evaluate.py`, which I couldn't find:
`https://gitlab.idiap.ch/bob/bob.pad.face/-/blob/master/doc/baselines.rst#L122`

bob.measure#63: bob measure pure python (Tiago de Freitas Pereira, 2021-06-10)
https://gitlab.idiap.ch/bob/bob.measure/-/issues/63

Hi @bob,
Following our renovation efforts for Bob, shall we make an effort to port this package to be pure python?
The benefits would be:
- Pure Python is more convenient (not platform-dependent) than a compiled package
- More readable code; hence, more people would be willing to contribute
- We would get rid of the blitz dependency (which will die at some point)
The drawbacks:
- We would lose the C++ API (does anyone need that?)
- Extra work
The functions that would need to be ported are listed below.
- bob::measure::farfrr
- bob::measure::precision_recall
- bob::measure::f_score
- bob::measure::correctlyClassifiedPositives
- bob::measure::correctlyClassifiedNegatives
- bob::measure::minimizingThreshold
- bob::measure::eerThreshold
- bob::measure::eerRocch
- bob::measure::minWeightedErrorRateThreshold
- bob::measure::minHterThreshold
- bob::measure::farThreshold
- bob::measure::frrThreshold
- bob::measure::log_values
- bob::measure::meaningfulThresholds
- bob::measure::roc
- bob::measure::precision_recall_curve
- bob::measure::rocch
- bob::measure::rocch2eer
- bob::measure::roc_for_far
- bob::measure::ppndf
- bob::measure::det
- bob::measure::epc
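As a sketch of what the pure-Python port could look like, here is a possible numpy version of the first function in the list. This is an assumption about the semantics, not the reference implementation: the exact edge-case handling of the C++ `bob::measure::farfrr` (e.g. empty score arrays) is not reproduced.

```python
import numpy as np

def farfrr(negatives, positives, threshold):
    """Possible pure-numpy port of bob::measure::farfrr: the false
    acceptance rate (FAR) is the fraction of negative scores at or
    above the threshold, the false rejection rate (FRR) the fraction
    of positive scores below it."""
    negatives = np.asarray(negatives, dtype=float)
    positives = np.asarray(positives, dtype=float)
    far = np.count_nonzero(negatives >= threshold) / len(negatives)
    frr = np.count_nonzero(positives < threshold) / len(positives)
    return far, frr
```

Most of the remaining functions are thin layers on top of this kind of counting, so the port should mostly be numpy bookkeeping.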
Thanks

bob.ip.stereo#2: Inpainting has different output based on computer OS (Vincent POLLET, 2020-08-03)
https://gitlab.idiap.ch/bob/bob.ip.stereo/-/issues/2

Job [#204940](https://gitlab.idiap.ch/bob/bob.ip.stereo/-/jobs/204940) failed for aa03ba735badaad5f13e062e5074584b66fadf34:
When the `StereoParameter` has `inpaint = True` like [here](https://gitlab.idiap.ch/bob/bob.ip.stereo/-/blob/debug_inpaint/bob/ip/stereo/test/test.py#L49), the disparity is modified [here](https://gitlab.idiap.ch/bob/bob.ip.stereo/-/blob/master/bob/ip/stereo/stereo.py#L93) and the output of `cv.inpaint` is a little bit different depending on the OS the code is running on. The differences must be amplified later on because the final output image has pixels with rather different values, even though the images look very similar. This makes the test fail on mac runners in the CI.
@dgeissbuhler Have you observed this kind of behaviour from OpenCV functions before? Could we use another inpainting method?
The versions of OpenCV are the same on Linux and Mac; could the difference come from the underlying linear algebra libraries?
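One possible mitigation for the failing test (a sketch only, not a fix for the underlying OpenCV difference; the function name and tolerance values below are made up) is to compare the maps with a tolerance instead of exact equality:

```python
import numpy as np

def disparity_maps_close(a, b, atol=2.0, max_bad_fraction=0.01):
    """Hypothetical tolerant comparison: accept the maps as equal when
    at most `max_bad_fraction` of the pixels differ by more than `atol`."""
    diff = np.abs(a.astype(np.float64) - b.astype(np.float64))
    return float((diff > atol).mean()) <= max_bad_fraction
```

The tolerances would need tuning against the actual Linux/Mac difference maps attached below.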
## OpenCV versions on Linux and Mac
| package | version | build | channel |
|---|---|---|---|
| libopencv | 3.4.2 | hb342d67_1 | defaults |
| opencv | 3.4.2 | py37h6fd60c2_1 | defaults |
| py-opencv | 3.4.2 | py37hb342d67_1 | defaults |
# Output on linux
![rep_color_mac_lin_disparity](/uploads/6670692ca814ebc831f0c949d6f9e6bb/rep_color_mac_lin_disparity.png)
# Output on mac
![rep_color_mac_mac_disparity](/uploads/0d5f4c571914453921453c8d7d7815d1/rep_color_mac_mac_disparity.png)
# Difference between linux and mac
![output_diff](/uploads/0324b25dad47d75c5c0757e78aa53d55/output_diff.png)
[disparity_linux.npy](/uploads/26fb411a4ff963564ee35a3fbfa77456/disparity_lin.npy)
[disparity_mac.npy](/uploads/269cd455bb62c71cd45cd2b0c7a1ed01/disparity_mac.npy)
[output_linux.npy](/uploads/dc0d3aae2f5a1b23544d04d047c29a71/rep_color_mac_lin_disparity.npy)
[output_map.npy](/uploads/4a67dd48afdc1b20a8fa939fdb7a32e5/rep_color_mac_mac_disparity.npy)

bob.devtools#57: noarch packages are built and uploaded twice (Amir MOHAMMADI, 2020-07-30)
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/57

See: https://gitlab.idiap.ch/bob/bob.devtools/-/jobs/204868
noarch packages are built on both Linux and Mac, and the resulting packages are uploaded twice.

docs#8: docs requires changes here to build bob.ip.binseg properly (Amir MOHAMMADI, 2020-07-24)
https://gitlab.idiap.ch/bob/docs/-/issues/8

See https://gitlab.idiap.ch/bob/docs/-/jobs/204340
- There are `_templates` and `_static` folders in bob.ip.binseg
- There are these warnings:
```
bob.ip.binseg/doc/cli.rst:13: WARNING: Unknown directive type "command-output"
```

bob.devtools#56: python-gitlab in our beta channel (Amir MOHAMMADI, 2020-07-29)
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/56

I don't know why, but somehow we have the `python-gitlab` package in our beta channel:
![image](/uploads/677dd62b2f514d899aa579754c3544b3/image.png)
from https://www.idiap.ch/software/bob/conda/label/beta/linux-64/?C=N;O=A

bob.io.stream#3: warp transform changes dtype (Vincent POLLET, 2020-08-07)
https://gitlab.idiap.ch/bob/bob.io.stream/-/issues/3

Using the `warp` transform changes the dtype of the data to `float64`.
In particular, it happens during the `warp` stream `process_frame` [function](https://gitlab.idiap.ch/bob/bob.io.stream/-/blob/unit_tests/bob/io/stream/stream.py#L435)
This happens even though `preserve_range=True` is passed to `skimage.transform.warp`, which should preserve it according to the [doc](https://scikit-image.org/docs/dev/api/skimage.transform.html?highlight=warp#warp).
@dgeissbuhler Do you have an idea why this is happening?
Minimal example (unit_test branch):
```python
import bob.io.stream
import numpy as np
from pkg_resources import resource_filename
f = bob.io.stream.StreamFile(
    "test/data/input_example.h5",
    "config/idiap_face_streams.json",
    resource_filename("bob.ip.stereo", "config/idiap_face_calibration.json"),
)
color = f.stream("color")
warp_color = color.warp(color)
print(color[0].dtype)
print(warp_color[0].dtype)
```
Output:
`uint8`
`float64`
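Until the cause is found, one possible workaround is to cast the result back to the input dtype after warping. This is a sketch under an assumption: `preserve_range=True` keeps the values in the original range, so rounding and clipping back to `uint8` should be lossless up to rounding. The helper name is made up.

```python
import numpy as np

def restore_dtype(warped, original_dtype=np.uint8):
    """Round and clip the float64 warp output back to the input dtype
    (assumes preserve_range=True kept values in the original range)."""
    info = np.iinfo(original_dtype)
    return np.clip(np.rint(warped), info.min, info.max).astype(original_dtype)
```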
Thanks

bob.ip.qualitymeasure#5: Move bob.io packages to optional or test dependencies (Amir MOHAMMADI, 2020-07-13)
https://gitlab.idiap.ch/bob/bob.ip.qualitymeasure/-/issues/5

Right now, installing bob.ip.qualitymeasure also installs bob.io.video and bob.io.image. This is a shame because those packages are not necessarily required for this package to function.

bob.bio.vein#20: UTFVP database (Hatef OTROSHI, 2020-07-16)
https://gitlab.idiap.ch/bob/bob.bio.vein/-/issues/20

Hi there,
I want to run some experiments using the `utfvp` database, which is available in `bob.bio.vein`, with the following script:
```python
from bob.bio.vein.configurations.utfvp import database
```
However, I cannot use the `eval` and `world` groups for evaluation with `verify.py`:
- when using the `eval` group, it cannot be found
- when using the `world` group, it can calculate the scores, but the `scores-world` file is empty.
Does anyone have any idea how I should fix these issues?

bob.devtools#55: New release (Tiago de Freitas Pereira, 2020-07-02)
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/55

Hi,
I would like to release a new version of gridtk and for this, I need to release this one.
Can I do it?
Thanks

bob.devtools#54: undefined variable in build.py (Vincent POLLET, 2020-06-24)
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/54

Hi, the variable `path` is undefined [here](https://gitlab.idiap.ch/bob/bob.devtools/-/blob/master/bob/devtools/build.py#L651).
It must have been refactored during the recent work on `base_build`, but I could not find where exactly. In https://gitlab.idiap.ch/bob/bob.devtools/-/commit/7ead642b2393bfc750d80aecc2873956575aabb7 it was changed to `recipe_dir`.
It causes the pipeline of https://gitlab.idiap.ch/bob/bob.conda/-/merge_requests/444 to fail.
Thanks

bob.extension#75: rc subsystem is not very testable (André Anjos, 2020-07-22)
https://gitlab.idiap.ch/bob/bob.extension/-/issues/75

I'm trying to write tests for libraries that work with the RC subsystem.
One of the problems I'm facing is that it is difficult to mock: once `bob.extension` is imported, the contents of the RC file are loaded, and there seems to be no easy way to affect such a value in a cross-module way.
Ideally, what you'd like to do is something like this:
```python
@mock_bobs_rc('var1', 'override-value1')
def test():
    # now, code that uses bob.extension.rc will use the mock value
    # notice that the code is not necessarily here, but may be used several layers down
```
Can you think of an easy solution to this?

bob.bio.base#134: `check_existence` flag incorrectly handled in filelistdatabase query (Manuel Günther, 2020-06-03)
https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/134

In https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/3efccd3b637ee73ec68ed0ac5fde2667a943bd6e/bob/bio/base/database/filelist/query.py#L833, the `check_existence` flag is said to be ignored when multiple original extensions are specified, while it is actually not ignored.
Also, when only a single extension is specified, the `check_existence` flag is tested incorrectly.

bob.pipelines#18: SampleBatch design issues (Tiago de Freitas Pereira, 2020-07-22)
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/18

Hi,
Although `SampleBatch` brings convenience and efficiency, it forces us to develop transformers that are compatible with it.
Imagine the simple transformer below:
```python
class FakeTransformer(TransformerMixin, BaseEstimator):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X + 1

    def _more_tags(self):
        return {"stateless": True, "requires_fit": False}
```
I can easily use it with numpy arrays as input.
```python
transformer = FakeTransformer()
X = np.zeros(shape=(3, 160, 160))
transformed_X = transformer.transform(X)
```
However, I run into problems once I wrap it as a sample
```python
sample = Sample(X)
transformer_sample = wrap(["sample"], transformer)
my_beautiful_sample = [s.data for s in transformer_sample.transform([sample])]
# THIS DOESN'T WORK
```
With this wrap, the input `X` of `FakeTransformer.transform` will be a `SampleBatch` and not a numpy array.
Hence, I can't do `X + 1`.
I can approach this issue in my transformer by doing this:
```python
def transform(self, X):
    X = np.asarray(X)
    return X + 1
```
However, this is a blocker if we want to use estimators developed by other people outside of our circle.
Do you think it is sensible to have `X` wrapped as a `SampleBatch` once SampleTransform is used?
It breaks encapsulation.
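One possible compromise for third-party estimators (a sketch; the wrapper name is made up) is an adapter that coerces the input to a numpy array before delegating, so the wrapped estimator never sees a `SampleBatch`:

```python
import numpy as np


class ArrayInputWrapper:
    """Hypothetical adapter: converts whatever iterable arrives
    (e.g. a SampleBatch) to a numpy array, then delegates to the
    wrapped estimator untouched."""

    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y=None):
        self.estimator.fit(np.asarray(X), y)
        return self

    def transform(self, X):
        return self.estimator.transform(np.asarray(X))
```

This keeps the `np.asarray` call in one place instead of patching every external transformer.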
Thanks

bob.db.morph#3: Database.zprobes and Database.treferences fraction takes a fraction of... (Tiago de Freitas Pereira, 2020-12-22)
https://gitlab.idiap.ch/bob/bob.db.morph/-/issues/3

...absolute values.
This function should take a fraction from each cohort.

bob.db.morph#2: This dataset has wrong annotations (Tiago de Freitas Pereira, 2020-12-22)
https://gitlab.idiap.ch/bob/bob.db.morph/-/issues/2

Hey @ydayer,
I'm rearranging world, dev, and eval in this dataset.
Have you noticed that the metadata is inconsistent?
For instance, if you do:
```python
>>> dataframe[dataframe.id_num==286810]
id_num picture_num dob doa race gender age photo
37850 286810 1 04/04/1986 05/11/2006 B M 20 Album2/286810_01M20.JPG
37851 286810 2 04/04/1986 08/16/2006 A M 20 Album2/286810_02M20.JPG
37849 286810 0 04/04/1986 01/24/2006 H M 19 Album2/286810_00M19.JPG
```
```python
>>> dataframe[dataframe.id_num==295087]
id_num picture_num dob doa race gender age photo
39551 295087 0 05/18/1960 10/23/2006 A M 46 Album2/295087_00M46.JPG
39552 295087 1 05/18/1960 10/25/2006 H M 46 Album2/295087_01M46.JPG
```
```python
>>> dataframe[dataframe.id_num==328749]
id_num picture_num dob doa race gender age photo
50810 328749 0 07/28/1971 05/12/2006 W M 34 Album2/328749_00M34.JPG
50811 328749 1 07/28/1971 05/19/2007 A M 35 Album2/328749_01M35.JPG
```
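A quick way to surface these inconsistencies systematically might be something like this (a sketch, assuming a pandas dataframe with the columns shown above):

```python
import pandas as pd

def inconsistent_subjects(dataframe, column="race"):
    """Return the id_nums whose annotation in `column` is not constant
    across all photos of the same subject."""
    counts = dataframe.groupby("id_num")[column].nunique()
    return sorted(counts[counts > 1].index)
```

Running it over `race` (and perhaps `gender`) would give the full list of affected subjects instead of spot-checking ids by hand.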
There are several more examples.

bob.ip.gabor#5: JetStatistics divides by the wrong value (Manuel Günther, 2020-06-03)
https://gitlab.idiap.ch/bob/bob.ip.gabor/-/issues/5

In line https://gitlab.idiap.ch/bob/bob.ip.gabor/-/blob/94bd69147ca4450ab9f975efab5ee31bbf3edefd/bob/ip/gabor/cpp/JetStatistics.cpp#L145 we divide by `m_varAbs(j)`, but we would need to divide by `m_meanAbs(j)`.

bob.bio.face#36: outdated `baselines.py` script remained in setup.py (Manuel Günther, 2020-10-08)
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/36

See here: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/117ed4a764bac88c9e0d3c5b27c8c36d5cf87a61/setup.py#L106
When running `baselines.py`, I get the following error:
```
$ baselines.py -h
...
ModuleNotFoundError: No module named 'bob.bio.face.script.baselines'
```
(milestone: Bob 9.0.0)

bob.devtools#53: Implement hot-fix to repo indexing (André Anjos, 2020-07-29)
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/53

To fix several issues that we are having with our conda channel, I am going to
implement [this hotfixing mechanism](https://github.com/AnacondaRecipes/repodata-hotfixes)
done for the defaults channel for our channel as well.
This will:
1. remove the need to move broken packages to our archive channel. Instead, we will keep the package in the same place but remove it from the index.
2. allow us to fix broken packages in our channel, like fixing bob.bio.base to make sure it does not get installed with `numpy>=1.8`.
3. allow us to temporarily add packages (like mac versions of pytorch and torchvision) in our channel to fix our problems, and remove them from the channel index once the defaults channel catches up.
But to implement this, care is needed from @bob users when they export an environment.
In summary, you should avoid mixing `conda env export` and `conda list --export --explicit`.
These two commands are designed in conda with different goals, and you should not use them
for other purposes. I will explain what to do below:
# Reproducibility and Repeatability of publications (bob.paper packages)
You should use `conda list --export --explicit` or even `conda list --export --explicit --md5`
to export your environment for other users to replicate your environment.
```shell
# Save packages for future use:
conda list --export --explicit > package-list.txt
# or:
conda list --export --explicit --md5 > package-list.txt

# Reinstall packages from an export file:
conda create -n myenv --file package-list.txt
```
This method is, of course, not bulletproof, but it should work reliably.
If you use `conda env export`, the environment **will** most likely break in the future.
# Share current ongoing work/projects (bob.project packages)
Sometimes, you want to share a common conda environment between colleagues while a
project is continuing. You may even update this environment regularly.
For this purpose, you should use `conda env export` or even better,
create your `environment.yml` by hand. You may create environment files that work
both on Linux and mac.
```shell
# Create the environment file either by hand, or by:
conda env export --file=environment.yml

# Recreate the environment using:
conda env create --file=environment.yml
```
Expect this environment to become broken from time to time and it might need updates.
To avoid *some* breakage, do not pin the build strings, i.e.
instead of `bob.bio.base=4.1.0=py37h03d05df_0`, write `bob.bio.base=4.1.0`.
Also, you may want to list only your direct dependencies.
Of course, you can choose to export to both formats in any scenario.

bob.pipelines#17: Memory error during serialization of large objects (Tiago de Freitas Pereira, 2020-05-25)
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/17

This is an issue that I have been facing for a while.
Now that we are running our pipelines in large-scale experiments (several thousand images), the lists of SampleSets that we generate during `pipeline.transform` are getting big (>1GB), and this raises MemoryError exceptions during serialization (even when we have enough memory).
This is very annoying; basically, I can't work with large datasets.
I managed to generate a very simple example describing this issue here: https://github.com/dask/distributed/issues/3806
I know we can change the serializer `dask-distributed` uses (https://distributed.dask.org/en/latest/serialization.html#use), but I'm not sure if this is the real problem.
However, I would like to propose a workaround that will slow down the execution of experiments a bit, but at least the code will not crash.
I would like to change the serialization behavior of `DelayedSample` to this:
```python
class DelayedSample(_ReprMixin):
    def __init__(self, load, parent=None, **kwargs):
        self.load = load
        if parent is not None:
            _copy_attributes(self, parent.__dict__)
        _copy_attributes(self, kwargs)
        self._data = None

    @property
    def data(self):
        """Loads the data from the disk file."""
        if self._data is None:
            self._data = self.load()
        return self._data

    def __getstate__(self):
        self._data = None
        d = dict(self.__dict__)
        return d
```
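The effect of that `__getstate__` can be sketched with a minimal stand-in (assumptions: `make_data` replaces the real disk loader, and the mixin and attribute copying are omitted). Only the loader callable travels over the wire; the cached payload is dropped before pickling and lazily reloaded on the other side:

```python
import pickle


def make_data():
    # stands in for the real "load from disk" callable
    return list(range(1000))


class DelayedSample:
    def __init__(self, load):
        self.load = load
        self._data = None

    @property
    def data(self):
        if self._data is None:
            self._data = self.load()
        return self._data

    def __getstate__(self):
        # drop the cached payload so only the loader is serialized
        self._data = None
        return dict(self.__dict__)


sample = DelayedSample(make_data)
_ = sample.data                                 # populate the cache
restored = pickle.loads(pickle.dumps(sample))   # cache is dropped first
```

Note that, as written, pickling also clears the cache on the *original* object, so the data would be reloaded on the next `.data` access; that may be acceptable as the price of the workaround, but it is a side effect worth flagging.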
What do you think? ping @andre.anjos @amohammadi
ping @ydayer
thanks