bob issues (https://gitlab.idiap.ch/groups/bob/-/issues)

# `check_existence` flag incorrectly handled in filelist database query
https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/134 (2020-06-03, Manuel Günther)

In https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/3efccd3b637ee73ec68ed0ac5fde2667a943bd6e/bob/bio/base/database/filelist/query.py#L833, the `check_existence` flag is documented as being ignored when multiple original extensions are specified, while it is actually not ignored.
Also, when only a single extension is specified, the `check_existence` flag is tested incorrectly.

# SampleBatch design issues
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/18 (2020-07-22, Tiago de Freitas Pereira)

Hi,
Although `SampleBatch` brings convenience and efficiency, it forces us to develop transformers that are compatible with it.
Imagine the simple transformer below:
```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class FakeTransformer(TransformerMixin, BaseEstimator):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X + 1

    def _more_tags(self):
        return {"stateless": True, "requires_fit": False}
```
I can easily use it with numpy arrays as input.
```python
transformer = FakeTransformer()
X = np.zeros(shape=(3, 160, 160))
transformed_X = transformer.transform(X)
```
However, I run into problems once I wrap it as a sample
```python
from bob.pipelines import Sample, wrap

sample = Sample(X)
transformer_sample = wrap(["sample"], transformer)
my_beautiful_sample = [s.data for s in transformer_sample.transform([sample])]
# THIS DOESN'T WORK
```
With this wrap, the input `X` of `FakeTransformer.transform` will be `SampleBatch` and not numpy array.
Hence, I can't do `X+1`.
I can work around this issue in my transformer by doing this:
```python
def transform(self, X):
    X = np.asarray(X)  # collapse the SampleBatch into a plain ndarray
    return X + 1
```
However, this is a blocker if we want to use estimators developed by other people outside of our circle.
Do you think it is sensible to have `X` wrapped as a `SampleBatch` once SampleTransform is used?
It breaks encapsulation.
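One direction that would keep encapsulation is to have the wrapper itself collapse the batch into a plain numpy array before delegating, so third-party estimators never see a `SampleBatch`. A hypothetical sketch (`Sample` and `ArrayUnwrappingWrapper` here are minimal stand-ins, not the real `bob.pipelines` classes):

```python
import numpy as np


class Sample:
    """Minimal stand-in for bob.pipelines.Sample (illustration only)."""

    def __init__(self, data):
        self.data = data


class ArrayUnwrappingWrapper:
    """Converts incoming samples to a plain ndarray before delegating.

    The inner transformer only ever sees numpy arrays, never a
    SampleBatch-like container.
    """

    def __init__(self, estimator):
        self.estimator = estimator

    def transform(self, samples):
        X = np.asarray([s.data for s in samples])
        transformed = self.estimator.transform(X)
        return [Sample(data) for data in transformed]


class FakeTransformer:
    def transform(self, X):
        return X + 1  # plain ndarray arithmetic works


wrapped = ArrayUnwrappingWrapper(FakeTransformer())
out = wrapped.transform([Sample(np.zeros((2, 2)))])
```

This is only a sketch of the idea; the real wrapper would also have to forward `fit`, metadata, and the delayed-loading machinery.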
Thanks

# Database.zprobes and Database.treferences fraction takes a fraction of....
https://gitlab.idiap.ch/bob/bob.db.morph/-/issues/3 (2020-12-22, Tiago de Freitas Pereira)

...absolute values.
This function should take the fraction from each cohort.

# This dataset has wrong annotations
https://gitlab.idiap.ch/bob/bob.db.morph/-/issues/2 (2020-12-22, Tiago de Freitas Pereira)

Hey @ydayer,
I'm rearranging world, dev, and eval in this dataset.
Have you noticed that the metadata is inconsistent?
For instance, if you do:
```python
>>> dataframe[dataframe.id_num==286810]
id_num picture_num dob doa race gender age photo
37850 286810 1 04/04/1986 05/11/2006 B M 20 Album2/286810_01M20.JPG
37851 286810 2 04/04/1986 08/16/2006 A M 20 Album2/286810_02M20.JPG
37849 286810 0 04/04/1986 01/24/2006 H M 19 Album2/286810_00M19.JPG
```
```python
>>> dataframe[dataframe.id_num==295087]
id_num picture_num dob doa race gender age photo
39551 295087 0 05/18/1960 10/23/2006 A M 46 Album2/295087_00M46.JPG
39552 295087 1 05/18/1960 10/25/2006 H M 46 Album2/295087_01M46.JPG
```
```python
>>> dataframe[dataframe.id_num==328749]
id_num picture_num dob doa race gender age photo
50810 328749 0 07/28/1971 05/12/2006 W M 34 Album2/328749_00M34.JPG
50811 328749 1 07/28/1971 05/19/2007 A M 35 Album2/328749_01M35.JPG
```
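Such inconsistencies can be enumerated programmatically. A hypothetical sketch (using a miniature stand-in dataframe, not the real MORPH metadata) that lists every `id_num` whose `race` column is not constant across pictures:

```python
import pandas as pd

# Miniature stand-in for the MORPH metadata shown above (not the real file)
dataframe = pd.DataFrame(
    {
        "id_num": [286810, 286810, 286810, 295087, 295087, 328749, 328749, 111111],
        "race": ["B", "A", "H", "A", "H", "W", "A", "W"],
    }
)

# An identity is inconsistent if its recorded race changes between pictures
races_per_id = dataframe.groupby("id_num")["race"].nunique()
inconsistent_ids = sorted(races_per_id[races_per_id > 1].index)
print(inconsistent_ids)  # -> [286810, 295087, 328749]
```

The same `groupby`/`nunique` pattern also works for the `dob` and `gender` columns.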
There are several more examples.

# JetStatistics divides by the wrong value
https://gitlab.idiap.ch/bob/bob.ip.gabor/-/issues/5 (2020-06-03, Manuel Günther)

In line https://gitlab.idiap.ch/bob/bob.ip.gabor/-/blob/94bd69147ca4450ab9f975efab5ee31bbf3edefd/bob/ip/gabor/cpp/JetStatistics.cpp#L145 we divide by `m_varAbs(j)`, but we would need to divide by `m_meanAbs(j)`.

# Implement hot-fix to repo indexing
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/53 (2020-07-29, André Anjos)

To fix several issues that we are having with our conda channel, I am going to
implement [this hotfixing mechanism](https://github.com/AnacondaRecipes/repodata-hotfixes)
(as done for the defaults channel) for our channel as well.
This will:
1. remove the need to move broken packages to our archive channel. Instead, we will keep the package in the same place but remove it from the index.
2. allow us to fix broken packages in our channel. Like fixing bob.bio.base to make
sure it does not get installed with `numpy>=1.8`.
3. allow us to temporarily add packages (like mac versions of pytorch and torchvision)
in our channel to fix our problems and remove them from channel index once the
defaults channel catches up.
But to implement this, care is needed from @bob users when they try to export an environment.
In summary, you should avoid mixing `conda env export` and `conda list --export --explicit`.
These two commands are designed in conda with two different goals and you should not use them
for other purposes. I will explain what to do below:
# Reproducibility and Repeatability of publications (bob.paper packages)
You should use `conda list --export --explicit` or even `conda list --export --explicit --md5`
to export your environment so that other users can replicate it.
```sh
# Save packages for future use:
conda list --export --explicit > package-list.txt
# or
conda list --export --explicit --md5 > package-list.txt

# Reinstall packages from an export file:
conda create -n myenv --file package-list.txt
```
This method is, of course, not bulletproof, but it should work reliably.
If you use `conda env export`, the environment **will** most likely break in the future.
# Share current ongoing work/projects (bob.project packages)
Sometimes, you want to share a common conda environment between colleagues while a
project is ongoing. You may even update this environment regularly.
For this purpose, you should use `conda env export` or even better,
create your `environment.yml` by hand. You may create environment files that work
both on Linux and mac.
```sh
# Create the environment file either by hand or with:
conda env export --file=environment.yml

# Recreate the environment using:
conda env create --file=environment.yml
```
Expect this environment to break from time to time; it might need updates.
To avoid *some* breakage, do not pin the build strings, i.e.
instead of `bob.bio.base=4.1.0=py37h03d05df_0`, write `bob.bio.base=4.1.0`.
Also, you may want to list only your direct dependencies.
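For example, a minimal hand-written `environment.yml` following these guidelines might look like this (the package names and pins are illustrative, not a recommendation):

```yaml
name: myproject
channels:
  - https://www.idiap.ch/software/bob/conda
  - defaults
dependencies:
  # direct dependencies only, pinned without build strings
  - python=3.7
  - bob.bio.base=4.1.0
  - numpy
```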
Of course, you can choose to export to both formats in any scenario.

# Memory error during serialization of large objects
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/17 (2020-05-25, Tiago de Freitas Pereira)

This is an issue that I have been facing for a while.
Now that we are running our pipelines in large-scale experiments (several thousand images), the lists of SampleSets that we generate during `pipeline.transform` are getting big (>1 GB), and this raises MemoryError exceptions during serialization (even when we have enough memory).
This is very annoying; basically, I can't work with large datasets.
I managed to generate a very simple example describing this issue here: https://github.com/dask/distributed/issues/3806
I know we can change the serializer `dask-distributed` uses (https://distributed.dask.org/en/latest/serialization.html#use), but I'm not sure that this is the real problem.
However, I would like to propose a workaround that will slow down the execution of experiments a bit, but at least the code will not crash.
I would like to change the serialization behavior of `DelayedSample` to this:
```python
class DelayedSample(_ReprMixin):
    def __init__(self, load, parent=None, **kwargs):
        self.load = load
        if parent is not None:
            _copy_attributes(self, parent.__dict__)
        _copy_attributes(self, kwargs)
        self._data = None

    @property
    def data(self):
        """Loads the data from the disk file."""
        if self._data is None:
            self._data = self.load()
        return self._data

    def __getstate__(self):
        # drop the cached payload, so only the loader gets serialized
        self._data = None
        d = dict(self.__dict__)
        return d
```
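To illustrate the proposal, here is a self-contained sketch (a simplified `DelayedSample` without the `parent`/`_ReprMixin` machinery, names made up) showing that clearing `_data` in `__getstate__` keeps the pickle small and reloads the data lazily after deserialization:

```python
import pickle

import numpy as np


class SimpleDelayedSample:
    """Simplified sketch of the proposal above (illustration only)."""

    def __init__(self, load):
        self.load = load
        self._data = None

    @property
    def data(self):
        # load lazily and cache the result
        if self._data is None:
            self._data = self.load()
        return self._data

    def __getstate__(self):
        # drop the cached payload: only the loader travels over the wire
        self._data = None
        return dict(self.__dict__)


def load():
    return np.arange(1_000_000)


sample = SimpleDelayedSample(load)
_ = sample.data  # the cache is now populated (~8 MB)
restored = pickle.loads(pickle.dumps(sample))  # the pickle stays tiny
```

The trade-off is exactly the one described above: after deserialization the first access to `.data` pays the load cost again.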
What do you think? ping @andre.anjos @amohammadi
ping @ydayer
thanks

# Sample-based pipelines inefficiencies
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/15 (2020-06-02, Amir MOHAMMADI)

This is a generic issue that I am raising because I believe we will face it moving forward.
The biggest issue that I have found with our sample-based approach is when you have to concatenate samples to make a big array for processing steps such as `.fit` methods.
The reason for this is that we are looking at samples individually, **even though they might have come from a bigger array**.
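A tiny numpy sketch of this point (hypothetical shapes): slicing a big array into per-sample views is free, but re-assembling those samples for a `.fit` call allocates an entirely new buffer:

```python
import numpy as np

# One big contiguous array, e.g. features for 1000 samples
big = np.random.rand(1000, 64)

# Sample-based view: each sample is handled individually
samples = [big[i] for i in range(len(big))]  # views, no copy yet

# Re-assembling for a .fit call copies everything into a new buffer
stacked = np.vstack(samples)

print(np.shares_memory(big, samples[0]))  # True  - slices are views
print(np.shares_memory(big, stacked))     # False - vstack copied the data
```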
Let me demonstrate this with an example:
[sample_stacking_issue.html](/uploads/689b77611dd7d51685b43862be9c2686/sample_stacking_issue.html)
or [sample_stacking_issue.ipynb](/uploads/d2139734a2dae8833104c46b0022b2eb/sample_stacking_issue.ipynb)

# bob.io.video.reader class is not picklable
https://gitlab.idiap.ch/bob/bob.io.video/-/issues/16 (2020-05-04, Amir MOHAMMADI)

See: https://github.com/cloudpipe/cloudpickle/issues/363

# Stacking raw data
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/14 (2020-05-05, Tiago de Freitas Pereira)

Hi,
I hadn't noticed this before merging, but the way this `DelayedSamplesCall` handles data is not very convenient in terms of memory usage, don't you think? https://gitlab.idiap.ch/bob/bob.pipelines/-/blob/master/bob/pipelines/wrappers.py#L57
Data are stacked at the very beginning of the pipeline, where they can be huge in size (e.g. raw images/video).
The way we had it before https://gitlab.idiap.ch/bob/bob.pipelines/-/merge_requests/26 was more convenient.
This stacking was done only when necessary (once data is "more preprocessed").
Is there any reason for it to be like this?
Thanks

# Nightlies failing because of this one
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/35 (2020-04-29, Tiago de Freitas Pereira)

This MR broke https://gitlab.idiap.ch/bob/bob.ip.gabor/-/merge_requests/12
What needs to be done is to cherry-pick commits 8cf2dd5544957a7c86a3245ba8e5ca65f9c7a9ca and e9e7bbb7e80ea01e37cb54ecc04ce2020e09b25d, and merge them to master.

# DEPRECATION. Question
https://gitlab.idiap.ch/bob/bob.bio.vein/-/issues/19 (2020-10-07, Tiago de Freitas Pereira)

Hi @bob,
Shall we keep this vein recognition stuff for the next generation?
Thanks

# Relative paths in filelist database cause issues in temp directory
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/12 (2020-07-22, Manuel Günther)

As reported here: https://groups.google.com/forum/#!topic/bob-devel/ESl9AWyJmbA, when using a relative path including `..` in the file lists, there seems to be an issue with the temporary files. Indeed, `biofile.make_path` (see: https://gitlab.idiap.ch/bob/bob.db.base/blob/master/bob/db/base/file.py#L65) simply merges paths, which may end up producing wrong paths when `self.path` includes `..`.
I am not quite sure how to tackle this issue.

# Mixin classes for sklearn estimators should not have an __init__ method
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/11 (2020-04-29, Amir MOHAMMADI)

I was thinking that we could get away with this, but apparently we cannot. Because of the way `BaseEstimator` handles params, providing an extra `__init__` method in mixins will break the estimator.
Here is an example:
```python
In [2]: from sklearn.svm import SVC
...: from bob.pipelines.mixins import CheckpointMixin, SampleMixin
...: class CheckpointSampleSVC(CheckpointMixin, SampleMixin, SVC):
...: pass
...:
In [8]: original_estimator = SVC()
In [9]: original_estimator
Out[9]:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
In [10]: original_estimator.set_params(C=2)
Out[10]:
SVC(C=2, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
In [11]: checkpointing_sample_estimator = CheckpointSampleSVC()
In [12]: checkpointing_sample_estimator
Out[12]:
CheckpointSampleSVC(extension='.h5', features_dir=None,
load_func=<function load at 0x7f1ce85e5290>,
model_path=None,
save_func=<function save at 0x7f1ce85e53b0>)
In [13]: checkpointing_sample_estimator.set_params(C=2)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-bbed69696a06> in <module>
----> 1 checkpointing_sample_estimator.set_params(C=2)
conda/envs/dask/lib/python3.7/site-packages/sklearn/base.py in set_params(self, **params)
234 'Check the list of available parameters '
235 'with `estimator.get_params().keys()`.' %
--> 236 (key, self))
237
238 if delim:
ValueError: Invalid parameter C for estimator CheckpointSampleSVC(extension='.h5', features_dir=None,
load_func=<function load at 0x7f1ce85e5290>,
model_path=None,
save_func=<function save at 0x7f1ce85e53b0>). Check the list of available parameters with `estimator.get_params().keys()`.
```
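The underlying reason is that `BaseEstimator.get_params` discovers parameters by inspecting the signature of `cls.__init__`, so a mixin `__init__` shadows the estimator's own parameters in the MRO. A minimal pure-Python stand-in (a sketch of sklearn's introspection, not the real code; the class names are made up) reproduces the effect:

```python
import inspect


class MiniBaseEstimator:
    """Sketch of sklearn's param discovery: read the __init__ signature."""

    @classmethod
    def _get_param_names(cls):
        sig = inspect.signature(cls.__init__)
        return sorted(p.name for p in sig.parameters.values() if p.name != "self")


class SVCLike(MiniBaseEstimator):
    def __init__(self, C=1.0):
        self.C = C


class CheckpointMixin:
    # Defining __init__ here shadows SVCLike.__init__ in the MRO...
    def __init__(self, features_dir=None):
        self.features_dir = features_dir


class CheckpointSVC(CheckpointMixin, SVCLike):
    pass


print(SVCLike._get_param_names())        # ['C']
print(CheckpointSVC._get_param_names())  # ['features_dir'] - 'C' is gone
```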
`set_params` is important because it is used in classes like https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV

# Implementation of score normalization sets
https://gitlab.idiap.ch/bob/bob.db.meds/-/issues/8 (2020-12-22, Tiago de Freitas Pereira)

Hi @ydayer,
Following up on what we discussed this morning.
This issue considers only the efforts that need to be done on the database side.
I think we need two methods in the `Database` class.
1. **zprobes**. This method needs to return the same structure as the method **probes** returns: a simple list of `SampleSet`. However, the samples need to be taken from the world set (we don't have the data to split this set any further). One more thing: we need a kwarg called `percentage` that defines the fraction of SampleSets this function returns (defaulting to 1.0).
2. **treferences**. This method needs to return the same structure as the method **references** returns: again, a simple list of SampleSets, each containing all the samples of one identity.
Make sure that in the SampleSet you add all the metadata that are identity independent (gender, ethnicity,....).
Is this ok?
Thanks, and ping me if there is any issue.

# Conflicts ad eternum
https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/10 (2020-04-17, Tiago de Freitas Pereira)

Hi guys,
I'm facing some problems with our CI and I need some light.
For a while now I have been getting some enigmatic conflicts; here is one example:
https://gitlab.idiap.ch/bob/bob.pipelines/-/jobs/195496/raw
`bdt build` doesn't work at all, neither on the CI nor on my computers.
I'm wondering if it's something with Numpy, because of this message in the logs:
```sh
Attempting to finalize metadata for bob.pipelines
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Adding .* to spec 'numpy 1.16.6' to ensure satisfiability. Please consider putting {{ var_name }}.* or some relational operator (>/</>=/<=) on this spec in meta.yaml, or if req is also a build req, using {{ pin_compatible() }} jinja2 function instead. See https://conda.io/docs/user-guide/tasks/build-packages/variants.html#pinning-at-the-variant-level
WARNING conda_build.utils:ensure_valid_spec(1749): Adding .* to spec 'numpy 1.16.6' to ensure satisfiability. Please consider putting {{ var_name }}.* or some relational operator (>/</>=/<=) on this spec in meta.yaml, or if req is also a build req, using {{ pin_compatible() }} jinja2 function instead. See https://conda.io/docs/user-guide/tasks/build-packages/variants.html#pinning-at-the-variant-level
INFO:bob.devtools.scripts.build@2020-04-17 09:24:21,641: Building bob.pipelines-0.0.1b0-py37 (build: 12) for linux-64
INFO:bob.devtools.bootstrap@2020-04-17 09:24:21,641: environ["BOB_BUILD_NUMBER"] = 12
/scratch/builds/bob/bob.pipelines/miniconda/lib/python3.7/site-packages/conda_build/environ.py:427: UserWarning: The environment variable 'DOCSERVER' is being passed through with value 'http://www.idiap.ch'. If you are splitting build and test phases with --no-test, please ensure that this value is also set similarly at test time.
UserWarning
/scratch/builds/bob/bob.pipelines/miniconda/lib/python3.7/site-packages/conda_build/environ.py:427: UserWarning: The environment variable 'NOSE_EVAL_ATTR' is being passed through with value ''. If you are splitting build and test phases with --no-test, please ensure that this value is also set similarly at test time.
UserWarning
BUILD START: ['bob.pipelines-0.0.1b0-py37h9f5372d_12.conda']
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
```
However, if I try to install the `bob.pipelines` setup, everything works fine.
```sh
conda install bob-devel==2020.03.30 dask dask-jobqueue bob.extension bob.io.base -c https://www.idiap.ch/software/bob/conda/label/beta/ --dry-run
```
Can I have some light?
Thanks
ping @andre.anjos @amohammadi

# Is it operational?
https://gitlab.idiap.ch/bob/bob.db.morph/-/issues/1 (2020-05-22, Tiago de Freitas Pereira)

Hey @ydayer,
Is this database operational?
Thanks

# Follow-up from "Added missing annotations in a new csv file"
https://gitlab.idiap.ch/bob/bob.db.meds/-/issues/7 (2020-04-17, Tiago de Freitas Pereira)

If you can remove those too....
Thanks
The following discussion from !5 should be addressed:
- [ ] @tiago.pereira started a [discussion](https://gitlab.idiap.ch/bob/bob.db.meds/merge_requests/5#note_51143):
> Hey @ydayer,
> Do you mind removing this line?
>
> If you want extra verbosity, do `nosetests -svvv`
>
> thanks

# Follow-up from "Added missing annotations in a new csv file"
https://gitlab.idiap.ch/bob/bob.db.meds/-/issues/6 (2020-04-17, Tiago de Freitas Pereira)

Hey man,
If you can remove the comments, that would be great.
thanks
The following discussion from !5 should be addressed:
- [ ] @tiago.pereira started a [discussion](https://gitlab.idiap.ch/bob/bob.db.meds/merge_requests/5#note_51144):
> Remove those too.
> git is good in remembering things :-P
>
> Thanks

# Annotations nan
https://gitlab.idiap.ch/bob/bob.db.meds/-/issues/5 (2020-04-16, Tiago de Freitas Pereira)

Hey @ydayer,
There are several annotations that are `NaN`.
Here is a snippet that shows some samples with NaN annotations in the world set:
```python
>>> import numpy
>>> from bob.db.meds.database import Database
>>> db = Database(protocol="verification_fold1", original_directory="")
>>> samples = [[i, x.key, x.annotations] for i, x in enumerate(db.background_model_samples())]
>>> print(samples[119])
[119, './data/ab/S200-01-t10_01.jpg', {'leye': (nan, nan), 'reye': (nan, nan)}]
>>> # Printing all samples that have NaN annotations
>>> print([[s[0], s[1]] for s in samples if numpy.isnan(s[2]['leye'][0])])
[[82, './data/ab/S131-01-t10_01.jpg'], [112, './data/ab/S180-01-t10_01.jpg'], [119, './data/ab/S200-01-t10_01.jpg'], [139, './data/ac/S237-01-t10_01.jpg'], [144, './data/ac/S245-01-t10_01.jpg'], [170, './data/ac/S290-01-t10_01.jpg'], [207, './data/ad/S362-01-t10_01.jpg']]
```
Do you have some time to look into that?
This invalidates all the experiments we've done so far :-(
Thanks