bob.bio.face issues — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues (updated 2023-03-31)

**#98 Entry-points vgg2-*-with-eval not listed in bob.bio.database group** (Yannick DAYER, updated 2023-03-31) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/98

Some entry-points in `pyproject.toml` (notably `vgg2-short-with-eval` and `vgg2-full-with-eval`) are listed in the entry-points group `bob.bio.config` but not in `bob.bio.database`.
This leads to issues and confusion when passing the config to the `--database` option of `bob bio pipeline simple` and listing with `resources.py`.
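For context, the two entry-point groups in `pyproject.toml` look roughly like this (a sketch; the module paths are assumptions, only the group names and entry-point names come from this issue):

```toml
# Already present: the config entry-point group.
[project.entry-points."bob.bio.config"]
vgg2-short-with-eval = "bob.bio.face.config.database.vgg2_short_with_eval"
vgg2-full-with-eval = "bob.bio.face.config.database.vgg2_full_with_eval"

# Currently missing: the same configs listed under the database group,
# which is what `--database` and `resources.py` look up.
[project.entry-points."bob.bio.database"]
vgg2-short-with-eval = "bob.bio.face.config.database.vgg2_short_with_eval"
vgg2-full-with-eval = "bob.bio.face.config.database.vgg2_full_with_eval"
```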
We should (if it was not omitted for a reason) also add those config entry-points to the `bob.bio.database` entry-point group.

**#97 Missing mxnet as dependency** (Yannick DAYER, updated 2023-03-29) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/97

When running the `arcface-insightface` baseline, an error complains that `mxnet` can not be imported.
After installing manually with `pip install mxnet`, everything works (conda did not manage to install it, though).
`mxnet` is missing from the dependencies and from the dev-profile.

**#96 Annotation type XYZ is not supported.** (Christophe ECABERT, updated 2023-03-08) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/96

With recent changes to `CSVDataset`, running any baseline against the `multipie` database is silently failing with the following message:
```bash
bob.bio.face.utils@2023-02-23 13:54:36,596 -- WARNING: Annotation type ('eyes-center', 'left-profile', 'right-profile') is not supported. Input images will be fully scaled.
```
There are two reasons for this behaviour:
- The change from `list` to `tuple` for the `MultipieDatabase.annotation_type` variable ([now](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/src/bob/bio/face/database/multipie.py#L114), [then](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c8a6495ad85105661efa915e86a7eac7b1c2b3f6/bob/bio/face/database/multipie.py#L127))
- The oversimplified checks in [bob.bio.face.utils.py](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/src/bob/bio/face/utils.py), which check only for `list` whereas the value could be any `Iterable`, not only a `list`. (assigned: Yannick DAYER)

**#95 Set pytorch to run on single thread only on docker jobs** (Yannick DAYER, updated 2023-02-20) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/95

To fix the jobs on the docker runners for the CI, I added [this line](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/embeddings/pytorch.py#L27) a while back.
We should:
- [x] remove this line as this limits *any* work to a single thread and is not really wanted for performance reasons.
- [x] add `OMP_NUM_THREADS=1` as a variable in the CI config, either in the `.gitlab-ci.yaml` (in `bob/dev-profile`) (if possible, only for the jobs running on docker) or in the runner configuration (if possible?). (assigned: André MAYORAZ)

**#94 Face cropping based on bounding box still requires facial landmarks / an annotator** (Manuel Günther, siebenkopf@googlemail.com, updated 2023-02-15; assigned: Manuel Günther) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/94

Related to #91.

Currently, there is no easy way of cropping the face purely based on bounding boxes, i.e., without alignment based on some facial landmarks. We have an implementation for this case in `FaceCropBoundingBox`, but it is buggy (see #91): https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/preprocessor/croppers.py#L312
It is not directly called from within our `face_crop_solver`: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/utils.py#L377
but is only indirectly included in `BoundingBoxAnnotatorCrop`, which uses it only for cutting out the face, and then detects landmarks in the crop: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/preprocessor/FaceCrop.py#L305
While this is a useful use-case, another use-case would be to only extract the face based on the bounding box, without further landmark localization and alignment.
Actually, in the previous version of Bob, this was possible through (ab-)using the `FaceEyesNorm` class by providing `topleft` and `bottomright` coordinates instead.
In the current version, this is no longer possible.
I will add back an option for this.

**#93 MTCNN comes without Non-Maximum-Suppression (NMS)** (Manuel Günther, updated 2023-01-30) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/93

When running our MTCNN face detector, we get a lot of overlapping detections. Typically, these are removed with a non-maximum-suppression algorithm; see for example https://github.com/TropComplique/mtcnn-pytorch/blob/45b34462fc995e6b8dbd17545b799e8c8a30026b/src/detector.py#L120 or our TinyFaces implementation: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/de683894f9f14876293ad56390f4c34e7dd83234/src/bob/bio/face/annotator/tinyface.py#L229
However, our MTCNN implementation returns the outputs of the network unfiltered, leading to many overlapping detections: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/de683894f9f14876293ad56390f4c34e7dd83234/src/bob/bio/face/annotator/mtcnn.py#L113
When using only the first annotation, as is often done in our pipelines, this is not a big issue, since NMS would just remove the overlapping boxes anyway. When we need to detect more than one face in an image, on the other hand, we get a lot of repeated detections.
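For illustration, a minimal greedy IoU-based NMS sketch (this is not the TinyFaces code; the box layout `(top, left, bottom, right, score)` is an assumption):

```python
import numpy as np

def nms(boxes, iou_threshold=0.5):
    """Greedy non-maximum suppression over (top, left, bottom, right, score) rows."""
    boxes = np.asarray(boxes, dtype=float)
    order = boxes[:, 4].argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best box with all remaining boxes.
        t = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        l = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        b = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        r = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, b - t) * np.maximum(0, r - l)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (
            boxes[order[1:], 3] - boxes[order[1:], 1]
        )
        iou = inter / (area_i + areas - inter)
        # Drop every remaining box that overlaps the kept one too much.
        order = order[1:][iou <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(nms(boxes))  # [0, 2]: the two overlapping boxes collapse into the higher-scoring one
```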
I would recommend making the NMS function from TinyFaces accessible to other modules, and using it in MTCNN as well to filter out overlapping faces.

**#92 MTCNN models should not be with the code** (Yannick DAYER, updated 2022-12-21) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/92

Big files should not be in a git repository.
TODO:
- Upload the model (`src/bob/bio/face/mtcnn.pb`) on the WebDav server (like https://www.idiap.ch/software/bob/data/bob.bio.face)
- Use the download utility to retrieve the file at runtime into `bob_data`. (assigned: Yannick DAYER)

**#91 Face cropping by bounding box fails with negative top/left coordinates** (Manuel Günther, updated 2023-02-15) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/91

In case of negative annotations of the bounding box, cropping the face will result in an error.
Apparently, the range
```
X[...,top:bottom,left:right]
```
will result in a dimension of 0 when top or left is negative, and therefore the cropping via OpenCV will fail:
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/fb8ffece2423465fdbe6325c75845817d4b53a92/bob/bio/face/preprocessor/croppers.py#L390
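The failure mode, and a padding-based workaround, can be sketched independently of the `croppers.py` code (all numbers hypothetical):

```python
import numpy as np

# A bounding box slightly outside the image yields negative top/left coordinates.
image = np.zeros((3, 100, 100))
top, bottom, left, right = -10, 60, -5, 80

# Naive slicing: a negative start index wraps around to the *end* of the axis,
# so the crop comes out empty (dimension 0) and the downstream OpenCV call fails.
naive = image[..., top:bottom, left:right]
assert naive.shape == (3, 0, 0)

# Padding first (as FaceEyesNorm effectively does) keeps all indices in range.
pad_h, pad_w = max(0, -top), max(0, -left)
padded = np.pad(image, ((0, 0), (pad_h, 0), (pad_w, 0)))
crop = padded[..., top + pad_h : bottom + pad_h, left + pad_w : right + pad_w]
assert crop.shape == (3, 70, 85)  # (bottom - top) x (right - left)
```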
Please note that the cropping works well for `FaceEyesNorm`, where the corresponding dimensions are padded before extraction: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/fb8ffece2423465fdbe6325c75845817d4b53a92/bob/bio/face/preprocessor/croppers.py#L294
Maybe we can make use of the `FaceEyesNorm` class here instead of trying to do the cropping by hand.
Additionally, in the same line of code, it is assumed that the bounding box has the same aspect ratio as `self.final_image_size`.
If this is not the case, the facial image will be distorted.
It would be great if we could adapt top/bottom or left/right such that the aspect ratio of the target size is kept (as far as possible, despite rounding issues).

**#90 Where went `CSVDatasetZTNorm`?** (Christophe ECABERT, updated 2023-02-20) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/90

With the recent `csv-dataset` refactoring, some derived classes have been left on the side. With the changes partially pushed to `bob`, a simple import like `from bob.bio.face.database import MobioDatabase` leads to:
```
ImportError: cannot import name 'CSVDatasetZTNorm' from 'bob.bio.base.database'
```
This is quite inconvenient.

**#89 Switch to new CI/CD configuration** (Yannick DAYER, updated 2022-12-16; milestone: Roadmap to the major version of Bob 12) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/89

We need to adapt this package to the new CI/CD and package format using citools:
- [x] Modify `pyproject.toml`:
- [x] Add information from `setup.py`,
- [x] Add version from `version.txt`,
- [x] Add requirements from `requirements.txt` and `conda/meta.yaml`,
- [x] Empty `setup.py`:
- Leave the call to `setup()` for compatibility,
- [x] Remove `version.txt`,
- [x] Remove `requirements.txt`,
- [x] Modify `conda/meta.yaml`,
- [x] Import data from `pyproject.toml` (`name`, `version`, ...),
- [x] Add the `source.path` field with value `..`,
- [x] Add the `build.noarch` field with value `python`,
- [x] Edit the `build.script` to only contain `"{{ PYTHON }} -m pip install {{ SRC_DIR }} -vv"`,
- [x] Remove test and documentation commands and comments,
- [x] Modify `.gitlab-ci.yml` to point to citools' `python.yml`,
- Use the fields format instead of the URL,
- [x] Move files to follow the `src` layout:
- [x] the whole `bob` folder to `src/bob/`,
- [x] all the tests in `tests/`,
- [x] the test data files in `tests/data`,
- [x] Edit the tests to load the data correctly, either with `os.path.join(os.path.dirname(__file__), "data/xxx.txt")` or `pkg_resources.resource_filename(__name__, "data/xxx.txt")`,
- [x] Activate the `packages` option in `settings -> general -> visibility` in the Gitlab project,
- [x] Edit the latest doc badges to point to the `sphinx` directory in `doc/[...]/master`:
- [x] in README.md,
- [x] in the GitLab project settings,
- [x] Edit the coverage badges to point to the doc's coverage directory:
- [x] in README.md,
- [x] in the GitLab project settings,
- [ ] Ensure the CI pipeline passes.
You can look at [bob.learn.em](https://gitlab.idiap.ch/bob/bob.learn.em) for an example of a ported package.

**#88 Scale function on preprocessor/Scaler.py cannot handle variable input shapes** (Luis LUEVANO, updated 2022-09-30) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/88

When running verification without annotations, the `scale` function of `Scaler.py` is used. However, it does not handle scaling of input images with different shapes in the same `SampleBatch`: `check_array` processes the `SampleBatch` and assumes the shape of the first image in the batch for all other images in the same batch, so when the shapes differ it throws an exception.

**#87 Adding Model Complexity Measurements** (Pasra Rahimi, updated 2022-09-22) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/87

I think we should introduce a couple of model complexity measurements (in the sense of number of parameters, execution time, FLOPS, ...) to the pipelines.
This will be hard, especially in the case of execution time, since the infrastructure is, to the best of my understanding, not normalized at this point.
Let me know your comments.

**#86 Adding PFC** (Pasra Rahimi, updated 2022-09-22; assigned: Pasra Rahimi) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/86

I will try to add PFC (with a ViT backbone) to the repo; if possible, please assign me.

**#85 Formatting: output for compare_samples diagonal is not zero** (Luis LUEVANO, updated 2022-09-12) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/85

The output of the `compare_samples` command is not zero when showing the diagonal of the All vs All comparison in all pipelines.
Bad formatting example with mobilefacenet pipeline:
```
All vs All comparison
------------------ -----------------------
./me.jpg ./not_me.jpg
-0.0 -0.9227539984332366
-0.922753991574597 -3.5416114485542494e-14
------------------ -----------------------
```
However, it is correct with the resnet50-msceleb-arcface-2021 pipeline:
```
All vs All comparison
----------------- -----------------
./me.jpg ./not_me.jpg
-0.0 -1.03846231201703
-1.03846231201703 -0.0
----------------- -----------------
```
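The tiny negative diagonal value is ordinary floating-point residue from the self-comparison; the fix is purely presentational, e.g. (a sketch, not the actual `compare_samples` code):

```python
# A self-comparison score should be exactly zero, but floating-point
# accumulation can leave a tiny residue that then prints in scientific notation.
score = -3.5416114485542494e-14

# Formatting with a fixed precision collapses the residue for display:
print(f"{score:.6f}")    # -0.000000
print(round(score, 10))  # -0.0
```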
So far, I have only tested a few pipelines:
- Bad formatting: facenet_sanderberg, arcface-insightface, mobilefacenet
- Correct formatting: resnet50-msceleb-arcface-2021, resnet50-msceleb-arcface20210521

**#84 [RFC] PyTorchModel** (Christophe ECABERT, updated 2023-01-25) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/84

With the current implementation of `PyTorchModel`, the weights and the architecture need to be provided through `checkpoint_path` and `config` in order to use the transformer. This constraint can be alleviated by using the `TorchScript` script feature ([ref](https://pytorch.org/docs/stable/jit.html)).

TorchScript converts any `torch.nn.Module` into a persistent executable that can be loaded and used directly, without needing to build the architecture first. It basically saves the "code" and the weights into a single file, in the same fashion as Tensorflow. Moreover, some optimizations can be turned on during the saving phase, such as converting all the ops into constant ops, freezing the graph, and so on.

Such a mechanism can be used in the `PyTorchModel` base class to greatly simplify how we add new models. An example is provided below:
```python
import numpy as np
import torch as pt
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array


class TorchScriptModel(TransformerMixin, BaseEstimator):
    def __init__(
        self,
        model_path,
        preprocessor,
        memory_demanding=False,
        device=None,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.memory_demanding = memory_demanding
        # Model
        self.model_path = model_path
        self.model = None
        self.preprocessor = preprocessor
        if device is None:
            device = pt.device('cpu')
        self.device = device

    def _model(self):
        if self.model is None:
            # For now, we suggest to disable the JIT autocast pass,
            # see the issue: https://github.com/pytorch/pytorch/issues/75956
            pt._C._jit_set_autocast_mode(False)
            self.model = pt.jit.load(self.model_path)
            self.model.eval()
            self.model.to(self.device)
        return self.model

    def transform(self, X):
        X = check_array(X, allow_nd=True).astype(np.float32)
        model = self._model()

        def _transform(x):
            x = pt.from_numpy(x)
            with pt.no_grad():
                # Preprocess
                x = x.to(self.device)
                x = self.preprocessor(x)
                # Extract embedding
                x = model(x)
            return x.cpu().numpy()

        if self.memory_demanding:
            features = []
            for x in X:
                f = _transform(x[None, ...])
                features.append(f)
            features = np.asarray(features)
            if features.ndim >= 3:
                features = np.vstack(features)
            return features
        else:
            return _transform(X)

    def __getstate__(self):
        # Handling unpicklable objects
        d = {}
        for key, value in self.__dict__.items():
            if key != 'model':
                d[key] = value
        d['model'] = None
        return d

    def _more_tags(self):
        return {"requires_fit": False}
```
Moreover, some architectures are defined as derived classes of `PyTorchModel` (such as `IResnetXXX`), whereas others are defined through functions returning a `PipelineSimple` instance. The consistency could be improved, for instance by using a `classmethod` to create a certain type of architecture. It could be something like:
```python
class TorchScriptModel(TransformerMixin, BaseEstimator):
    @classmethod
    def IResNet(cls, version: Enum):
        # Mechanic to retrieve the model from the idiap server
        return cls(...)
```
What are your thoughts on this, @ydayer, @flavio.tarsetti, @lcolbois?

**#83 Leaderboard needs to point to bob.bio.face_ongoing's temporarily** (Yannick DAYER, updated 2022-08-10) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/83

While the leaderboard is being re-built, we should point to the old existing one in bob.bio.face_ongoing, [here](https://www.idiap.ch/software/bob/docs/bob/bob.bio.face_ongoing/master/leaderboard.html),
or include it in this doc, since the command lines shown there are no longer working. (assigned: Yannick DAYER)

**#82 RFW dataset: overlapping and mis-labelling between training and testing sets** (Yu Linghu, updated 2022-07-13) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/82

Based on the datasets we received from Wang et al., when we use z-samples or t-samples as shown in
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/bob/bio/face/database/rfw.py#L242, 2 problems occurred during the experiments.
1. There are 44 subjects classified as Caucasian in the training set but as Indian in the testing set (e.g. m.0c96fs, m.08y5xt, etc.).
2. When we choose 2500 z-samples from each race as the cohort, we detect more than 6000 pairs of subjects (one from the training set and one from the testing set) that have very high similarity scores (-0.5 to -0.1). After manually checking some of them, those samples appear to belong to the same person, i.e. these are not impostor scores. So there is overlap between the training and testing sets, which is not supposed to happen.

This bug report serves as a record of these problems. I'm not sure whether they only happen to us because of different versions of the datasets.
We could discuss it at a later stage, e.g. use another BUPT dataset like BUPT-Balanced as the training set, since Wang et al. stated there is no overlap between BUPT-Balanced and RFW; face detection might be necessary, since no landmarks are given for that dataset.

**#81 MTCNN is not serializable** (Christophe ECABERT, updated 2022-06-07) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/81

With the current implementation, MTCNN is not serializable via pickle. When used locally, it is not an issue; however, when running on the cluster, we get lovely messages such as this in the workers' logs:
```
distributed.protocol.pickle - INFO - Failed to serialize CheckpointWrapper(estimator=SampleWrapper(estimator=BoundingBoxAnnotatorCrop(annotator=MTCNN(thresholds=(0.1,
0.2,
0.2)),
eyes_cropper=FaceEyesNorm(final_image_size=(112,
112),
reference_eyes_location={'bottomright': (112,
112),
'leye': (55,
72),
'reye': (55,
40),
'topleft': (0,
0)})),
fit_extra_arguments=(),
input_attribute='data',
output_attribute='data',
transform_...',
'annotations'),)),
extension='.h5',
features_dir='/idiap/temp/cecabert/experiments/bob101/results-sge/ijbc/cropper',
hash_fn=<function hash_string at 0x7fa50d891000>,
load_func=<function load at 0x7fa507f5ed40>,
model_path='/idiap/temp/cecabert/experiments/bob101/results-sge/ijbc/cropper.pkl',
sample_attribute='data',
save_func=<function save at 0x7fa507f5ef80>). Exception: cannot pickle '_thread.RLock' object
distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/core.py", line 111, in loads
return msgpack.loads(
File "msgpack/_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/core.py", line 103, in _decode_default
return merge_and_deserialize(
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/serialize.py", line 488, in merge_and_deserialize
return deserialize(header, merged_frames, deserializers=deserializers)
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/serialize.py", line 417, in deserialize
return loads(header, frames)
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/serialize.py", line 180, in serialization_error_loads
raise TypeError(msg)
TypeError: Could not serialize object of type CheckpointWrapper.
Traceback (most recent call last):
File "/remote/idiap.svm/temp.biometric03/cecabert/mambaforge/envs/bob_deps/lib/python3.10/site-packages/distributed/protocol/pickle.py", line 40, in dumps
result = pickle.dumps(x, **dump_kwargs)
AttributeError: Can't pickle local object 'WeakSet.__init__.<locals>._remove'
```
The issue is that, in some cases, it tries to serialize the already-loaded underlying Tensorflow graph. This can be solved with the same mechanism used in the `PyTorchModel` class, by overriding the `__getstate__` method as follows:
```python
def __getstate__(self):
    # Handling unpicklable objects
    state = {}
    for key, value in super().__getstate__().items():
        if key != '_fun':
            state[key] = value
    state['_fun'] = None
    return state
```
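The same pattern can be exercised in isolation with a stdlib-only stand-in for the lazily-built TF graph (the `Detector` class below is hypothetical, not the actual MTCNN class):

```python
import pickle
import threading

class Detector:
    """Hypothetical stand-in for MTCNN: holds a lazily-built, unpicklable resource."""

    def __init__(self):
        self._fun = None  # would hold the loaded TF graph

    def _load(self):
        if self._fun is None:
            # An RLock stands in for the '_thread.RLock' buried in the TF graph;
            # without the __getstate__ override, pickling would fail on it.
            self._fun = threading.RLock()
        return self._fun

    def __getstate__(self):
        # Drop the unpicklable member; it is lazily re-created after unpickling.
        state = self.__dict__.copy()
        state['_fun'] = None
        return state

d = Detector()
d._load()                             # force creation of the unpicklable resource
copy = pickle.loads(pickle.dumps(d))  # works thanks to __getstate__
assert copy._fun is None              # lazily re-initialized on next use
```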
With this change, the serialization now works properly and can be tested with:
```python
mtcnn = MTCNN()
mtcnn.mtcnn_fun # Force instantiation of TF graph
other = pickle.loads(pickle.dumps(mtcnn))
# No AttributeError: Can't pickle local object 'WeakSet.__init__.<locals>._remove'
# TF graph will be lazily initialized if needed
```

**#80 Cannot specify original directory and extension for most of the databases anymore** (Manuel Günther, updated 2022-05-25) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/80

While in previous database implementations it was relatively straightforward to utilize the database interface in order to load pre-extracted features, in the new database interfaces this is no longer possible, for two reasons:
1. There is no simple way of providing the database interface with the directory to read the data from. For example, in an old interface (LFW), we can still set https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c7ee7213f83f62b1e36685290e1defd10fea2c20/bob/bio/face/database/lfw.py#L90, while this option no longer exists in newer interfaces: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c7ee7213f83f62b1e36685290e1defd10fea2c20/bob/bio/face/database/scface.py#L30, although it should be relatively straightforward to implement, since the default value is used just a few lines below: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c7ee7213f83f62b1e36685290e1defd10fea2c20/bob/bio/face/database/scface.py#L47. It should be simple to re-expose this option to the user.
2. The filename extension is by default empty: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c7ee7213f83f62b1e36685290e1defd10fea2c20/bob/bio/face/database/scface.py#L50, and there is no possibility to change that in the constructor. But even if we exposed the extension similarly to the directory name, we would still be in trouble, since the default loader just **appends** the extension rather than **replacing** it: https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/406f2da1faadacd4d4fe4e36e5a0010d78557513/bob/bio/base/database/csv_dataset.py#L145
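The append-vs-replace difference in point 2. can be sketched with `pathlib` (paths hypothetical):

```python
from pathlib import Path

key = "images/filename.JPG"  # key as currently stored, with original extension
extension = ".hdf5"          # extension of the pre-extracted features

# Appending (the current loader behavior) produces the wrong path:
appended = key + extension
assert appended == "images/filename.JPG.hdf5"

# Replacing the extension instead yields the expected path:
replaced = str(Path(key).with_suffix(extension))
assert replaced == "images/filename.hdf5"
```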
The main issue with 2. is that someone decided to ignore our old behavior of storing keys without filename extension, and just added the original extension to the key. For example, running:
```
import bob.bio.face
db = bob.bio.face.database.SCFaceDatabase(protocol="far")
samples = db.all_samples()
print(samples[0].key)
```
will print `filename.JPG` instead of `filename` (without extension).
So, what I would propose is (the least amount of changes, anything else would require to re-create all the CSV-based databases):
- [ ] Expose the `original_directory` and `original_extension` parameters for all databases to the user, keeping their default values as they currently are.
- [ ] Change the line in https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/406f2da1faadacd4d4fe4e36e5a0010d78557513/bob/bio/base/database/csv_dataset.py#L145 to replace the extension with the given one (if one is given) rather than appending it to the filename.
Any objections?

**#79 background samples in LFW** (Hatef OTROSHI, updated 2022-04-13) — https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/79

Hi,
As there is no background sample in LFW, it seems that `background_model_samples` should return an empty list, but it is `[0]` now in [bob/bio/face/database/lfw.py#L339](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/bob/bio/face/database/lfw.py#L339).
ping @tiago.pereira (assigned: Tiago de Freitas Pereira)