bob.bio.face issues
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues

Issue #20: Using super() for base class function calls
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/20
Manuel Günther <siebenkopf@googlemail.com>, 2017-10-20T02:30:32Z
Milestone: May 2017 Hackathon

After bob.bio.base#64 is merged, we can go ahead and use super() to call base class functionality here, too. This mainly should affect constructor calls.

Issue #98: Entry-points vgg2-*-with-eval not listed in bob.bio.database group
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/98
Yannick DAYER, 2023-03-31T14:01:51Z

Some entry-points in `pyproject.toml` (notably `vgg2-short-with-eval` and `vgg2-full-with-eval`) are listed in the entry-points group `bob.bio.config` but not in `bob.bio.database`.
This leads to issues and confusion when passing the config to the `--database` option of `bob bio pipeline simple` and listing with `resources.py`.
We should (if it was not omitted for a reason) also add those config entry-points to the `bob.bio.database` entry-point group.

Issue #97: Missing mxnet as dependency
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/97
Yannick DAYER, 2023-03-29T16:59:00Z

When running the `arcface-insightface` baseline, an error complains that `mxnet` can not be imported.
After installing manually with `pip install mxnet`, everything works (conda did not manage to install it, though).
`mxnet` is missing from the dependencies and dev-profile.

Issue #94: Face cropping based on bounding box still requires facial landmarks / an annotator
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/94
Manuel Günther <siebenkopf@googlemail.com>, 2023-02-15T15:42:45Z

Related to #91.
Currently, there is no easy way of cropping the face purely based on bounding boxes, i.e., without alignment based on some facial landmarks. We have an implementation for this case in `FaceCropBoundingBox`, but it is buggy, see #91: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/preprocessor/croppers.py#L312
It is not directly called from within our `face_crop_solver`: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/utils.py#L377
Instead, it is only indirectly included in `BoundingBoxAnnotatorCrop`, which uses it only for cutting out the face and then detects landmarks in the crop: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/preprocessor/FaceCrop.py#L305
While this is a useful use-case, another use-case would be to only extract the face based on the bounding box, without further landmark localization and alignment.
Actually, in the previous version of Bob, this was possible through (ab-)using the `FaceEyesNorm` class by providing `topleft` and `bottomright` coordinates instead.
In the current version, this is no longer possible.
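Such an option could be as small as a crop-and-resize helper. A minimal NumPy-only sketch; the function name and the nearest-neighbor resize are illustrative, not the bob.bio.face API:

```python
import numpy as np

def crop_bounding_box(image, topleft, bottomright, size=(112, 112)):
    """Crop a 2D image to the bounding box and resize, with no landmark alignment.

    Hypothetical helper: `topleft`/`bottomright` are (row, col) pairs, and the
    resize is a simple nearest-neighbor index selection.
    """
    top, left = topleft
    bottom, right = bottomright
    crop = image[top:bottom, left:right]
    # nearest-neighbor resize: pick evenly spaced source rows/columns
    rows = np.linspace(0, crop.shape[0] - 1, size[0]).round().astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, size[1]).round().astype(int)
    return crop[np.ix_(rows, cols)]
```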
I will add back an option for this.
Assignee: Manuel Günther

Issue #93: MTCNN comes without Non-Maximum-Suppression (NMS)
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/93
Manuel Günther <siebenkopf@googlemail.com>, 2023-01-30T16:28:42Z

When running our MTCNN face detector, we get a lot of overlapping detections. Typically, these are removed with a non-maximum-suppression algorithm, see for example here: https://github.com/TropComplique/mtcnn-pytorch/blob/45b34462fc995e6b8dbd17545b799e8c8a30026b/src/detector.py#L120 or in our TinyFaces implementation: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/de683894f9f14876293ad56390f4c34e7dd83234/src/bob/bio/face/annotator/tinyface.py#L229
However, our MTCNN implementation returns the outputs of the network unfiltered, leading to many overlapping detections: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/de683894f9f14876293ad56390f4c34e7dd83234/src/bob/bio/face/annotator/mtcnn.py#L113
When using only the first annotation, as often done in our pipelines, this is not a big issue, since NMS would just remove the overlapping boxes. When we need to detect more than one face in an image, on the other hand, we get a lot of repeated detections.
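For reference, the greedy NMS that both linked implementations follow can be sketched as below. This is a textbook version, not the TinyFaces code itself; boxes are assumed to be in (top, left, bottom, right) order:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # intersection of the best box with all remaining boxes
        t = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        l = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        b = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        r = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, b - t) * np.maximum(0, r - l)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (
            boxes[order[1:], 3] - boxes[order[1:], 1]
        )
        iou = inter / (area_i + areas - inter)
        # drop boxes overlapping the kept one too much
        order = order[1:][iou <= iou_threshold]
    return keep
```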
I would recommend making the NMS function from TinyFaces accessible to other functions, and using it in MTCNN as well to filter out overlapping faces.

Issue #92: MTCNN models should not be with the code
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/92
Yannick DAYER, 2022-12-21T15:49:25Z

Big files should not be in a git repository.
TODO:
- Upload the model (`src/bob/bio/face/mtcnn.pb`) on the WebDav server (like https://www.idiap.ch/software/bob/data/bob.bio.face)
- Use the download utility to retrieve the file at runtime in `bob_data`.

Assignee: Yannick DAYER

Issue #91: Face cropping by bounding box fails with negative top/left coordinates
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/91
Manuel Günther <siebenkopf@googlemail.com>, 2023-02-15T11:40:14Z

In case of negative annotations of the bounding box, cropping the face will result in an error.
Apparently, the range
```
X[...,top:bottom,left:right]
```
will result in a dimension of 0 when top or left is negative, and therefore the cropping via OpenCV will fail:
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/fb8ffece2423465fdbe6325c75845817d4b53a92/bob/bio/face/preprocessor/croppers.py#L390
Please note that the cropping works well for `FaceEyesNorm`, where the corresponding dimensions are padded before extraction: https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/fb8ffece2423465fdbe6325c75845817d4b53a92/bob/bio/face/preprocessor/croppers.py#L294
Maybe we can make use of the `FaceEyesNorm` class here instead of trying to do the cropping by hand.
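The padding done for `FaceEyesNorm` essentially amounts to copying only the intersection of the requested box with the image into a zero-filled output. A hypothetical helper illustrating the idea (not the actual cropper code):

```python
import numpy as np

def crop_with_padding(image, top, bottom, left, right):
    """Crop image[top:bottom, left:right], zero-padding where the box
    extends beyond the image borders (e.g. negative top/left)."""
    out = np.zeros((bottom - top, right - left), dtype=image.dtype)
    # clip the requested box to the valid image area
    src_t, src_b = max(top, 0), min(bottom, image.shape[0])
    src_l, src_r = max(left, 0), min(right, image.shape[1])
    if src_t < src_b and src_l < src_r:
        # place the valid region at the correct offset inside the output
        out[src_t - top:src_b - top, src_l - left:src_r - left] = \
            image[src_t:src_b, src_l:src_r]
    return out
```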
Additionally, in the same line of code, it is assumed that the bounding box has the same aspect ratio as the `self.final_image_size`.
If this is not the case, the facial image will be distorted.
It would be great if we could adapt top/bottom or left/right such that the aspect ratio of the target size is kept (as far as possible, despite rounding issues).

Issue #88: Scale function on preprocessor/Scaler.py cannot handle variable input shapes
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/88
Luis LUEVANO, 2022-09-30T13:16:24Z

When running verification without annotations, the scale function of Scaler.py is used. However, it does not handle scaling for input images of different shapes in the same SampleBatch. In the scale function, check_array processes the SampleBatch and assumes the shape of the first image in the batch for the rest of the images in the same batch; when the shapes differ, it throws an exception.

Issue #87: Adding Model Complexity Measurements
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/87
Pasra Rahimi, 2022-09-22T10:26:58Z

I think we should introduce a couple of model complexity measurements (in the sense of number of parameters, execution time, FLOPS, ...) to the pipelines ...
This will be hard, especially in the case of execution time, since the infrastructure, to the best of my understanding, is not normalized at this point.
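Two of the proposed measurements need no special infrastructure. A framework-agnostic sketch; both helpers are hypothetical, and wall-clock numbers will of course vary with the machine:

```python
import time
import numpy as np

def count_parameters(weights):
    """Total number of parameters, given the model's weight arrays."""
    return sum(int(np.prod(w.shape)) for w in weights)

def time_transform(transform, data, repeats=10):
    """Median wall-clock time of one transform call, in seconds.

    The median over several repeats dampens scheduler noise, but results
    remain machine-dependent, as noted above.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        transform(data)
        samples.append(time.perf_counter() - start)
    return float(np.median(samples))
```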
Let me know your comments.

Issue #86: Adding PFC
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/86
Pasra Rahimi, 2022-09-22T11:53:37Z
Assignee: Pasra Rahimi

I will try to add the PFC (with ViT backbone) to the repo; if possible, please assign me ...

Issue #85: Formatting: output for compare_samples diagonal is not zero
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/85
Luis LUEVANO, 2022-09-12T16:20:16Z

The output of the compare_samples command is not zero when showing the diagonal of the All vs All comparison in all pipelines.
Bad formatting example with mobilefacenet pipeline:
```
All vs All comparison
------------------ -----------------------
./me.jpg ./not_me.jpg
-0.0 -0.9227539984332366
-0.922753991574597 -3.5416114485542494e-14
------------------ -----------------------
```
However, it is correct with the resnet50-msceleb-arcface-2021 pipeline:
```
All vs All comparison
----------------- -----------------
./me.jpg ./not_me.jpg
-0.0 -1.03846231201703
-1.03846231201703 -0.0
----------------- -----------------
```
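The tiny values such as `-3.5416114485542494e-14` are floating-point round-off from the score computation rather than real distances. A display-side fix is to snap near-zero scores to exactly zero before tabulating; the helper and its tolerance are hypothetical:

```python
def snap_scores(scores, atol=1e-8):
    """Replace scores that are zero up to floating-point noise by exactly 0.0."""
    return [0.0 if abs(s) < atol else s for s in scores]
```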
So far I have only tested a few pipelines:
- Bad formatting: facenet_sanderberg, arcface-insightface, mobilefacenet
- Correct formatting: resnet50-msceleb-arcface-2021, resnet50-msceleb-arcface20210521

Issue #82: RFW dataset: overlapping and mis-labelling between training and testing sets
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/82
Yu Linghu, 2022-07-13T13:01:02Z

Based on the datasets we received from Wang et al., when we use z-samples or t-samples as shown below
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/bob/bio/face/database/rfw.py#L242, 2 problems occurred during the experiments.
1. There are 44 subjects classified as Caucasian in the training set, but as Indian in the testing set (e.g. m.0c96fs, m.08y5xt, etc.).
2. When we choose to obtain 2500 z-samples from each race as the cohort, we detect more than 6000 pairs of subjects (one from training and one from testing) that have very high similarity scores (-0.5 to -0.1). After manually checking some of them, those samples appear to belong to the same person, i.e. they are not impostor scores. So there is overlap between the training and testing sets, which is not supposed to exist.
This bug report serves as a record of these problems. I am not sure whether they only happen to us because of different versions of the datasets.
We could discuss it at a later stage, e.g. use other BUPT datasets like BUPT-Balanced as the training set, since Wang et al. stated there is no overlap between BUPT-Balanced and RFW; face detection might be necessary, since no landmarks are given for this dataset.

Issue #77: Output of `dataset.all_samples` is inconsistent
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/77
Manuel Günther <siebenkopf@googlemail.com>, 2022-03-10T18:55:22Z

The output of the method `all_samples` of different databases returns different things. While the default `CSVDataset` https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/997e6d6dda44c928c1792518a2b625726efde0e1/bob/bio/base/database/csv_dataset.py#L744 returns a list of `Sample` (more precisely a list of `DelayedSample`), some other datasets implemented here return a list of `SampleSet`. Examples are:
* https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/38a910ac1df0ba14e8262f957ae0e666a3e2f616/bob/bio/face/database/ijbc.py#L296
* https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/38a910ac1df0ba14e8262f957ae0e666a3e2f616/bob/bio/face/database/rfw.py#L424
* https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/38a910ac1df0ba14e8262f957ae0e666a3e2f616/bob/bio/face/database/gbu.py#L238
But I am sure that I have missed some.
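Until the databases agree, downstream code can normalize the output itself. A duck-typed sketch: anything exposing a `.samples` attribute is treated as a `SampleSet`, everything else as a `Sample` (the attribute name matches the bob convention, but the helper is hypothetical):

```python
def flatten_samples(items):
    """Normalize all_samples output: expand SampleSet-like items into Samples."""
    flat = []
    for item in items:
        # SampleSet-like objects carry their members in `.samples`
        flat.extend(getattr(item, "samples", [item]))
    return flat
```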
Is there any plan to change this inconsistency? The name of the function suggests that it extracts a list of `Sample`, so we would likely want to adapt the implementations of the datasets listed here...

Issue #76: Resource names for databases not listed anywhere
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/76
Manuel Günther <siebenkopf@googlemail.com>, 2022-01-19T17:04:16Z

Currently, there is no documentation on how to set up databases.
In particular, it is mentioned nowhere which resource names need to be set in order to get the databases right.
Ideally, a script that would provide any resource keys automatically would be very helpful.
But AFAIK such a script does not exist and would not be easy to implement.
Hence, a manual list of parameters should be added to the documentation.
Finally, some parameters of some databases are non-standard.
For example, ARface has a fixed `.ppm` extension, but since ARface can be downloaded in raw format and converted to any other format (I have used `.png`, for example), there should be options to change those, too.

Issue #74: Let's talk about face alignment
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/74
Tiago de Freitas Pereira, 2022-04-25T16:18:31Z

Hi guys,
In Bob we have a 10-year-old standard for face alignment; in short, we geometrically normalize the face using a set of affine transformations with the eyes as reference.
Some packages (like FaceX-Zoo and deep-insight) do a very similar job, but with an automatic landmark detector in front.
As far as I could see, they have code to find the best affine transformation matching the automatically detected landmarks to 25 reference points of the face.
https://github.com/JDAI-CV/FaceX-Zoo/blob/5b63794ba7649fe78a29d2ce0d0216c7773f6174/face_sdk/core/image_cropper/arcface_cropper/FaceRecImageCropper.py#L101
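Fitting the transform they describe is a linear least-squares problem. A NumPy sketch of the unconstrained affine fit (packages like insightface typically restrict it further to a similarity transform, i.e. rotation, scale and translation only):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # homogeneous coordinates: append a column of ones
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3)
    # solve A @ X ~= dst in the least-squares sense; M = X.T is 2x3
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T
```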
Any thoughts on that?
Shall we put this in place?
I think this analysis of alignment would make for at least a small report (if not a conference paper).
ping @mguenther @amohammadi @ageorge

Issue #66: LFW directories are non-standard
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/66
Manuel Günther <siebenkopf@googlemail.com>, 2022-02-03T12:53:53Z

It appears that the directory structure for the LFW dataset is set to an Idiap-specific location. By default, there is no relative path like this:
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/ead8c069bafb4024dc15c5df7fdc878aec8bd5f0/bob/bio/face/database/lfw.py#L87
When downloading the images from the LFW web page, they get extracted into directories `lfw` or `lfw_funneled` (for the funneled version). In my eyes it would be more sensible to use these directories instead, i.e., use `lfw` when no annotation directory is specified, and `lfw_funneled` when annotations are provided.
Another way would be to have two distinct entries for `lfw-view2`, such as `lfw-view2-aligned` (funneled images and annotations) and `lfw-view2-raw` (original images, use detectors).

Issue #62: [SCFace] Fix channel metadata in `listing.csv`
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/62
Laurent COLBOIS, 2021-08-26T13:02:59Z

For myself. In the CSV protocols, cameras 6 and 7 have channel `RGB` assigned to them, where it should be `IR`.
This is not dramatic, as this metadata is not used in the pipelines; however, it should be fixed for consistency.
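The fix itself is a mechanical rewrite of the CSV. A sketch with the standard library; the column names (`CAMERA`, `CHANNEL`) and camera identifiers are assumptions to be adapted to the actual `listing.csv` schema:

```python
import csv

def fix_channels(in_path, out_path, ir_cameras=("cam6", "cam7")):
    """Rewrite a listing CSV, setting channel IR for the given cameras."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["CAMERA"] in ir_cameras:
                row["CHANNEL"] = "IR"  # was wrongly RGB for these cameras
            writer.writerow(row)
```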
Moreover, it can lead to mistakes when designing new protocols using the `listing.csv`.
Assignee: Laurent COLBOIS

Issue #60: Add the possibility to pass a pre-loaded model into embedding base classes.
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/60
Manuel Günther <siebenkopf@googlemail.com>, 2022-01-19T15:48:19Z

In some cases, deep learning models are instantiated inside of other packages, and it is not possible to pass the paths of the model to be loaded inside of the classes. For these cases, we should add a `model` parameter to the base class constructors, and simply store the models internally, i.e.:
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/5c0811270bc6129df64cc3a0ef10c35c64010b65/bob/bio/face/embeddings/pytorch.py#L42
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/5c0811270bc6129df64cc3a0ef10c35c64010b65/bob/bio/face/embeddings/mxnet.py#L33
https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/5c0811270bc6129df64cc3a0ef10c35c64010b65/bob/bio/face/embeddings/tensorflow.py#L53
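The proposed change boils down to accepting the model in the constructor and keeping the existing lazy-load check. A hypothetical sketch; the class and method names are illustrative, not the actual base classes:

```python
class EmbeddingTransformer:
    """Sketch of a base class taking either a checkpoint path or a model."""

    def __init__(self, checkpoint_path=None, model=None):
        if checkpoint_path is None and model is None:
            raise ValueError("pass either checkpoint_path or a pre-loaded model")
        self.checkpoint_path = checkpoint_path
        self.model = model  # may already be instantiated by another package

    def _load(self):
        # framework-specific checkpoint loading goes in subclasses
        raise NotImplementedError

    def transform(self, samples):
        if self.model is None:  # the existing lazy-loading check keeps working
            self.model = self._load()
        return [self.model(s) for s in samples]
```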
As the `transform` function always checks if the model is loaded, no further adaptations need to be done. Test code should be implemented, though.
Assignee: Manuel Günther

Issue #59: Introduce usage documentation for deep learning modules
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/59
Manuel Günther <siebenkopf@googlemail.com>, 2022-01-19T15:48:19Z

The two students who introduced the deep learning frameworks also wrote some documentation on how to use their new modules. For some reason, these did not make it into the current package, but I have a copy of it.
Since the structure of the modules has changed, we might need to update the documentation accordingly.
Assignee: Manuel Günther

Issue #56: Follow-up from "Feature extractors"
https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/56
Tiago de Freitas Pereira, 2021-06-16T14:37:01Z

The following discussion from !112 should be addressed:
- [ ] @mguenther started a [discussion](https://gitlab.idiap.ch/bob/bob.bio.face/-/merge_requests/112#note_63485): (+5 comments)
> I do not think that this is a good idea to have a default cropping for faces here, since all networks require a different cropping. Instead of using some obscure default cropping which might generate very bad features for the given network, I would rather recommend to raise an exception in case that we do not have eye locations.
>
> Note also that the `dnn_default_cropping` does not seem to be imported in this file.
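The behavior requested in the discussion, failing loudly instead of silently applying a default cropping, could look like the following sketch. The annotation key names `leye`/`reye` follow the usual bob convention but are an assumption here, as is the function itself:

```python
import logging

logger = logging.getLogger(__name__)

def check_eye_annotations(annotations, strict=True):
    """Raise (or warn, when strict=False) if eye landmarks are missing."""
    missing = [k for k in ("leye", "reye")
               if not annotations or k not in annotations]
    if missing:
        message = (f"Missing eye annotations {missing}; "
                   "cropping would fall back to an arbitrary default")
        if strict:
            raise ValueError(message)
        logger.warning(message)  # the softer option mentioned below
        return False
    return True
```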
Raise a warning during face-crop.
Assignee: Tiago de Freitas Pereira