In our experiments, we used different face recognition models. ArcFace and ElasticFace are integrated in Bob, and the code automatically downloads their checkpoints. For other models (such as AttentionNet, Swin, etc.), we used the [FaceX-Zoo repository](https://github.com/JDAI-CV/FaceX-Zoo). Therefore, you need to download the checkpoints from that repository ([this table](https://github.com/JDAI-CV/FaceX-Zoo/tree/main/training_mode#3-trained-models-and-logs)) and put them in a folder with the following structure:
```
├── backbones
│   ├── AttentionNet92
│   │   └── Epoch_17.pt
│   ├── HRNet
│   │   └── Epoch_17.pt
│   ├── RepVGG_B1
│   │   └── Epoch_17.pt
│   └── SwinTransformer_S
│       └── Epoch_17.pt
└── heads
```
You can also use other models from FaceX-Zoo; put their checkpoints in this folder following the same structure.
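A short script can create the expected folder layout before you copy the downloaded checkpoints into it. This is only a convenience sketch; the model names are taken from the tree above, and the base directory is whatever you choose:

```python
from pathlib import Path

# Models whose checkpoints were downloaded from FaceX-Zoo (per the tree above).
MODELS = ["AttentionNet92", "HRNet", "RepVGG_B1", "SwinTransformer_S"]

def make_layout(base: str) -> list[str]:
    """Create backbones/<Model> subfolders and the heads folder; return created dirs."""
    created = []
    for model in MODELS:
        d = Path(base, "backbones", model)
        d.mkdir(parents=True, exist_ok=True)
        created.append(str(d))
    heads = Path(base, "heads")
    heads.mkdir(parents=True, exist_ok=True)
    created.append(str(heads))
    return created
```

After running it, place each `Epoch_17.pt` file inside its matching `backbones/<Model>` folder.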
## Configuring the directories of the datasets
Now that you have downloaded the databases, you need to set their paths in the configuration files. [Bob](https://www.idiap.ch/software/bob) supports a configuration file
(`~/.bobrc`) in your home directory to specify where the
databases are located. Please set the paths for each database as below:
```sh
# Setup FFHQ directory
$ bob config set bob.db.ffhq.directory [YOUR_FFHQ_IMAGE_DIRECTORY]
# Setup MOBIO directories
$ bob config set bob.db.mobio.directory [YOUR_MOBIO_IMAGE_DIRECTORY]
$ bob config set bob.db.mobio.annotation_directory [YOUR_MOBIO_ANNOTATION_DIRECTORY]
# Setup LFW directories
$ bob config set bob.db.lfw.directory [YOUR_LFW_IMAGE_DIRECTORY]
$ bob config set bob.bio.face.lfw.annotation_directory [YOUR_LFW_ANNOTATION_DIRECTORY]
# Setup AgeDB directories
$ bob config set bob.db.agedb.directory [YOUR_AGEDB_IMAGE_DIRECTORY]
```
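If you want to sanity-check your configuration, the `~/.bobrc` file written by `bob config set` is (to our knowledge) a plain JSON key/value store, so a few lines of Python can verify that the required keys are present. The key names below follow the commands above; the helper names are ours:

```python
import json
from pathlib import Path

def read_bob_config(rcfile: str = "~/.bobrc") -> dict:
    """Load the JSON key/value store that `bob config set` writes."""
    path = Path(rcfile).expanduser()
    if not path.exists():
        return {}
    return json.loads(path.read_text())

def missing_keys(config: dict, required: list[str]) -> list[str]:
    """Return the required database keys that are not yet configured."""
    return [key for key in required if key not in config]
```

For example, `missing_keys(read_bob_config(), ["bob.db.ffhq.directory", "bob.db.mobio.directory"])` should return an empty list once the commands above have been run.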
If you use FaceX-Zoo models, you also need to define the path to the FaceX-Zoo checkpoints:
```sh
# Setup FaceX-Zoo checkpoints directory
$ bob config set facexzoo.checkpoints.directory [YOUR_FACEXZOO_CHECKPOINTS_DIRECTORY]
```
## Running the Experiments
### Step 1: Generating Training Dataset
You can use `GenDataset.py` to extract face recognition templates from the FFHQ dataset. These templates are used in the next step to train the model. For example, for ArcFace you can use the following command:
```sh
python GenDataset.py --FR_system ArcFace
```
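If you plan to attack several face recognition systems, you can generate the training templates for each of them in one loop. The sketch below is a dry run that only prints the commands (remove the `echo` to actually run them); the system names other than ArcFace are assumptions based on the models listed above:

```sh
# Dry run: print the GenDataset.py command for each FR system.
for FR in ArcFace ElasticFace AttentionNet92; do
    echo python GenDataset.py --FR_system "$FR"
done
```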
### Step 2: Training face reconstruction model
You can train the face reconstruction model by running `TrainNetwork/train.py`; for example, you can train an inversion attack against `ArcFace` templates protected by `BioHashing`, using `ArcFace` in the loss function.
### Step 3: Evaluation
After the model is trained, you can use it to run the evaluation. Use the `eval/pipeline_attack.py` script to evaluate on one of the evaluation datasets (MOBIO/LFW/AgeDB); for example, you can evaluate the reconstruction of ArcFace templates protected by BioHashing on the MOBIO dataset.
After you have run the evaluation pipeline, you can use `eval_SAR_TMR.py` to calculate the vulnerability in terms of Success Attack Rate (SAR).
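As a rough illustration of the metric (not the script's actual implementation), SAR is the fraction of attack scores that the system accepts, given a decision threshold that is typically set on zero-effort impostor scores at a fixed false match rate. The function names below are ours:

```python
import numpy as np

def threshold_at_fmr(impostor_scores: np.ndarray, fmr: float) -> float:
    """Decision threshold giving the target false match rate on zero-effort impostors."""
    # A fraction (1 - fmr) of impostor scores must fall below the threshold.
    return float(np.quantile(impostor_scores, 1.0 - fmr))

def sar_at_threshold(attack_scores: np.ndarray, threshold: float) -> float:
    """Fraction of reconstruction-attack scores accepted by the system (SAR)."""
    return float(np.mean(attack_scores >= threshold))
```

For example, setting the threshold at FMR = 0.1% on impostor scores and then measuring the fraction of reconstructed-image scores above it yields the SAR at that operating point.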
## Contact
For questions or to report issues with this software package, please contact the first author (hatef.otroshi@idiap.ch) or our development [mailing list](https://www.idiap.ch/software/bob/discuss).