
Inversion of Deep Facial Templates using Synthetic Data

This package is part of the signal-processing and machine learning toolbox Bob. It contains the source code to reproduce the following paper:

@inproceedings{ijcb2023faceti,
  title={Inversion of Deep Facial Templates using Synthetic Data},
  author={Shahreza, Hatef Otroshi and Marcel, S{\'e}bastien},
  booktitle={2023 IEEE International Joint Conference on Biometrics (IJCB)},
  pages={1--8},
  year={2023},
  organization={IEEE}
}

Link to Paper

Installation

The installation instructions are based on conda and work on Linux systems only. Therefore, please install conda before continuing.

For installation, please download the source code of this paper and unpack it. Then, you can create a conda environment with the following commands:

$ git clone https://gitlab.idiap.ch/bob/bob.paper.ijcb2023_face_ti
$ cd bob.paper.ijcb2023_face_ti

# create the environment
$ conda create --name bob.paper.ijcb2023_face_ti --file package-list.txt
# or 
# $ conda env create -f environment.yml

# activate the environment
$ conda activate bob.paper.ijcb2023_face_ti

# install paper package
$ pip install ./ --no-build-isolation  

We use StyleGAN3 as a pretrained face generator network. Therefore, you need to clone its git repository and download the pretrained model:

$ git clone https://github.com/NVlabs/stylegan3.git

We use the stylegan3-r-ffhq-1024x1024.pkl checkpoint in our experiments.
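As a quick sanity check that the generator checkpoint is usable, a minimal sketch along the lines of the StyleGAN3 README can load the pickle and synthesize one random face. This snippet is not part of this package; the paths are placeholders and should point to the cloned repository and the downloaded checkpoint.

# Minimal sketch (not part of this repository): load the StyleGAN3 generator
# and synthesize a random face. Adjust the placeholder paths.
import sys, pickle
import torch

sys.path.insert(0, "stylegan3")  # the cloned NVlabs/stylegan3 repo, needed to unpickle

with open("stylegan3-r-ffhq-1024x1024.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].eval()   # pretrained generator module

z = torch.randn([1, G.z_dim])             # random latent code
img = G(z, None)                          # NCHW float image, dynamic range [-1, +1]
print(img.shape)                          # expected: torch.Size([1, 3, 1024, 1024])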

Downloading the datasets

In our experiments, we use the FFHQ dataset to train our face reconstruction network, and the MOBIO and LFW datasets for evaluation. All of these datasets are publicly available. To download the datasets, please refer to their websites.

Downloading Pretrained models

In our experiments, we used different face recognition models. Among these, ArcFace and ElasticFace are integrated in Bob, and the code automatically downloads their checkpoints. For the other models (such as AttentionNet, Swin, etc.) we used the FaceX-Zoo repository. Therefore, you need to download their checkpoints from that repository (this table) and put them in a folder with the following structure:

├── backbones
│   ├── AttentionNet92
│   │   └── Epoch_17.pt
│   ├── HRNet
│   │   └── Epoch_17.pt
│   └── SwinTransformer_S
│       └── Epoch_17.pt
└── heads

You can use other models from FaceX-Zoo and put them in this folder, following the same structure.
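Before configuring Bob, it can be useful to verify that a downloaded checkpoint deserializes correctly. The sketch below is an assumption, not part of this package; FaceX-Zoo checkpoints are plain PyTorch files, but their exact key layout depends on the model, so the snippet only loads and inspects one.

# Minimal sketch (assumption, not part of this package): check that a
# downloaded FaceX-Zoo checkpoint can be loaded with PyTorch.
import torch

ckpt_path = "backbones/AttentionNet92/Epoch_17.pt"   # adjust to your folder
ckpt = torch.load(ckpt_path, map_location="cpu")

# The checkpoint may be a raw state dict or a dict wrapping one
# (e.g. under a 'state_dict' key), so just inspect what is inside.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))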

Configuring the directories of the datasets

Now that you have downloaded the three databases, you need to set their paths in the configuration file. Bob supports a configuration file (~/.bobrc) in your home directory to specify where the databases are located. Please specify the paths for the databases as below:

# Setup FFHQ directory
$ bob config set  bob.db.ffhq.directory [YOUR_FFHQ_IMAGE_DIRECTORY]

# Setup MOBIO directories
$ bob config set  bob.db.mobio.directory [YOUR_MOBIO_IMAGE_DIRECTORY]
$ bob config set  bob.db.mobio.annotation_directory [YOUR_MOBIO_ANNOTATION_DIRECTORY]

# Setup LFW directories
$ bob config set  bob.db.lfw.directory [YOUR_LFW_IMAGE_DIRECTORY]
$ bob config set  bob.bio.face.lfw.annotation_directory [YOUR_LFW_ANNOTATION_DIRECTORY]

If you use FaceX-Zoo models, you also need to define the path to the FaceX-Zoo checkpoints:

# Setup FaceX-Zoo checkpoints directory
$ bob config set  facexzoo.checkpoints.directory [YOUR_FACEXZOO_CHECKPOINTS_DIRECTORY]
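To double-check the configured paths before running the pipelines, you can read the configuration file back. The sketch below assumes ~/.bobrc is a JSON file as written by bob config set; it is only a convenience check, not part of this package.

# Minimal sketch (assumption: ~/.bobrc is JSON, as written by `bob config set`):
# print the configured paths to verify them before running the experiments.
import json
import os

with open(os.path.expanduser("~/.bobrc")) as f:
    rc = json.load(f)

for key in (
    "bob.db.ffhq.directory",
    "bob.db.mobio.directory",
    "bob.db.mobio.annotation_directory",
    "bob.db.lfw.directory",
    "bob.bio.face.lfw.annotation_directory",
    "facexzoo.checkpoints.directory",
):
    print(f"{key}: {rc.get(key, '<not set>')}")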

Running the Experiments

Step 1: Training face reconstruction model

You can train the face reconstruction model by running train.py. For example, for an attack against ElasticFace using ArcFace in the loss function, you can use the following command:

python train.py --path_stylegan_repo <path_stylegan_repo>  --path_stylegan_checkpoint <path_stylegan_checkpoint>       \
                --FR_system ElasticFace   --FR_loss  ArcFace

Note: Pre-trained models for attacks against ArcFace and ElasticFace are available in the ./checkpoints folder of this repository.

Step 2: Evaluation

After the model is trained, you can use it to run the evaluation. For evaluation, you can use the evaluation_pipeline script and evaluate on an evaluation dataset (MOBIO/LFW). For example, to evaluate a face reconstruction attack against ArcFace on the MOBIO dataset, you can use the following command:

python evaluation_pipeline.py --path_stylegan_repo <path_stylegan_repo>  --path_stylegan_checkpoint <path_stylegan_checkpoint>    \
                              --FR_system ArcFace   --checkpoint <path_checkpoint>    --dataset MOBIO 

After you have run the evaluation pipeline, you can use eval_SAR_TMR.py to calculate the vulnerability in terms of Success Attack Rate (SAR).
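For context, the sketch below is a generic illustration of this kind of metric, not the repository's eval_SAR_TMR.py: assuming you have genuine, zero-effort impostor, and attack comparison scores from the pipeline, the decision threshold is fixed at a target false match rate (e.g. 0.1%), and the system's true match rate (TMR) and the attack's SAR are reported at that threshold.

# Generic illustration of SAR at a fixed FMR threshold (an assumption about the
# metric, not the repository's eval_SAR_TMR.py). Inputs are 1D score arrays,
# where higher scores mean more similar.
import numpy as np

def sar_at_fmr(genuine, impostor, attack, target_fmr=1e-3):
    """Set the threshold at the (1 - target_fmr) quantile of impostor scores,
    then report the threshold, the system's TMR, and the attack's SAR."""
    threshold = np.quantile(np.asarray(impostor), 1.0 - target_fmr)
    tmr = float(np.mean(np.asarray(genuine) >= threshold))
    sar = float(np.mean(np.asarray(attack) >= threshold))
    return threshold, tmr, sar

# Example with synthetic scores:
rng = np.random.default_rng(0)
print(sar_at_fmr(rng.normal(2, 1, 1000), rng.normal(0, 1, 100000), rng.normal(1.5, 1, 1000)))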

Contact

For questions or to report issues with this software package, please contact the first author (hatef.otroshi@idiap.ch) or our development mailing list.