# Comprehensive Vulnerability Evaluation of Face Recognition Systems to Template Inversion Attacks via 3D Face Reconstruction 

This package is part of the signal-processing and machine learning toolbox [Bob](https://www.idiap.ch/software/bob).
It contains the source code to reproduce the following paper:
```BibTeX
@article{tpami2023ti3d,
    author    = {Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
    title     = {Comprehensive Vulnerability Evaluation of Face Recognition Systems to Template Inversion Attacks Via 3D Face Reconstruction},
    journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year      = {2023},
    volume    = {45},
    number    = {12},
    pages     = {14248-14265},
    doi       = {10.1109/TPAMI.2023.3312123}
  }
```
[Project page](https://www.idiap.ch/paper/gafar/)

## Installation
The installation instructions are based on [conda](https://conda.io/) and work on **Linux systems
only**. Therefore, please [install conda](https://conda.io/docs/install/quick.html#linux-miniconda-install) before continuing.

To install, clone the source code of this paper. Then, you can create a conda
environment with the following commands:

```sh
$ git clone https://gitlab.idiap.ch/bob/bob.paper.tpami2023_face_ti
$ cd bob.paper.tpami2023_face_ti

# create the environment
$ conda create --name bob.paper.tpami2023_face_ti --file package-list.txt
# or 
# $ conda env create -f environment.yml

# activate the environment
$ conda activate bob.paper.tpami2023_face_ti  

# install paper package
$ pip install ./ --no-build-isolation  
```
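To check that the environment is functional, a quick sanity check can help (a minimal sketch; we assume PyTorch is among the environment's dependencies, which `package-list.txt` determines):
```sh
# Hypothetical check: the exact packages depend on package-list.txt.
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```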
We use [EG3D](https://github.com/NVlabs/eg3d) as a pretrained face generator network based on generative neural radiance fields (GNeRF). Therefore, you need to clone its git repository and download an [available pretrained model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/research/models/eg3d):
```sh
$ git clone https://github.com/NVlabs/eg3d.git
```
We use the `ffhq512-128.pkl` [checkpoint](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/research/models/eg3d/files) in our experiments.
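The cloned repository and the downloaded checkpoint are passed to the training and evaluation scripts via `--path_eg3d_repo` and `--path_eg3d_checkpoint` (see below). A minimal sketch of one possible layout (the `checkpoints` folder name is our own choice):
```sh
# Hypothetical layout; any location works as long as the paths match.
$ mkdir -p checkpoints
$ mv ~/Downloads/ffhq512-128.pkl checkpoints/
# Later, pass: --path_eg3d_repo ./eg3d --path_eg3d_checkpoint ./checkpoints/ffhq512-128.pkl
```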

## Downloading the datasets
In our experiments, we use the [FFHQ](https://github.com/NVlabs/ffhq-dataset) dataset for training our face reconstruction network,
and the [MOBIO](https://www.idiap.ch/dataset/mobio) and [LFW](http://vis-www.cs.umass.edu/lfw/) datasets for evaluation.
All of these datasets are publicly available. To download them, please refer to their websites:
- [FFHQ](https://github.com/NVlabs/ffhq-dataset)
- [MOBIO](https://www.idiap.ch/dataset/mobio)
- [LFW](http://vis-www.cs.umass.edu/lfw/)


## Downloading Pretrained models
In our experiments, we used different face recognition models. Among them, ArcFace and ElasticFace are integrated in Bob, and the code automatically downloads their checkpoints. For the other models (such as AttentionNet, SwinTransformer, etc.), we used the [FaceX-Zoo repository](https://github.com/JDAI-CV/FaceX-Zoo). Therefore, you need to download the checkpoints from that repository ([this table](https://github.com/JDAI-CV/FaceX-Zoo/tree/main/training_mode#3-trained-models-and-logs)) and put them in a folder with the following structure:
```
├── backbones
│   ├── AttentionNet92
│   │   └── Epoch_17.pt
│   ├── HRNet
│   │   └── Epoch_17.pt
│   ├── RepVGG_B1
│   │   └── Epoch_17.pt
│   └── SwinTransformer_S
│       └── Epoch_17.pt
└── heads
```
You can use other models from FaceX-Zoo by putting their checkpoints in this folder with the same structure; see the sketch below.
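A minimal sketch for creating this structure (bash brace expansion; the root folder name `facexzoo_checkpoints` is our own choice and must match the path you register with Bob below):
```sh
# Create the expected folder structure (names follow the tree above).
$ mkdir -p facexzoo_checkpoints/backbones/{AttentionNet92,HRNet,RepVGG_B1,SwinTransformer_S}
$ mkdir -p facexzoo_checkpoints/heads
# Place each downloaded checkpoint in its backbone folder, e.g.:
$ mv Epoch_17.pt facexzoo_checkpoints/backbones/AttentionNet92/
```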


## Configuring the directories of the datasets
Now that you have downloaded the three databases, you need to set their paths in
the configuration files. [Bob](https://www.idiap.ch/software/bob) supports a configuration file
(`~/.bobrc`) in your home directory to specify where the
databases are located. Please specify the paths for the databases as below:
```sh
# Setup FFHQ directory
$ bob config set  bob.db.ffhq.directory [YOUR_FFHQ_IMAGE_DIRECTORY]

# Setup MOBIO directories
$ bob config set  bob.db.mobio.directory [YOUR_MOBIO_IMAGE_DIRECTORY]
$ bob config set  bob.db.mobio.annotation_directory [YOUR_MOBIO_ANNOTATION_DIRECTORY]

# Setup LFW directories
$ bob config set  bob.db.lfw.directory [YOUR_LFW_IMAGE_DIRECTORY]
$ bob config set  bob.bio.face.lfw.annotation_directory [YOUR_LFW_ANNOTATION_DIRECTORY]
```
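You can verify that a path was stored correctly with `bob config get` (shown here for FFHQ; the other keys work the same way):
```sh
$ bob config get bob.db.ffhq.directory
```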
If you use FaceX-Zoo models, you also need to set the path to their checkpoints:
```sh
# Setup FaceX-Zoo checkpoints directory
$ bob config set  facexzoo.checkpoints.directory [YOUR_FACEXZOO_CHECKPOINTS_DIRECTORY]
```

## Running the Experiments
### Step 1: Training face reconstruction model
You can train the face reconstruction model by running `train.py`. For example, for a blackbox attack against `ElasticFace` using `ArcFace` in the loss function, you can use the following command:
```sh
python train.py --path_eg3d_repo <path_eg3d_repo>  --path_eg3d_checkpoint <path_eg3d_checkpoint>       \
                --FR_system ElasticFace   --FR_loss  ArcFace  --path_ffhq_dataset <path_ffhq_dataset>
```
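For the whitebox setting, the loss presumably uses the target model itself; a hedged sketch, assuming that passing the same model to both flags yields the whitebox attack:
```sh
# Whitebox attack (sketch): same model for the target system and the loss.
python train.py --path_eg3d_repo <path_eg3d_repo>  --path_eg3d_checkpoint <path_eg3d_checkpoint>  \
                --FR_system ElasticFace  --FR_loss ElasticFace  --path_ffhq_dataset <path_ffhq_dataset>
```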

#### Pre-trained models (GaFaR Mapping Network)
[Checkpoints](https://www.idiap.ch/paper/gafar/static/files/checkpoints.zip) of the trained mapping network for whitebox and blackbox attacks are available on the [project page](https://www.idiap.ch/paper/gafar/).
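A minimal sketch for fetching and unpacking them (the target folder name is our own choice):
```sh
$ wget https://www.idiap.ch/paper/gafar/static/files/checkpoints.zip
$ unzip checkpoints.zip -d gafar_checkpoints
```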


### Step 2: Evaluation (Template Inversion)
After the model is trained, you can use it to run the evaluation.
For evaluation, you can use the `evaluation_pipeline.py` script and evaluate on an evaluation dataset (MOBIO/LFW). For example, to evaluate a face reconstruction model trained on ElasticFace attacking the same system on the MOBIO dataset, you can use the following command:
```sh
python evaluation_pipeline.py --path_eg3d_repo <path_eg3d_repo>  --path_eg3d_checkpoint <path_eg3d_checkpoint>    \
                --FR_system ElasticFace  --FR_target  ElasticFace --attack GaFaR  --checkpoint <path_checkpoint>  \
                --dataset MOBIO
```
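To measure transferability, the target system can differ from the model used during training; a hedged sketch on LFW, assuming `ArcFace` is a valid `--FR_target` value:
```sh
# Cross-system attack (sketch): reconstruction trained against ElasticFace, evaluated against ArcFace.
python evaluation_pipeline.py --path_eg3d_repo <path_eg3d_repo>  --path_eg3d_checkpoint <path_eg3d_checkpoint>  \
                --FR_system ElasticFace  --FR_target ArcFace  --attack GaFaR  --checkpoint <path_checkpoint>    \
                --dataset LFW
```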
After you have run the evaluation pipeline, you can use `eval_SAR_TMR.py` to calculate the vulnerability in terms of Success Attack Rate (SAR).
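We do not list the script's flags here; if it uses a standard argument parser, you can inspect its options first (a hedged sketch; no specific flags are assumed):
```sh
# List the available options before running the script on your score files.
$ python eval_SAR_TMR.py --help
```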


## Other Materials
### Project Page
You can find general information about this work, including a general block diagram of the proposed method and the experiments, on the [project page](https://www.idiap.ch/paper/gafar/).
### Dataset of Presentation Attacks
As described in the paper, we used the reconstructed face images from the MOBIO dataset to perform practical presentation attacks.
The captured images from our presentation attacks are [publicly available](https://www.idiap.ch/en/dataset/gafar).

## Contact
For questions or to report issues with this software package, please contact the first author (hatef.otroshi@idiap.ch) or our development [mailing list](https://www.idiap.ch/software/bob/discuss).