
Breaking Template Protection: Reconstruction of Face Images from Protected Facial Templates

This package is part of the signal-processing and machine learning toolbox Bob. It contains the source code to reproduce the following paper:

@inproceedings{fg2024_breaking_btp,
  title={Breaking Template Protection: Reconstruction of Face Images from Protected Facial Templates},
  author={Shahreza, Hatef Otroshi and Marcel, S{\'e}bastien},
  booktitle={2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG)},
  pages={1--7},
  year={2024},
  organization={IEEE}
}

Project page

Installation

The installation instructions are based on conda and work on Linux systems only. Therefore, please install conda before continuing.

For installation, please clone the source code of this paper. Then you can create a conda environment with the following commands:

$ git clone https://gitlab.idiap.ch/bob/bob.paper.tbob.paper.fg2024_breaking_btp
$ cd bob.paper.tbob.paper.fg2024_breaking_btp

# create the environment
$ conda create --name bob.paper.tbob.paper.fg2024_breaking_btp --file package-list.txt
# or 
# $ conda env create -f environment.yml

# activate the environment
$ conda activate bob.paper.tbob.paper.fg2024_breaking_btp  

# install paper package
$ pip install ./ --no-build-isolation  

Downloading the datasets

In our experiments, we use the FFHQ dataset for training our face reconstruction network, and the MOBIO, LFW, and AgeDB datasets for evaluation. All of these datasets are publicly available. To download them, please refer to their websites:

Downloading Pretrained models

In our experiments, we used different face recognition models. Among these, ArcFace and ElasticFace are integrated in Bob, and the code automatically downloads their checkpoints. For the other models (such as AttentionNet, Swin, etc.), we used the FaceX-Zoo repository. Therefore, you need to download the checkpoints from that repository (this table) and put them in a folder with the following structure:

├── backbones
│   ├── AttentionNet92
│   │   └── Epoch_17.pt
│   ├── HRNet
│   │   └── Epoch_17.pt
│   ├── RepVGG_B1
│   │   └── Epoch_17.pt
│   └── SwinTransformer_S
│       └── Epoch_17.pt
└── heads

You can use other models from FaceX-Zoo and place them in this folder following the same structure.
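The layout above can be prepared with a short shell sketch. The directory and file names are taken from the tree above; the `Epoch_17.pt` checkpoint files themselves still have to be downloaded from FaceX-Zoo and copied in manually:

```shell
# Create the expected FaceX-Zoo checkpoint layout (checkpoints downloaded separately).
mkdir -p heads
for model in AttentionNet92 HRNet RepVGG_B1 SwinTransformer_S; do
  mkdir -p "backbones/${model}"
  # Place the downloaded checkpoint here as backbones/<model>/Epoch_17.pt
done
ls backbones
```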

Configuring the directories of the datasets

Now that you have downloaded the databases, you need to set their paths in the configuration files. Bob supports a configuration file (~/.bobrc) in your home directory to specify where the databases are located. Please specify the paths for the databases as below:

# Setup FFHQ directory
$ bob config set  bob.db.ffhq.directory [YOUR_FFHQ_IMAGE_DIRECTORY]

# Setup MOBIO directories
$ bob config set  bob.db.mobio.directory [YOUR_MOBIO_IMAGE_DIRECTORY]
$ bob config set  bob.db.mobio.annotation_directory [YOUR_MOBIO_ANNOTATION_DIRECTORY]

# Setup LFW directories
$ bob config set  bob.db.lfw.directory [YOUR_LFW_IMAGE_DIRECTORY]
$ bob config set  bob.bio.face.lfw.annotation_directory [YOUR_LFW_ANNOTATION_DIRECTORY]

# Setup AgeDB directories
$ bob config set  bob.db.agedb.directory [YOUR_AGEDB_IMAGE_DIRECTORY]

If you use FaceX-Zoo models you need to define the paths to the checkpoints of FaceX-Zoo models too:

# Setup FaceX-Zoo checkpoints directory
$ bob config set  facexzoo.checkpoints.directory [YOUR_FACEXZOO_CHECKPOINTS_DIRECTORY]
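To double-check the configuration, you can inspect the ~/.bobrc file mentioned above. This is a minimal sketch; it assumes the file only exists after the `bob config set` commands have been run:

```shell
# Print the Bob configuration file if it exists, otherwise remind the user to create it.
CONFIG="$HOME/.bobrc"
if [ -f "$CONFIG" ]; then
  cat "$CONFIG"
else
  echo "missing $CONFIG - run the 'bob config set' commands above first"
fi
```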

Running the Experiments

Step 1: Generating Training Dataset

You can use GenDataset.py to extract face recognition templates from the FFHQ dataset. These templates are used in the next step to train the model. For example, for ArcFace you can use the following command:

python GenDataset.py --FR_system ArcFace
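If you want to prepare training data for several face recognition systems, a loop helps keep the runs consistent. Only ArcFace appears in the example above; whether other names such as ElasticFace are accepted by `--FR_system` is an assumption here, so this sketch only prints the commands rather than running them:

```shell
# Print one GenDataset.py invocation per face recognition system (dry run).
# ArcFace comes from the example above; ElasticFace is an assumed flag value.
for fr in ArcFace ElasticFace; do
  echo "python GenDataset.py --FR_system ${fr}"
done
```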

Step 2: Training face reconstruction model

You can train the face reconstruction model by running TrainNetwork/train.py. For example, for an inversion attack against ArcFace templates protected by BioHashing, using ArcFace in the loss function, you can use the following command:

python train.py --FR_system ArcFace   --FR_loss  ArcFace  --btp BioHashing --user_seed 0
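Since `--user_seed` controls the BioHashing user seed, you may want to train models for several seeds. Only seed 0 appears in the example above; other seed values are an assumption, so this sketch only prints the training commands rather than running them:

```shell
# Print training commands for several BioHashing user seeds (dry run).
# Seed 0 is from the example above; seeds 1 and 2 are assumed values.
for seed in 0 1 2; do
  echo "python train.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed ${seed}"
done
```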

Step 3: Evaluation (Template Inversion)

After the model is trained, you can use it to run the evaluation. For evaluation, you can use the eval/pipeline_attack.py script on an evaluation dataset (MOBIO/LFW/AgeDB). For example, to evaluate the reconstruction of ArcFace templates protected by BioHashing on the MOBIO dataset, you can use the following command:

python eval/pipeline_attack.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed 0 --dataset MOBIO
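To cover all three evaluation datasets named in this README, the same command can be repeated per dataset. This sketch only prints the commands (it assumes the `--dataset` flag accepts the values MOBIO, LFW, and AgeDB exactly as written):

```shell
# Print one evaluation command per dataset (dry run).
for ds in MOBIO LFW AgeDB; do
  echo "python eval/pipeline_attack.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed 0 --dataset ${ds}"
done
```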

After running the evaluation pipeline, you can use eval_SAR_TMR.py to calculate the vulnerability in terms of Success Attack Rate (SAR).

Contact

For questions or to report issues with this software package, please contact the first author (hatef.otroshi@idiap.ch) or our development mailing list.