# Breaking Template Protection: Reconstruction of Face Images from Protected Facial Templates
This package is part of the signal-processing and machine learning toolbox [Bob](https://www.idiap.ch/software/bob).
It contains the source code to reproduce the following paper:
```BibTeX
@inproceedings{fg2024_breaking_btp,
  title={Breaking Template Protection: Reconstruction of Face Images from Protected Facial Templates},
  author={Shahreza, Hatef Otroshi and Marcel, S{\'e}bastien},
  booktitle={2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG)},
  pages={1--7},
  year={2024},
  organization={IEEE}
}
```
[Project page](https://www.idiap.ch/paper/gafar/)
## Installation
The installation instructions are based on [conda](https://conda.io/) and work on **Linux systems
only**. Therefore, please [install conda](https://conda.io/docs/install/quick.html#linux-miniconda-install) before continuing.
To install, download the source code of this paper and unpack it. Then you can create a conda
environment with the following commands:
```sh
$ git clone https://gitlab.idiap.ch/bob/bob.paper.fg2024_breaking_btp
$ cd bob.paper.fg2024_breaking_btp
# create the environment
$ conda create --name bob.paper.fg2024_breaking_btp --file package-list.txt
# or
# $ conda env create -f environment.yml
# activate the environment
$ conda activate bob.paper.fg2024_breaking_btp
# install the paper package
$ pip install ./ --no-build-isolation
```
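As a quick sanity check that the environment is functional (assuming `bob.bio.face` is among the installed Bob dependencies, which this package builds on), you can try importing it inside the activated environment:
```sh
# optional sanity check inside the activated environment
$ python -c "import bob.bio.face; print('Bob is importable')"
```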
## Downloading the datasets
In our experiments, we use the [FFHQ](https://github.com/NVlabs/ffhq-dataset) dataset to train our face reconstruction network,
and the [MOBIO](https://www.idiap.ch/dataset/mobio), [LFW](http://vis-www.cs.umass.edu/lfw/), and [AgeDB](https://ibug.doc.ic.ac.uk/resources/agedb/) datasets for evaluation.
All of these datasets are publicly available. To download them, please refer to their websites:
- [FFHQ](https://github.com/NVlabs/ffhq-dataset)
- [MOBIO](https://www.idiap.ch/dataset/mobio)
- [LFW](http://vis-www.cs.umass.edu/lfw/)
- [AgeDB](https://ibug.doc.ic.ac.uk/resources/agedb/)
## Downloading Pretrained models
In our experiments, we used different face recognition models. ArcFace and ElasticFace are integrated in Bob, and the code automatically downloads their checkpoints. For the other models (such as AttentionNet, Swin, etc.), we used the [FaceX-Zoo repository](https://github.com/JDAI-CV/FaceX-Zoo). Therefore, you need to download the checkpoints from that repository ([this table](https://github.com/JDAI-CV/FaceX-Zoo/tree/main/training_mode#3-trained-models-and-logs)) and put them in a folder with the following structure:
```
├── backbones
│ ├── AttentionNet92
│ │ └── Epoch_17.pt
│ ├── HRNet
│ │ └── Epoch_17.pt
│ ├── RepVGG_B1
│ │ └── Epoch_17.pt
│ └── SwinTransformer_S
│ └── Epoch_17.pt
└── heads
```
You can use other models from FaceX-Zoo and put them in this folder, following the same structure.
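If you are creating this folder from scratch, a minimal sketch of the layout setup could look like the following (the top-level folder name `facexzoo_checkpoints` is only an example; use whatever path you later register with `bob config`):
```sh
# illustrative layout; the top-level folder name is arbitrary
mkdir -p facexzoo_checkpoints/backbones/AttentionNet92
mkdir -p facexzoo_checkpoints/backbones/HRNet
mkdir -p facexzoo_checkpoints/backbones/RepVGG_B1
mkdir -p facexzoo_checkpoints/backbones/SwinTransformer_S
mkdir -p facexzoo_checkpoints/heads
# then move each downloaded checkpoint into its backbone folder, e.g.:
# mv Epoch_17.pt facexzoo_checkpoints/backbones/AttentionNet92/
```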
## Configuring the directories of the datasets
Now that you have downloaded the datasets, you need to set their paths in the
configuration files. [Bob](https://www.idiap.ch/software/bob) supports a configuration file
(`~/.bobrc`) in your home directory to specify where the
databases are located. Please specify the paths for the databases as follows:
```sh
# Setup FFHQ directory
$ bob config set bob.db.ffhq.directory [YOUR_FFHQ_IMAGE_DIRECTORY]
# Setup MOBIO directories
$ bob config set bob.db.mobio.directory [YOUR_MOBIO_IMAGE_DIRECTORY]
$ bob config set bob.db.mobio.annotation_directory [YOUR_MOBIO_ANNOTATION_DIRECTORY]
# Setup LFW directories
$ bob config set bob.db.lfw.directory [YOUR_LFW_IMAGE_DIRECTORY]
$ bob config set bob.bio.face.lfw.annotation_directory [YOUR_LFW_ANNOTATION_DIRECTORY]
# Setup AgeDB directories
$ bob config set bob.db.agedb.directory [YOUR_AGEDB_IMAGE_DIRECTORY]
```
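You can double-check what was written to `~/.bobrc` (assuming your Bob version provides the `show` and `get` subcommands alongside `set`):
```sh
# print the whole Bob configuration
$ bob config show
# or query a single key
$ bob config get bob.db.ffhq.directory
```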
If you use FaceX-Zoo models, you also need to set the path to their checkpoints:
```sh
# Setup FaceX-Zoo checkpoints directory
$ bob config set facexzoo.checkpoints.directory [YOUR_FACEXZOO_CHECKPOINTS_DIRECTORY]
```
## Running the Experiments
### Step 1: Generating Training Dataset
You can use `GenDataset.py` to extract face recognition templates from the FFHQ dataset. These templates are used in the next step to train the model. For example, for ArcFace you can use the following command:
```sh
python GenDataset.py --FR_system ArcFace
```
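If you need templates for several face recognition systems, you can loop over them. The value `ElasticFace` below is an assumption based on the Bob-integrated models mentioned above; check the script for the exact set of accepted `--FR_system` values:
```sh
# illustrative batch run; adjust the list to the FR systems you need
for fr in ArcFace ElasticFace; do
    python GenDataset.py --FR_system "$fr"
done
```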
### Step 2: Training face reconstruction model
You can train the face reconstruction model by running `TrainNetwork/train.py`. For example, for an inversion attack against `ArcFace` templates protected by `BioHashing`, using `ArcFace` in the loss function, you can use the following command:
```sh
python TrainNetwork/train.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed 0
```
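Since BioHashing depends on a user-specific seed, a sketch for training reconstruction models against several seeds (the seed values here are illustrative) could be:
```sh
# illustrative sweep over BioHashing user seeds
for seed in 0 1 2; do
    python TrainNetwork/train.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed "$seed"
done
```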
### Step 3: Evaluation (Template Inversion)
After the model is trained, you can use it to run the evaluation.
For evaluation, you can use the `eval/pipeline_attack.py` script on an evaluation dataset (MOBIO/LFW/AgeDB). For example, to evaluate the reconstruction of ArcFace templates protected by BioHashing on the MOBIO dataset, you can use the following command:
```sh
python eval/pipeline_attack.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed 0 --dataset MOBIO
```
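To cover all three evaluation datasets with the same trained model, you can loop over the `--dataset` values:
```sh
# run the same attack evaluation on each dataset
for ds in MOBIO LFW AgeDB; do
    python eval/pipeline_attack.py --FR_system ArcFace --FR_loss ArcFace --btp BioHashing --user_seed 0 --dataset "$ds"
done
```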
After running the evaluation pipeline, you can use `eval_SAR_TMR.py` to calculate the vulnerability in terms of Success Attack Rate (SAR).
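The exact arguments of `eval_SAR_TMR.py` are not documented here; assuming it exposes a standard command-line interface, you can inspect its options first:
```sh
# inspect the script's options before running it (assumes an argparse-style CLI)
python eval_SAR_TMR.py --help
```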
## Contact
For questions or to report issues with this software package, please contact the first author (hatef.otroshi@idiap.ch) or our development [mailing list](https://www.idiap.ch/software/bob/discuss).