bob.paper.ijcb2021_synthetic_dataset
This package is part of the signal-processing and machine learning toolbox Bob. It contains the source code to reproduce experiments from the article *On the use of automatically generated synthetic image datasets for benchmarking face recognition*.
It mainly contains tools to perform the following operations:
- Projection of a face dataset into StyleGAN2's latent space (./bin/project_db.py)
- Computation of semantic editing latent directions from those projections (./bin/latent_analysis.py)
- Generation of a synthetic dataset using the precomputed latent directions (./bin/generate_db.py)
- Running a face recognition benchmark experiment on the synthetic dataset (bob bio pipelines vanilla-biometrics)
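The latent analysis and generation steps (2 and 3) both revolve around latent-space arithmetic: a semantic editing direction is a vector that, added to a projected latent code, changes one attribute of the generated face. A minimal numpy sketch of that idea (the function names and the difference-of-means estimator are illustrative assumptions, not the repository's actual implementation):

```python
import numpy as np

def semantic_direction(latents_without, latents_with):
    """Estimate an editing direction as the difference of class means.

    Each argument is an array of shape (n_samples, dim) holding latent
    codes projected from images without / with some attribute.
    """
    return latents_with.mean(axis=0) - latents_without.mean(axis=0)

def edit(latent, direction, alpha):
    """Move a latent code along a semantic direction with strength alpha."""
    return latent + alpha * direction

# Toy example in a 4-D latent space: the attribute shifts dimension 1.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(10, 4))
smiling = neutral + np.array([0.0, 1.0, 0.0, 0.0])

d = semantic_direction(neutral, smiling)
edited = edit(neutral[0], d, alpha=1.0)
```

In the real pipeline the latent codes live in StyleGAN2's latent space and come from `./bin/project_db.py`; the directions are then consumed by `./bin/generate_db.py`.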
Installation
This project contains two distinct conda environments:
- generation_env.yml This environment is based on Bob 8 and Tensorflow 1, and is used for steps 1 to 3 (dataset projection, latent analysis, and database generation)
- benchmark_env.yml This environment is based on Bob 9 and Tensorflow 2, and is used for step 4 (running the benchmark experiments).
To install everything correctly, after pulling this repository from GitLab, you need to:

1. Install both environments:

   conda env create -f generation_env.yml
   conda env create -f benchmark_env.yml

2. Run buildout to extend the generation environment with the tools available in this repository:

   conda activate synface    # Activate the generation env.
   buildout -c buildout.cfg  # Run buildout
This second step creates a bin folder containing, in particular:
- ./bin/python Custom Python executable containing the generation env. extended with bob.paper.ijcb2021_synthetic_dataset
- ./bin/project_db.py Dataset projection script (entry point)
- ./bin/latent_analysis.py Script for computing latent directions (entry point)
- ./bin/generate_db.py Synthetic dataset generation script (entry point)
- ./bin/download_models.py Utility to download the required pretrained models (entry point)
How to run
Download model dependencies
This project relies on several preexisting pretrained models:
- DLIB Face Landmark detector for cropping and aligning the projected faces exactly as in FFHQ. ([Example](http://dlib.net/face_landmark_detection.py.html))
- StyleGAN2 as the main face synthesis network. ([Original paper](https://arxiv.org/abs/1912.04958), [Official repository](https://github.com/NVlabs/stylegan2)). We are using Config-F, trained on FFHQ at resolution 1024 x 1024
- A pretrained VGG16 model, used to compute a perceptual loss between projected and target image ([Original paper](https://arxiv.org/abs/1801.03924))
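A perceptual loss compares two images in a feature space rather than pixel by pixel. The sketch below illustrates the principle only: `features` is a toy stand-in (local averages), whereas the actual projection uses VGG16 activations.

```python
import numpy as np

def features(image):
    """Toy feature extractor: 4x4 local averages, flattened to a vector.

    Stand-in for the VGG16 activations used by the real perceptual loss.
    """
    h, w = image.shape[:2]
    return image.reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3)).ravel()

def perceptual_loss(projected, target):
    """Mean squared distance between the two feature representations."""
    return float(np.mean((features(projected) - features(target)) ** 2))

rng = np.random.default_rng(0)
target = rng.uniform(size=(32, 32, 3))
projected = rng.uniform(size=(32, 32, 3))
loss = perceptual_loss(projected, target)
```

During projection, such a loss is minimized with respect to the latent code so that the generated image matches the target.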
In order to download those models, one must first specify their destination paths in the ~/.bobrc file, through the following commands:
conda activate synface
bob config set sg2_morph.dlib_lmd_path </path/to/dlib/landmark/detector.dat>
bob config set sg2_morph.sg2_path </path/to/stylegan2/pretrained/model.pkl>
bob config set sg2_morph.vgg16_path </path/to/vgg16/pretrained/model.pkl>
You can then download the models once and for all by running
./bin/download_models.py
Prepare folder configuration
Several folders and files must also be registered through bob config:

   # Absolute path of this repo, can be useful to launch execution on a grid due to some relative paths in the code
   bob config set bob.paper.ijcb2021_synthetic_dataset.path <path_of_this_repo>

   # Folder to store projected Multi-PIE latent projections
   bob config set bob.synface.multipie_projections <path_to_folder>

   # Folder containing Multi-PIE images
   bob config set bob.db.multipie.directory <path_to_folder>

   # Folder containing Multi-PIE face annotations
   bob config set bob.db.multipie.annotations_directory <path_to_folder>

   # Path to the Pickle file where to store computed latent directions
   bob config set bob.synface.latent_directions <path_to_file>
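As a sanity check before launching long experiments, you can verify that every expected key has been set. The sketch below assumes `bob config` stores its entries as a flat JSON dictionary in ~/.bobrc (verify this against your Bob version); the key list is copied from the commands above.

```python
import json
from pathlib import Path

EXPECTED_KEYS = [
    "bob.paper.ijcb2021_synthetic_dataset.path",
    "bob.synface.multipie_projections",
    "bob.db.multipie.directory",
    "bob.db.multipie.annotations_directory",
    "bob.synface.latent_directions",
]

def missing_keys(rc_path):
    """Return the expected configuration keys absent from a .bobrc file."""
    config = json.loads(Path(rc_path).read_text())
    return [key for key in EXPECTED_KEYS if key not in config]
```

Example: `missing_keys(Path.home() / ".bobrc")` returns an empty list when the configuration is complete.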
Contact
For questions or to report issues with this software package, contact our development mailing list.