# HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere
This repository contains source code to reproduce the following paper:
```bibtex
@inproceedings{shahreza2025hyperface,
title={HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere},
author={Hatef Otroshi Shahreza and S{\'e}bastien Marcel},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025}
}
```
![](assets/HyperFace_blockdiagram.png)
Project page: https://www.idiap.ch/paper/hyperface/
## Download HyperFace Dataset
You can download the HyperFace dataset from the [project page](https://www.idiap.ch/paper/hyperface/).
## Installation
```sh
conda create -n hyperface python=3.10
conda activate hyperface
# Install requirements
pip install -r requirements.txt
```
We use [Arc2Face](https://github.com/foivospar/Arc2Face) as the face generator model.
You can download the pretrained models by following the instructions in the [Arc2Face repository](https://github.com/foivospar/Arc2Face?tab=readme-ov-file#download-models).
## Generating HyperFace Dataset
### Step 1: Extract Embeddings (for initialization and regularization)
For the initialization and regularization of the HyperFace optimization, we extract embeddings of face images with a pretrained face recognition model using the `extract_emb_mp.py` script. The extracted embeddings are stored as a NumPy file, which is used in the next step to solve the HyperFace optimization.
```sh
python extract_emb_mp.py --path_dataset <path_to_face_dataset> --n_ids 10000 --path_save ./points_init
```
You can use synthetic face images generated by pretrained face generator models (such as StyleGAN or a diffusion model) or real images (such as the BUPT dataset) in this step. An ablation study on the effect of the face generator model is reported in Table 5 of the paper.
You can use the `generate_stylegan.py` and `generate_ldm.py` scripts to generate images with StyleGAN and LDM, respectively.
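As a rough illustration of what this step produces, the sketch below L2-normalizes a matrix of embeddings so each identity lies on the unit hypersphere and stores them as a NumPy file. The random matrix here is a stand-in for features from a pretrained face recognition model; the file name and dimensions are illustrative, not the script's actual output.

```python
import numpy as np

def normalize_to_hypersphere(embeddings):
    """L2-normalize each row so every embedding lies on the unit hypersphere."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / norms

# Stand-in for features from a pretrained face recognition model:
# 10 identities, each a 512-D embedding.
rng = np.random.default_rng(0)
raw = rng.normal(size=(10, 512))
points_init = normalize_to_hypersphere(raw)

# Stored as a NumPy file, to be consumed by the optimization in Step 2.
np.save("points_init.npy", points_init)
```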
### Step 2: Solving HyperFace Optimization
To solve the HyperFace (stochastic) optimization, you can use the `solve_hyperface.py` script:
```sh
python solve_hyperface.py \
--path_init_points ./points/points_init \
--path_save ./points/hyperface --n_ids $n_ids --n_gallery $n_gallery --optimisation_batch $optimisation_batch
```
- Note that `optimisation_batch` can be small (e.g., 512) for stochastic optimization, or equal to the number of IDs for full-batch optimization.
For more details on the stochastic optimization and a comparison with the full-batch variant, please see Appendix A-B of the paper.
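To convey the flavor of this step, here is a toy sketch of a hypersphere optimization: minibatches of points are pushed apart (by gradient descent on the sum of pairwise dot products) while a regularizer keeps them near their initial positions, and each update is projected back onto the unit sphere. This is an illustrative stand-in, not the paper's exact objective or implementation; all function names and hyperparameters here are hypothetical.

```python
import numpy as np

def spread_on_hypersphere(points_init, n_steps=200, batch=4, lr=0.1, reg=0.05, seed=0):
    """Toy sketch: repel unit vectors from each other in minibatches
    (the stochastic variant), with a pull back toward the initial points."""
    rng = np.random.default_rng(seed)
    x = points_init.copy()
    n = len(x)
    for _ in range(n_steps):
        idx = rng.choice(n, size=batch, replace=False)
        xb = x[idx]
        # Gradient of the sum of pairwise dot products: each point is
        # pushed away from the other points in the batch.
        grad = xb.sum(axis=0, keepdims=True) - xb
        grad += reg * (xb - points_init[idx])  # regularize toward init
        xb = xb - lr * grad
        # Project the updated points back onto the unit hypersphere.
        x[idx] = xb / np.linalg.norm(xb, axis=1, keepdims=True)
    return x

rng = np.random.default_rng(1)
init = rng.normal(size=(8, 16))
init /= np.linalg.norm(init, axis=1, keepdims=True)
optimized = spread_on_hypersphere(init)
```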
### Step 3: Generating HyperFace Dataset
After solving the HyperFace optimization, you can generate the HyperFace dataset using the `generate_hyperface.py` script:
```sh
python generate_hyperface.py --path_points ./points/hyperface \
--path_save /HyperFace_dataset/ \
--num_samples $num_samples \
--start_index $start_index \
--chunck $chunck \
--intra_class_threshold $intra_class_threshold \
--intra_class_sigma $intra_class_sigma
```
Note that you can run the above script in parallel to generate the HyperFace dataset: `start_index` is the starting index of the identities to be generated, and `chunck` is the number of identities to be generated by this run.
A sample script for submitting parallel jobs to SLURM is provided in `generate_hyperface_submit_slurm.run`.
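For parallel generation, each job covers a disjoint `(start_index, chunck)` range. A minimal sketch of how such ranges might be computed (the function name is illustrative):

```python
def chunk_ranges(n_ids, chunk):
    """Split n_ids identities into (start_index, count) pairs, one per job.
    The last chunk is shortened if n_ids is not a multiple of chunk."""
    return [(s, min(chunk, n_ids - s)) for s in range(0, n_ids, chunk)]

jobs = chunk_ranges(10000, 3000)
# jobs == [(0, 3000), (3000, 3000), (6000, 3000), (9000, 1000)]
```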
## Training Face Recognition Models
After generating the HyperFace dataset, you can train face recognition models using the scripts in the `face_recognition` folder from the [AdaFace](https://github.com/mk-minchul/AdaFace) repository.
## Inference (Face Recognition)
To extract features using the pretrained face recognition models, you can use the following script:
```python
from face_alignment import align
from inference import load_pretrained_model, to_input

# Load a pretrained face recognition model (IR-50 backbone)
checkpoint = 'model_checkpoint.ckpt'
model = load_pretrained_model(checkpoint, architecture='ir_50')

# Align the face and convert it to the BGR input expected by the model
path = 'path_to_the_image'
aligned_rgb_img = align.get_aligned_face(path)
bgr_input = to_input(aligned_rgb_img)
feature, _ = model(bgr_input)
```
- Note that our implementation assumes the input to the model has 112x112 resolution and BGR color channels, as in the cv2 package.
- The input image needs to be aligned before passing to the network.
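To make the two notes above concrete, here is a minimal sketch of the kind of preprocessing implied: a 112x112 RGB array is flipped to BGR (cv2 channel order) and scaled to [-1, 1] in NCHW layout. This is an assumption about the expected input format, not the repository's actual `to_input` implementation.

```python
import numpy as np

def to_bgr_input(rgb_image):
    """Flip a 112x112x3 RGB uint8 array to BGR, scale to [-1, 1],
    and add a batch dimension (NCHW layout). Illustrative only."""
    bgr = rgb_image[:, :, ::-1].astype(np.float32)  # RGB -> BGR
    norm = (bgr / 255.0 - 0.5) / 0.5                # [0, 255] -> [-1, 1]
    return norm.transpose(2, 0, 1)[None]            # (1, 3, 112, 112)

dummy = np.zeros((112, 112, 3), dtype=np.uint8)
tensor = to_bgr_input(dummy)
```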
## Contact
For questions or to report issues with this repository, please contact the first author.