You can download the HyperFace dataset from the [project page](https://www.idiap.ch/paper/hyperface/).
## Installation
```sh
conda create -n hyperface python=3.10
conda activate hyperface
# Install requirements
pip install -r requirements.txt
```
We use [Arc2Face](https://github.com/foivospar/Arc2Face) as the face generator model.
You can download the pretrained models by following the instructions in the [Arc2Face repository](https://github.com/foivospar/Arc2Face?tab=readme-ov-file#download-models).
## Generating HyperFace Dataset
### Step 1: Extract Embeddings (for initialization and regularization)
For the initialization and regularization of the HyperFace optimization, we need to extract embeddings of face images
with a pretrained face recognition model using the `extract_emb_mp.py` script. The extracted embeddings are stored as a numpy file,
which is used in the next step to solve the HyperFace optimization.
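The idea behind this step can be sketched as follows. Note that `embed` below is a hypothetical stand-in for the pretrained face recognition network (the actual script runs a real model), and the 512-dimensional embedding size is an assumption matching common ArcFace-style models:

```python
import numpy as np

EMB_DIM = 512  # typical ArcFace-style embedding size (assumption)

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pretrained face recognition model.

    The real pipeline would run the face crop through the FR network;
    here we derive a deterministic pseudo-embedding from the pixels.
    """
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.standard_normal(EMB_DIM)
    return v / np.linalg.norm(v)  # FR embeddings are usually L2-normalized

def extract_embeddings(images, out_path="embeddings.npy"):
    # Stack per-image embeddings and save them as a single numpy file,
    # which the next step loads for initialization and regularization.
    embs = np.stack([embed(img) for img in images])
    np.save(out_path, embs)
    return embs

# Example: four dummy 112x112 RGB face crops
images = [np.zeros((112, 112, 3), dtype=np.uint8) for _ in range(4)]
embs = extract_embeddings(images)
print(embs.shape)  # (4, 512)
```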
Note that you can run the above script in parallel to generate the HyperFace dataset: `start_index` is the index of the first identity to be generated,
and `chunck` is the number of identities to be generated by this run of the script.
A sample script for submitting parallel jobs to SLURM is provided in `generate_hyperface_submit_slurm.run`.
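Concretely, each parallel job covers the identity range `[start_index, start_index + chunck)`. The sketch below illustrates how jobs tile the full identity range (the totals are example values, and the command in the comment is an assumed CLI form, not the script's verified interface):

```python
# Illustrative: split num_identities across parallel jobs via start_index/chunck.
num_identities = 10_000   # total identities to generate (example value)
chunck = 2_500            # identities per job (parameter name from the README)

jobs = [(start, min(start + chunck, num_identities))
        for start in range(0, num_identities, chunck)]

for start, end in jobs:
    # Each SLURM job would launch the generation script with its own range,
    # e.g. (assumed form): python <generation script> start_index=start chunck=end-start
    print(f"job covers identities [{start}, {end})")
```

Non-overlapping ranges let the jobs run independently, so no coordination between SLURM tasks is needed.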
## Training Face Recognition Models
After generating the HyperFace dataset, you can train face recognition models using the scripts in the `face_recognition` folder from the [AdaFace](https://github.com/mk-minchul/AdaFace) repository.
## Inference (Face Recognition)
To extract features using the pretrained face recognition models, you can use the following script:
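As a hedged illustration of what such a script does, a feature-extraction loop typically looks like the sketch below. Here `model` is a hypothetical stand-in for the pretrained face recognition network (real checkpoints would be loaded with the AdaFace repository's own loading code), and all shapes are illustrative:

```python
import numpy as np

def model(batch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pretrained FR network's forward pass."""
    flat = batch.reshape(batch.shape[0], -1).astype(np.float64)
    # Project to 512-d with a fixed random matrix (illustration only).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((flat.shape[1], 512))
    return flat @ W

def extract_features(images: np.ndarray) -> np.ndarray:
    feats = model(images)
    # L2-normalize so that cosine similarity between faces is a dot product.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    return feats

# Example: a batch of eight 112x112 RGB face crops
batch = np.random.default_rng(1).random((8, 112, 112, 3), dtype=np.float32)
feats = extract_features(batch)
print(feats.shape)  # (8, 512)
```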