If you use this package and/or its results, please consider citing the paper.

License and usage
-----------------
This package is released under the `BSD-3 license <https://gitlab.idiap.ch/bob/bob.paper.ijcb2021_synthetic_dataset/-/blob/master/LICENSE>`_. It makes use of some components from the
`official release of the StyleGAN2 model <https://github.com/NVlabs/stylegan2>`_, which is itself released under the `Nvidia Source Code License-NC <https://gitlab.idiap.ch/bob/bob.paper.ijcb2021_synthetic_dataset/-/blob/master/bob/paper/ijcb2021_synthetic_dataset/stylegan2/LICENSE.txt>`_.
Make sure to comply with the limitations of *both* licenses when reusing this work.
In particular, the StyleGAN2 code base can be used for *research and evaluation purposes* only; therefore, this project
itself cannot be used for anything else.
...
...
Download model dependencies
***************************
The database generation in this project relies on several pretrained models:

* **DLIB Face Landmark detector**, used to crop and align the projected faces exactly as in FFHQ (`Example <http://dlib.net/face_landmark_detection.py.html>`_)
* **StyleGAN2**, as the main face synthesis network (`Original paper <https://arxiv.org/abs/1912.04958>`_, `Official repository <https://github.com/NVlabs/stylegan2>`_). We use Config-F, trained on FFHQ at resolution 1024 x 1024.
* A pretrained **VGG16** model, used to compute a perceptual loss between the projected and target images (`Original paper <https://arxiv.org/abs/1801.03924>`_)
* A pretrained **face recognition network** (Inception-ResNet v2 trained on MSCeleb), used to compute the embedding distance between identities in order to apply the ICT constraint (a minimal sketch of this check follows the list).
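
To make the role of that last model concrete, here is a minimal sketch of an ICT-style identity check, assuming a hypothetical ``embed()`` function that stands in for the pretrained face recognition network; the threshold value is illustrative, not the one used in the paper.

.. code-block:: python

   import numpy as np

   def satisfies_ict(face_a, face_b, embed, threshold=1.0):
       """Accept two faces as distinct identities only if the distance
       between their embeddings exceeds a threshold."""
       # ``embed`` stands in for the pretrained network: a function
       # mapping an aligned face crop to a fixed-size feature vector.
       e_a = embed(face_a)
       e_b = embed(face_b)
       # Cosine distance between L2-normalized embeddings
       e_a = e_a / np.linalg.norm(e_a)
       e_b = e_b / np.linalg.norm(e_b)
       return float(1.0 - np.dot(e_a, e_b)) > threshold
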
In order to download those models, one must specify their destination paths in the ``~/.bobrc`` file, through the following commands:
...
...
This should then enable you to download the models once and for all.
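
Once the paths are configured and the models downloaded, they can be read back from Python through Bob's global configuration object. A minimal sketch for loading the DLIB landmark model, assuming a hypothetical key name:

.. code-block:: python

   import dlib
   from bob.extension import rc

   # Hypothetical key -- the package defines the actual name
   predictor_path = rc.get("sg2_morph.dlib_lmd_path")
   detector = dlib.get_frontal_face_detector()
   predictor = dlib.shape_predictor(predictor_path)
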
Download database dependencies
******************************
In order to compute latent directions by projection, you need to download the `Multi-PIE dataset <http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html>`_ in the location of your choice,
then prepare the folder configuration as explained in the following section.
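
As a hedged illustration, one way to sanity-check that the configured location is visible before launching the projection step; the key name here is hypothetical, and the actual folder layout is the one described in the section referenced above:

.. code-block:: python

   import os
   from bob.extension import rc

   # Hypothetical key name -- use the one defined by the package
   multipie_dir = rc.get("bob.db.multipie.directory", "")
   if not os.path.isdir(multipie_dir):
       raise RuntimeError("Set the Multi-PIE path in ~/.bobrc first")
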
...
...
You can also contact the first author_.