diff --git a/README.rst b/README.rst
index edbe33ae5ccb24148244da2c1cde11cfc761445a..b99efdebfd210a1a57cd53e76b9b5a569c9689d9 100644
--- a/README.rst
+++ b/README.rst
@@ -16,7 +16,7 @@
 .. sectionauthor:: Laurent Colbois <laurent.colbois@idiap.ch>
 
-=============
- New package
-=============
+================================================================================
+IJCB 2021 - Generation of synthetic face recognition datasets using StyleGAN2
+================================================================================
 
 This package is part of the signal-processing and machine learning toolbox Bob_. It contains source code to reproduce experiments published in the following article::
@@ -49,7 +49,7 @@ License and usage
 
-This package is released under the `BSD-3 license <https://gitlab.idiap.ch/bob/bob.paper.ijcb2021_synthetic_dataset/-/blob/master/LICENSE>`_. It makes uses of some components from the
+This package is released under the `BSD-3 license <https://gitlab.idiap.ch/bob/bob.paper.ijcb2021_synthetic_dataset/-/blob/master/LICENSE>`_. It makes use of some components from the
 `official release of the StyleGAN2 model <https://github.com/NVlabs/stylegan2>`_, which is itself released under the `Nvidia Source Code License-NC <https://gitlab.idiap.ch/bob/bob.paper.ijcb2021_synthetic_dataset/-/blob/master/bob/paper/ijcb2021_synthetic_dataset/stylegan2/LICENSE.txt>`_.
-Make sure to apply limitations of *both* licenses when reusing this work.
+Make sure to respect the limitations imposed by *both* licenses when reusing this work.
-Mainly, the StyleGAN2 code base can only be used for *research and evaluation purpose only*, therefore this project
-itself cannot be used for anything else.  
+In particular, the StyleGAN2 code base can be used for *research and evaluation purposes only*; therefore, this project
+itself cannot be used for anything else.
 
@@ -91,7 +91,7 @@ Download model dependencies
 The database generation in this project relies on several preexisting pretrained models:
 
 * **DLIB Face Landmark detector** for cropping and aligning the projected faces exactly as in FFHQ. (`Example <http://dlib.net/face_landmark_detection.py.html>`_)
-* **StyleGAN2** as the main face synthesis network. ([Original paper](https://arxiv.org/abs/1912.04958), `Official repository <https://github.com/NVlabs/stylegan2>`_. We are using Config-F, trained on FFHQ at resolution 1024 x 1024
+* **StyleGAN2** as the main face synthesis network. (`Original paper <https://arxiv.org/abs/1912.04958>`_, `Official repository <https://github.com/NVlabs/stylegan2>`_). We use Config-F, trained on FFHQ at a resolution of 1024 x 1024.
 * A pretrained **VGG16** model, used to compute a perceptual loss between projected and target image (`Original paper <https://arxiv.org/abs/1801.03924>`_)
 * A pretrained face recognition network (Inception-Resnet v2 trained on MSCeleb), to compute the embedding distance between identities in order to apply the ICT constraint.