diff --git a/doc/results/baselines/index.rst b/doc/results/baselines/index.rst
index ba45bd7518e9daf022f2d0fcbbc9d51f4e8c9c62..5b97c0ebec7f9c3e0a4f945b5f9aa001ec8d1f55 100644
--- a/doc/results/baselines/index.rst
+++ b/doc/results/baselines/index.rst
@@ -15,7 +15,8 @@ F1 Scores (micro-level)
   U-Net Models are trained for a fixed number of 1000 epochs, with a learning
   rate of 0.001 until epoch 900 and then 0.0001 until the end of the training,
   after being initialized with a VGG-16 backend. Little W-Net models are
-  trained using a cosine anneling strategy (see [SMITH-2017]_) for 2000 epochs.
+  trained using a cosine annealing strategy (see [GALDRAN-2020]_ and
+  [SMITH-2017]_) for 2000 epochs.
 * During the training session, an unaugmented copy of the training set is
   used as validation set. We keep checkpoints for the best performing networks
   based on such validation set. The best performing network during training is
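
As a hedged illustration of the two learning-rate policies described in the hunk
above, the sketch below shows how they could be expressed with PyTorch's
``torch.optim.lr_scheduler`` API. The model variables, the SGD optimizer choice,
and the training-loop skeleton are placeholders for illustration only; they are
not taken from this patch::

   import torch

   # Placeholder modules standing in for the actual U-Net and Little W-Net
   # models, which this diff does not show.
   unet = torch.nn.Conv2d(3, 1, kernel_size=1)
   wnet = torch.nn.Conv2d(3, 1, kernel_size=1)

   # U-Net: fixed 1000 epochs, lr 0.001 until epoch 900, then 0.0001
   # (a single step decay by a factor of 10 at epoch 900).
   unet_opt = torch.optim.SGD(unet.parameters(), lr=0.001)
   unet_sched = torch.optim.lr_scheduler.MultiStepLR(
       unet_opt, milestones=[900], gamma=0.1
   )

   # Little W-Net: cosine annealing over 2000 epochs
   # (cf. [GALDRAN-2020]_ and [SMITH-2017]_).
   wnet_opt = torch.optim.SGD(wnet.parameters(), lr=0.001)
   wnet_sched = torch.optim.lr_scheduler.CosineAnnealingLR(wnet_opt, T_max=2000)

   for epoch in range(1000):
       # ... one training pass over the (augmented) training set here ...
       unet_sched.step()

   for epoch in range(2000):
       # ... one training pass here ...
       wnet_sched.step()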