Exponential Moving Average + Batch Normalization + fixes in the losses
Hi @amohammadi, I'm back on this package. I know we are not supposed to do this, but I was fine-tuning so many things that I finally came up with a single final solution. This MR has 3 different features (despite the branch being called vgg16, there's no vgg16 here).
1 - The Logits estimator now has an option to apply an exponential moving average to the trainable variables and the loss (see the first sketch after this list)
2 - I introduced inception_v1 and inception_v2 with batch normalization (I just renamed the inference method that came with the code I copied and pasted). For this I also added some tests that keep track of the number of trainable variables (see the second sketch below)
3 - I added parts of our loss to the `tf.GraphKeys.LOSSES` collection via `tf.add_to_collection` (see the third sketch below)
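
For feature 1, here is a minimal sketch of how such an EMA option can be wired in TF 1.x. The function name, the `decay` default, and the way it wraps the train op are illustrative assumptions, not the MR's actual code:

```python
import tensorflow as tf

def compute_moving_averages(train_op, loss, decay=0.99):
    """Sketch: attach an exponential moving average to the trainable
    variables and the loss. Names and defaults are illustrative."""
    ema = tf.train.ExponentialMovingAverage(decay=decay)
    # ema.apply accepts both Variables and Tensors; here it creates
    # shadow copies of every trainable variable and of the scalar loss.
    maintain_op = ema.apply(tf.trainable_variables() + [loss])
    # Run the EMA update together with each optimizer step.
    with tf.control_dependencies([train_op]):
        train_op = tf.group(maintain_op)
    # The smoothed loss can be reported instead of the noisy raw value.
    return train_op, ema.average(loss)
```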
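
For feature 2, a test along these lines pins the number of trainable variables, which catches silent architecture changes (e.g. batch norm adding or dropping its trainable `beta`/`gamma` parameters). The slim reference network stands in for the copy added in this MR, and `EXPECTED_COUNT` is a placeholder, not the real number:

```python
import tensorflow as tf
from tensorflow.contrib.slim.nets import inception  # stand-in for the MR's copy

EXPECTED_COUNT = 490  # placeholder; the real test pins the actual count

def test_inception_v2_trainable_variables():
    tf.reset_default_graph()
    inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
    inception.inception_v2(inputs, num_classes=10)
    n = len(tf.trainable_variables())
    assert n == EXPECTED_COUNT, "got %d trainable variables" % n
```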
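
For feature 3, registering loss terms in the `LOSSES` collection makes them visible to `tf.losses.get_losses()` and to whatever sums or summarizes them later. A sketch, with an illustrative function name:

```python
import tensorflow as tf

def mean_cross_entropy(logits, labels):
    """Cross-entropy loss that also registers itself in the LOSSES
    collection (function name is illustrative, not the MR's code)."""
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits, labels=labels))
    tf.add_to_collection(tf.GraphKeys.LOSSES, loss)
    return loss
```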
Thanks for looking at it. I will look at yours in the afternoon.