From c0cb1273fd6533c43cce82aa09227beedde5e04d Mon Sep 17 00:00:00 2001
From: Amir MOHAMMADI <amir.mohammadi@idiap.ch>
Date: Tue, 29 Aug 2017 11:54:24 +0200
Subject: [PATCH] learning rate should go to optimizer

---
 doc/user_guide.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/user_guide.rst b/doc/user_guide.rst
index 3b15d2b6..b9b8db42 100644
--- a/doc/user_guide.rst
+++ b/doc/user_guide.rst
@@ -56,10 +56,10 @@ The example consists in training a very simple **CNN** with `MNIST` dataset in 4
    >>>
    >>> loss = BaseLoss(tf.nn.sparse_softmax_cross_entropy_with_logits, tf.reduce_mean)
    >>>
-   >>> optimizer = tf.train.GradientDescentOptimizer(0.001)
-   >>>
    >>> learning_rate = constant(base_learning_rate=0.001)
    >>>
+   >>> optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+   >>>
    >>> trainer = Trainer
 
 Now that you have defined your data, architecture, loss and training algorithm you can save this in a python file,
--
GitLab
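
The pattern this patch enforces is to define the learning rate first and hand it to the
optimizer, rather than hard-coding 0.001 in the optimizer call. A minimal runnable sketch,
assuming plain TensorFlow 1.x; the guide's `constant(base_learning_rate=...)` helper is
project-specific, so `tf.constant` stands in for it here:

    import tensorflow as tf

    # Stand-in for the guide's constant(base_learning_rate=0.001) helper:
    # a fixed scalar learning-rate tensor.
    learning_rate = tf.constant(0.001, dtype=tf.float32)

    # The optimizer receives the learning-rate tensor rather than a
    # hard-coded float, so a schedule (e.g. tf.train.exponential_decay)
    # could later be swapped in without touching the optimizer call.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)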