Commit c0cb1273 authored by Amir MOHAMMADI

learning rate should go to optimizer

parent 3b93cc90
Pipeline #11841 canceled with stages
@@ -56,10 +56,10 @@ The example consists of training a very simple **CNN** with the `MNIST` dataset in 4
>>>
>>> loss = BaseLoss(tf.nn.sparse_softmax_cross_entropy_with_logits, tf.reduce_mean)
>>>
>>> optimizer = tf.train.GradientDescentOptimizer(0.001)
>>>
>>> learning_rate = constant(base_learning_rate=0.001)
>>>
>>> optimizer = tf.train.GradientDescentOptimizer(learning_rate)
>>>
>>> trainer = Trainer
Now that you have defined your data, architecture, loss, and training algorithm, you can save this in a Python file,
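The change in this diff moves the learning rate out of a bare literal and into the optimizer's constructor, so the optimizer owns the step size and the trainer only drives it. A minimal pure-Python sketch of that pattern (the classes below are illustrative stand-ins, not the real TensorFlow or bob.learn.tensorflow API):

```python
# Sketch of "the learning rate should go to the optimizer":
# the optimizer owns the step size; the trainer only calls it.
# GradientDescentOptimizer and Trainer here are hypothetical stand-ins.

class GradientDescentOptimizer:
    def __init__(self, learning_rate):
        # The learning rate lives on the optimizer, not on the trainer.
        self.learning_rate = learning_rate

    def apply_gradient(self, param, grad):
        # One plain gradient-descent update: w <- w - lr * grad
        return param - self.learning_rate * grad


class Trainer:
    def __init__(self, optimizer):
        # The trainer receives a fully configured optimizer.
        self.optimizer = optimizer

    def step(self, param, grad):
        return self.optimizer.apply_gradient(param, grad)


learning_rate = 0.001
optimizer = GradientDescentOptimizer(learning_rate)
trainer = Trainer(optimizer)
print(trainer.step(1.0, 2.0))  # one update: 1.0 - 0.001 * 2.0
```

Keeping the rate inside the optimizer means schedules (constant, exponential decay, etc.) can be swapped without touching the trainer at all.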