Commit c0cb1273 authored by Amir MOHAMMADI

learning rate should go to optimizer

parent 3b93cc90
Merge request: !9 Resolve "exponential decay learning rate is not working"
@@ -56,10 +56,10 @@ The example consists in training a very simple **CNN** with `MNIST` dataset in 4
 >>>
 >>> loss = BaseLoss(tf.nn.sparse_softmax_cross_entropy_with_logits, tf.reduce_mean)
 >>>
->>> optimizer = tf.train.GradientDescentOptimizer(0.001)
->>>
 >>> learning_rate = constant(base_learning_rate=0.001)
 >>>
+>>> optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+>>>
 >>> trainer = Trainer
 Now that you have defined your data, architecture, loss and training algorithm you can save this in a python file,
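For reference, a minimal sketch of the corrected snippet with the pieces it relies on spelled out. The `bob.learn.tensorflow` import paths for `BaseLoss` and `constant` are assumptions about the package layout, not shown in this commit; the point of the change is that the schedule object returned by `constant` is handed to the optimizer instead of a raw float.

>>> import tensorflow as tf
>>> # Assumed import paths for the helpers used in the documented example.
>>> from bob.learn.tensorflow.loss import BaseLoss
>>> from bob.learn.tensorflow.trainers import constant
>>>
>>> loss = BaseLoss(tf.nn.sparse_softmax_cross_entropy_with_logits, tf.reduce_mean)
>>>
>>> # Build the learning-rate schedule first, then pass it to the optimizer.
>>> # Before this commit the optimizer was built with the literal 0.001,
>>> # so the schedule returned by constant() was never used.
>>> learning_rate = constant(base_learning_rate=0.001)
>>> optimizer = tf.train.GradientDescentOptimizer(learning_rate)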