.. [facenet2015] Schroff, Florian, Dmitry Kalenichenko, and James Philbin. "FaceNet: A unified embedding for face recognition and clustering." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
In this library, datasets are wrapped in **data shufflers**. Data shufflers are the elements responsible for batching,
and their one basic functionality is exposed through :py:meth:`bob.learn.tensorflow.datashuffler.Base.get_batch`.
It is possible to use either Memory (:py:class:`bob.learn.tensorflow.datashuffler.Memory`) or
Disk (:py:class:`bob.learn.tensorflow.datashuffler.Disk`) data shufflers.
For the Memory data shufflers, as in the example, the dataset is expected to be stored as a `numpy.array`.
In the example we provided, the MNIST dataset was loaded and reshaped to `[n, w, h, c]`, where `n` is the batch size,
`w` and `h` are the image width and height, and `c` is the number of channels.
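Below is a minimal sketch of this setup. The exact constructor arguments of :py:class:`bob.learn.tensorflow.datashuffler.Memory` shown here are an assumption based on this guide; check the API reference for the definitive signature.

.. code-block:: python

   import numpy
   from bob.learn.tensorflow.datashuffler import Memory

   # Stand-in data: in the real example, the MNIST images and labels
   # are loaded from the database instead.
   images = numpy.random.rand(100, 28, 28).astype("float32")
   labels = numpy.random.randint(0, 10, size=100)

   # Reshape to [n, w, h, c]: batch size, width, height, channels.
   images = images.reshape(100, 28, 28, 1)

   # Constructor arguments are assumptions; see the Memory API reference.
   train_data_shuffler = Memory(images, labels,
                                input_shape=[28, 28, 1],
                                batch_size=16)

   # get_batch() returns the next (data, labels) batch.
   data, batch_labels = train_data_shuffler.get_batch()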
Creating the architecture
.........................
Architectures are assembled as TensorFlow graphs.
There are plenty of ways to do it: you can either use the `tensorflow <https://www.tensorflow.org/api_docs/python/tf/Graph>`_ API directly
or use one of the several available contributed libraries such as `tf-slim <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim>`_,
`TFLearn <http://tflearn.org/>`_, etc.
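As an illustration, here is a minimal sketch of a graph assembled with tf-slim; the layer names and hyper-parameters are arbitrary choices for the example, not taken from this library.

.. code-block:: python

   import tensorflow as tf
   import tensorflow.contrib.slim as slim

   # A toy convolutional graph, in TensorFlow 1.x style.
   inputs = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
   graph = slim.conv2d(inputs, 32, [3, 3], scope='conv1')
   graph = slim.max_pool2d(graph, [2, 2], scope='pool1')
   graph = slim.flatten(graph, scope='flatten1')
   graph = slim.fully_connected(graph, 10, activation_fn=None, scope='fc1')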
Defining a loss and training
............................
The loss function can be defined by any set of TensorFlow operations.
In our example, we used `tf.nn.sparse_softmax_cross_entropy_with_logits` as the loss function, but we also provide crafted
loss functions for Siamese networks (:py:class:`bob.learn.tensorflow.loss.ContrastiveLoss`) and Triplet networks (:py:class:`bob.learn.tensorflow.loss.TripletLoss`).
The trainer is the real muscle here.
This element takes the inputs and trains the network.
As for the losses, we have specific trainers for Siamese networks (:py:class:`bob.learn.tensorflow.trainers.SiameseTrainer`)
and Triplet networks (:py:class:`bob.learn.tensorflow.trainers.TripletTrainer`).
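A rough sketch of how a trainer is typically wired up follows; the keyword arguments are assumptions based on this guide, so consult the trainer API reference for the exact signature.

.. code-block:: python

   from bob.learn.tensorflow.trainers import Trainer

   # `architecture`, `loss` and `train_data_shuffler` come from the
   # previous steps; the keyword names here are assumptions.
   trainer = Trainer(architecture=architecture,
                     loss=loss,
                     iterations=1000,
                     temp_dir="./cnn")
   trainer.train(train_data_shuffler)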
How is the data sampled?
`````````````````````````
The paper [facenet2015]_ introduced a new strategy to select triplets for training triplet networks (better described in
:py:class:`bob.learn.tensorflow.datashuffler.TripletWithSelectionDisk` and :py:class:`bob.learn.tensorflow.datashuffler.TripletWithFastSelectionDisk`).
This triplet selection relies on the current state of the network; these shufflers are extensions of :py:class:`bob.learn.tensorflow.datashuffler.OnlineSampling`.
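For intuition only, here is a plain numpy illustration of the semi-hard negative selection described in [facenet2015]_; it is not the library's implementation.

.. code-block:: python

   import numpy

   def select_semi_hard_negative(anchor, positive, negatives, margin=0.2):
       # Squared distances from the anchor embedding.
       d_ap = numpy.sum((anchor - positive) ** 2)
       d_an = numpy.sum((negatives - anchor) ** 2, axis=1)

       # 'Semi-hard' negatives: farther from the anchor than the
       # positive, but still inside the margin.
       semi_hard = numpy.where((d_an > d_ap) & (d_an < d_ap + margin))[0]
       if len(semi_hard) == 0:
           # Fall back to the hardest negative available.
           return negatives[numpy.argmin(d_an)]
       return negatives[semi_hard[numpy.argmin(d_an[semi_hard])]]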
Activations
...........
For the activation of the layers we don't have any special wrapper.
For any class that inherits from :py:class:`bob.learn.tensorflow.layers.Layer`, you can use TensorFlow operations
directly via the keyword argument `activation`.
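For example, a convolutional layer can take `tf.nn.relu` (or any other TensorFlow operation) as its activation; the remaining `Conv2D` arguments below are assumptions based on this guide.

.. code-block:: python

   import tensorflow as tf
   from bob.learn.tensorflow.layers import Conv2D

   # Any TensorFlow operation works as the activation; the other
   # keyword arguments are assumptions, check the layers API reference.
   conv1 = Conv2D(name="conv1", kernel_size=3, filters=10,
                  activation=tf.nn.relu)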
Solvers/Optimizer
.................
For the solvers we don't have any special wrapper.
For any class that inherits from :py:class:`bob.learn.tensorflow.trainers.Trainer`, you can use TensorFlow
`Optimizers <https://www.tensorflow.org/versions/master/api_docs/python/train.html#Optimizer>`_ directly via the keyword argument `optimizer_class`.
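For example, a stock TensorFlow optimizer can be plugged in as sketched below; apart from `optimizer_class`, the keyword arguments are assumptions based on this guide.

.. code-block:: python

   import tensorflow as tf
   from bob.learn.tensorflow.trainers import Trainer

   # Pass any TensorFlow optimizer class through `optimizer_class`;
   # the other keyword arguments are assumptions.
   trainer = Trainer(architecture=architecture,
                     loss=loss,
                     optimizer_class=tf.train.AdamOptimizer)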
Learning rate
.............
We have two methods implemented to handle the update of the learning rate.
The first one is :py:class:`bob.learn.tensorflow.trainers.constant`, which keeps the value fixed throughout training.
The second one is :py:class:`bob.learn.tensorflow.trainers.exponential_decay`, which, as the name says, decays
the learning rate exponentially as training progresses.
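A sketch of both options follows; the argument names are assumptions, so check the trainers API reference for the definitive signatures.

.. code-block:: python

   from bob.learn.tensorflow.trainers import constant, exponential_decay

   # Fixed learning rate throughout training (argument name assumed).
   learning_rate = constant(0.001)

   # Exponentially decaying learning rate (argument names assumed).
   learning_rate = exponential_decay(0.001, decay_steps=1000,
                                     decay_rate=0.95)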
Initialization
..............
We have implemented some strategies to initialize the TensorFlow variables.
Check out the `Initialization <py_api.html#initialization>`_ section of the API documentation.
Loss
....
Loss functions must be wrapped as :py:class:`bob.learn.tensorflow.loss.BaseLoss` objects.
For instance, if you want to use the sparse softmax cross entropy loss between logits and labels, you can do it like this:
.. code-block:: python

   >>> loss = BaseLoss(tf.nn.sparse_softmax_cross_entropy_with_logits, tf.reduce_mean)
As you can observe, you can pass TensorFlow operations directly to this object.
We also provide some crafted losses.
For instance, :py:class:`bob.learn.tensorflow.loss.TripletLoss` is used to train triplet networks and
:py:class:`bob.learn.tensorflow.loss.ContrastiveLoss` is used to train Siamese networks.
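They can be instantiated directly; the margin keyword names and values below are assumptions based on this guide, so check the loss API reference.

.. code-block:: python

   from bob.learn.tensorflow.loss import ContrastiveLoss, TripletLoss

   # Margin keyword names and values are assumptions.
   siamese_loss = ContrastiveLoss(contrastive_margin=4.0)
   triplet_loss = TripletLoss(margin=4.0)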
Analyzers
.........
To be discussed.
Sandbox
-------
We keep a sandbox of examples in the git repository https://gitlab.idiap.ch/tiago.pereira/bob.learn.tensorflow_sandbox.