[sphinx] Fixing Warnings

parent 16c9feff
......@@ -2,6 +2,20 @@
.. Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
.. Thu 30 Jan 08:46:53 2014 CET
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
:target: http://pythonhosted.org/bob.learn.tensorflow/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
:target: https://www.idiap.ch/software/bob/docs/latest/bob/bob.learn.tensorflow/master/index.html
.. image:: https://gitlab.idiap.ch/bob/bob.learn.tensorflow/badges/master/build.svg
:target: https://gitlab.idiap.ch/bob/bob.learn.tensorflow/commits/master
.. image:: https://img.shields.io/badge/gitlab-project-0000c0.svg
:target: https://gitlab.idiap.ch/bob/bob.learn.tensorflow
.. image:: http://img.shields.io/pypi/v/bob.learn.tensorflow.png
:target: https://pypi.python.org/pypi/bob.learn.tensorflow
.. image:: http://img.shields.io/pypi/dm/bob.learn.tensorflow.png
:target: https://pypi.python.org/pypi/bob.learn.tensorflow
===========================
Bob support for tensorflow
===========================
......
# see https://docs.python.org/3/library/pkgutil.html
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
from .ExperimentAnalizer import ExperimentAnalizer
from .SoftmaxAnalizer import SoftmaxAnalizer
......
......@@ -15,6 +15,7 @@ class Base(object):
The class provides base functionality to shuffle the data used to train a neural network
**Parameters**
data:
Input data to be trained
......
......@@ -21,6 +21,7 @@ class Disk(Base):
The data is loaded on the fly.
**Parameters**
data:
Input data to be trained
......
......@@ -15,6 +15,7 @@ class Memory(Base):
This datashuffler deals with memory databases that are stored in a :py:class:`numpy.ndarray`
**Parameters**
data:
Input data to be trained
......
......@@ -3,12 +3,12 @@
# @author: Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
# @date: Wed 11 May 2016 09:39:36 CEST
import numpy
import tensorflow as tf
from .Base import Base
from bob.learn.tensorflow.network import SequenceNetwork
class OnLineSampling(object):
class OnlineSampling(Base):
"""
This data shuffler uses the current state of the network to select the samples.
This class is not meant to be used, but extended.
......
......@@ -7,7 +7,7 @@ import numpy
import tensorflow as tf
from .Memory import Memory
from Triplet import Triplet
from .Triplet import Triplet
from bob.learn.tensorflow.datashuffler.Normalizer import Linear
......
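The hunk above replaces the implicit relative import `from Triplet import Triplet` with the explicit `from .Triplet import Triplet`. Python 3 dropped implicit relative imports, so the bare form fails inside a package; the explicit dot form works on both Python 2 (with `absolute_import`) and 3. A minimal, self-contained sketch of the working pattern, using a hypothetical throwaway package built in a temp dir:

```python
# Demonstrates the explicit relative import the hunk introduces.
# The package name "shufflers" and module "triplet" are illustrative only.
import os
import sys
import tempfile
import textwrap

pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "shufflers")
os.makedirs(pkg_dir)

# __init__.py uses the explicit relative form, mirroring the fix in the hunk.
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .triplet import Triplet\n")

with open(os.path.join(pkg_dir, "triplet.py"), "w") as f:
    f.write(textwrap.dedent("""
        class Triplet:
            pass
    """))

sys.path.insert(0, pkg_root)
import shufflers

print(shufflers.Triplet.__name__)  # Triplet
```

With `from triplet import Triplet` (no leading dot) in `__init__.py`, the same import raises `ModuleNotFoundError` on Python 3, which is exactly what this commit fixes across the datashuffler modules.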
......@@ -6,9 +6,9 @@
import numpy
import tensorflow as tf
from Disk import Disk
from Triplet import Triplet
from OnlineSampling import OnLineSampling
from .Disk import Disk
from .Triplet import Triplet
from .OnlineSampling import OnlineSampling
from scipy.spatial.distance import euclidean, cdist
import logging
......@@ -16,7 +16,7 @@ logger = logging.getLogger("bob.learn.tensorflow")
from bob.learn.tensorflow.datashuffler.Normalizer import Linear
class TripletWithFastSelectionDisk(Triplet, Disk, OnLineSampling):
class TripletWithFastSelectionDisk(Triplet, Disk, OnlineSampling):
"""
This data shuffler generates triplets from :py:class:`bob.learn.tensorflow.datashuffler.Triplet` and
:py:class:`bob.learn.tensorflow.datashuffler.Disk` shufflers.
......@@ -27,12 +27,11 @@ class TripletWithFastSelectionDisk(Triplet, Disk, OnLineSampling):
"Facenet: A unified embedding for face recognition and clustering." Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition. 2015.
In this shuffler, the triplets are selected as follows:
1. Select M identities
2. Get N pairs anchor-positive (for each M identities) such that the argmax(anchor, positive)
3. For each pair anchor-positive, find the "semi-hard" negative samples such that
argmin(||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2
1. Select M identities.
2. For each of the M identities, get N anchor-positive pairs, taking the hardest positives (:math:`\mathrm{argmax}\,||f(x_a) - f(x_p)||^2`).
3. For each anchor-positive pair, find the "semi-hard" negative :math:`x_n = \mathrm{argmin}\,||f(x_a) - f(x_n)||^2` subject to :math:`||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2`.
**Parameters**
......@@ -142,8 +141,6 @@ class TripletWithFastSelectionDisk(Triplet, Disk, OnLineSampling):
samples_a[i, ...] = self.get_anchor(anchor_labels[i])
embedding_a = self.project(samples_a)
print "EMBEDDING {0} ".format(embedding_a[:, 0])
# Getting the positives
samples_p, embedding_p, d_anchor_positive = self.get_positives(anchor_labels, embedding_a)
samples_n = self.get_negative(anchor_labels, embedding_a, d_anchor_positive)
......
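The docstring's step 3 ("semi-hard" negatives, following FaceNet) can be sketched as: among negatives farther from the anchor than the positive, pick the closest one. A hypothetical numpy helper, not the shuffler's actual code:

```python
# Semi-hard negative selection sketch (Schroff et al., FaceNet, 2015).
# Squared Euclidean distances on already-projected embeddings.
import numpy

def semi_hard_negative(anchor, positive, negatives):
    """Return the index of the semi-hard negative, or None if none qualifies."""
    d_ap = numpy.sum((anchor - positive) ** 2)           # ||f(x_a) - f(x_p)||^2
    d_an = numpy.sum((negatives - anchor) ** 2, axis=1)  # ||f(x_a) - f(x_n)||^2
    candidates = numpy.where(d_an > d_ap)[0]             # constraint: d_an > d_ap
    if candidates.size == 0:
        return None                                      # only "too hard" negatives left
    return candidates[numpy.argmin(d_an[candidates])]    # argmin over qualifying negatives

anchor = numpy.array([0.0, 0.0])
positive = numpy.array([1.0, 0.0])           # d_ap = 1
negatives = numpy.array([[0.5, 0.0],         # d_an = 0.25, violates constraint (too hard)
                         [1.5, 0.0],         # d_an = 2.25, semi-hard
                         [4.0, 0.0]])        # d_an = 16,   easy
print(semi_hard_negative(anchor, positive, negatives))  # 1
```

The "too hard" negative (closer to the anchor than the positive) is skipped on purpose: FaceNet reports that such negatives destabilize training early on.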
......@@ -8,7 +8,7 @@ import tensorflow as tf
from .Disk import Disk
from .Triplet import Triplet
from .OnlineSampling import OnLineSampling
from .OnlineSampling import OnlineSampling
from scipy.spatial.distance import euclidean
from bob.learn.tensorflow.datashuffler.Normalizer import Linear
......@@ -17,12 +17,13 @@ logger = logging.getLogger("bob.learn.tensorflow")
from bob.learn.tensorflow.datashuffler.Normalizer import Linear
class TripletWithSelectionDisk(Triplet, Disk, OnLineSampling):
class TripletWithSelectionDisk(Triplet, Disk, OnlineSampling):
"""
This data shuffler generates triplets from :py:class:`bob.learn.tensorflow.datashuffler.Triplet` shufflers.
The selection of the triplets is random.
**Parameters**
data:
Input data to be trained
......
......@@ -6,14 +6,14 @@
import numpy
import tensorflow as tf
from OnlineSampling import OnLineSampling
from Memory import Memory
from Triplet import Triplet
from .OnlineSampling import OnlineSampling
from .Memory import Memory
from .Triplet import Triplet
from scipy.spatial.distance import euclidean
from bob.learn.tensorflow.datashuffler.Normalizer import Linear
class TripletWithSelectionMemory(Triplet, Memory, OnLineSampling):
class TripletWithSelectionMemory(Triplet, Memory, OnlineSampling):
"""
This data shuffler generates triplets from :py:class:`bob.learn.tensorflow.datashuffler.Triplet` and
:py:class:`bob.learn.tensorflow.datashuffler.Memory` shufflers.
......@@ -24,12 +24,11 @@ class TripletWithSelectionMemory(Triplet, Memory, OnLineSampling):
"Facenet: A unified embedding for face recognition and clustering." Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition. 2015.
In this shuffler, the triplets are selected as follows:
1. Select M identities
2. Get N pairs anchor-positive (for each M identities) such that the argmax(anchor, positive)
3. For each pair anchor-positive, find the "semi-hard" negative samples such that
argmin(||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2
1. Select M identities.
2. For each of the M identities, get N anchor-positive pairs, taking the hardest positives (:math:`\mathrm{argmax}\,||f(x_a) - f(x_p)||^2`).
3. For each anchor-positive pair, find the "semi-hard" negative :math:`x_n = \mathrm{argmin}\,||f(x_a) - f(x_n)||^2` subject to :math:`||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2`.
**Parameters**
......
# see https://docs.python.org/3/library/pkgutil.html
from .Base import Base
from .OnlineSampling import OnlineSampling
from .Siamese import Siamese
from .Triplet import Triplet
from .Memory import Memory
from .Disk import Disk
from .OnlineSampling import OnLineSampling
from .SiameseMemory import SiameseMemory
from .TripletMemory import TripletMemory
......
......@@ -14,6 +14,7 @@ class Layer(object):
Layer base class
**Parameters**
name: str
The name of the layer
......
......@@ -13,7 +13,7 @@ from .InputLayer import InputLayer
def __appropriate__(*args):
Says the object was actually declared here, and not in the import module.
Parameters:
**Parameters**
*args: An iterable of objects to modify
......
......@@ -17,9 +17,10 @@ class ContrastiveLoss(BaseLoss):
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
L = 0.5 * (Y) * D^2 + 0.5 * (1-Y) * {max(0, margin - D)}^2
:math:`L = 0.5 * Y * D^2 + 0.5 * (1-Y) * \max(0, margin - D)^2`
**Parameters**
left_feature:
First element of the pair
......
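The contrastive loss formula the hunk wraps in `:math:` can be sketched in plain numpy (illustrative only; the class itself builds the equivalent tensorflow graph). In this formula's convention, `Y=1` marks a genuine pair and `D` is the Euclidean distance between the two embeddings:

```python
# Contrastive loss sketch: L = 0.5*Y*D^2 + 0.5*(1-Y)*max(0, margin - D)^2
# (Hadsell, Chopra, LeCun 2006; Y=1 for genuine pairs here).
import numpy

def contrastive_loss(left, right, label, margin=1.0):
    d = numpy.linalg.norm(left - right)                      # D
    within = 0.5 * label * d ** 2                            # pulls genuine pairs together
    between = 0.5 * (1 - label) * max(0.0, margin - d) ** 2  # pushes impostors past the margin
    return within + between

# A genuine pair (Y=1) at distance 2 is penalised quadratically;
# an impostor pair (Y=0) already beyond the margin costs nothing.
print(contrastive_loss(numpy.zeros(2), numpy.array([2.0, 0.0]), label=1))  # 2.0
print(contrastive_loss(numpy.zeros(2), numpy.array([2.0, 0.0]), label=0))  # 0.0
```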
......@@ -19,9 +19,10 @@ class TripletLoss(BaseLoss):
"Facenet: A unified embedding for face recognition and clustering."
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
L = sum( |f_a - f_p|^2 - |f_a - f_n|^2 + \lambda)
:math:`L = \sum (|f_a - f_p|^2 - |f_a - f_n|^2 + \lambda)`
**Parameters**
left_feature:
First element of the pair
......
......@@ -317,9 +317,9 @@ class SequenceNetwork(six.with_metaclass(abc.ABCMeta, object)):
open(path+"_sequence_net.pickle", 'w').write(self.pickle_architecture)
return saver.save(session, path)
def load(self, path, clear_devices=False):
def load(self, path, clear_devices=False, session_from_scratch=False):
session = Session.instance().session
session = Session.instance(new=session_from_scratch).session
self.sequence_net = pickle.loads(open(path+"_sequence_net.pickle").read())
if clear_devices:
......
......@@ -224,10 +224,10 @@ class VGG16_mod(SequenceNetwork):
weights_initialization=Xavier(seed=seed, use_gpu=self.use_gpu),
bias_initialization=Constant(use_gpu=self.use_gpu)
))
self.add(AveragePooling(name="pooling5", strides=[1, 2, 2, 1]))
self.add(AveragePooling(name="pooling5", shape=[1, 7, 7, 1], strides=[1, 7, 7, 1]))
if do_dropout:
self.add(Dropout(name="dropout", keep_prob=0.4))
self.add(Dropout(name="dropout", keep_prob=0.5))
self.add(FullyConnected(name="fc8", output_dim=n_classes,
activation=None,
......
......@@ -4,11 +4,12 @@
# @date: Thu 13 Oct 2016 13:35 CEST
import numpy
from bob.learn.tensorflow.datashuffler import Memory, SiameseMemory, TripletMemory, Disk, SiameseDisk, TripletDisk, ImageAugmentation
from bob.learn.tensorflow.network import Chopra
from bob.learn.tensorflow.datashuffler import Memory, SiameseMemory, TripletMemory, ImageAugmentation
from bob.learn.tensorflow.network import Chopra, SequenceNetwork
from bob.learn.tensorflow.loss import BaseLoss, ContrastiveLoss, TripletLoss
from bob.learn.tensorflow.trainers import Trainer, SiameseTrainer, TripletTrainer, constant
from .test_cnn_scratch import validate_network
import pkg_resources
from bob.learn.tensorflow.utils import load_mnist
import tensorflow as tf
......@@ -99,7 +100,10 @@ def test_cnn_trainer():
iterations=iterations,
analizer=None,
prefetch=False,
learning_rate=constant(0.05, name="regular_lr"),
optimizer=tf.train.AdamOptimizer(name="adam_softmax"),
temp_dir=directory)
trainer.train(train_data_shuffler)
accuracy = validate_network(validation_data, validation_labels, architecture)
......@@ -139,6 +143,7 @@ def test_siamesecnn_trainer():
prefetch=False,
analizer=None,
learning_rate=constant(0.05, name="siamese_lr"),
optimizer=tf.train.AdamOptimizer(name="adam_siamese"),
temp_dir=directory)
trainer.train(train_data_shuffler)
......@@ -181,6 +186,7 @@ def test_tripletcnn_trainer():
prefetch=False,
analizer=None,
learning_rate=constant(0.05, name="triplet_lr"),
optimizer=tf.train.AdamOptimizer(name="adam_triplet"),
temp_dir=directory)
trainer.train(train_data_shuffler)
......
......@@ -8,7 +8,6 @@
Some unit tests that create networks on the fly
"""
import numpy
import pkg_resources
from bob.learn.tensorflow.utils import load_mnist
......@@ -38,9 +37,8 @@ def test_load_test_cnn():
# Creating datashufflers
validation_data = numpy.reshape(validation_data, (validation_data.shape[0], 28, 28, 1))
network = SequenceNetwork()
network.load(pkg_resources.resource_filename(__name__, 'data/cnn_mnist/model.ckp'))
network.load(pkg_resources.resource_filename(__name__, 'data/cnn_mnist/model.ckp'), session_from_scratch=True)
accuracy = validate_network(validation_data, validation_labels, network)
assert accuracy > 80
del network
......@@ -81,8 +81,10 @@ def test_cnn_pretrained():
iterations=iterations,
analizer=None,
prefetch=False,
learning_rate=constant(0.05, name="lr"),
learning_rate=constant(0.05, name="regular_lr"),
optimizer=tf.train.AdamOptimizer(name="adam_pretrained_model"),
temp_dir=directory)
trainer.train(train_data_shuffler)
accuracy = validate_network(validation_data, validation_labels, scratch)
assert accuracy > 85
......@@ -99,9 +101,10 @@ def test_cnn_pretrained():
iterations=iterations+200,
analizer=None,
prefetch=False,
learning_rate=constant(0.05, name="lr2"),
learning_rate=None,
temp_dir=directory2,
model_from_file=os.path.join(directory, "model.ckp"))
trainer.train(train_data_shuffler)
accuracy = validate_network(validation_data, validation_labels, scratch)
......
......@@ -234,3 +234,44 @@ def test_diskaudio_shuffler():
placeholders = data_shuffler.get_placeholders(name="train")
assert placeholders[0].get_shape().as_list() == batch_shape
assert placeholders[1].get_shape().as_list()[0] == batch_shape[0]
"""
Some unit tests that create networks on the fly
"""
batch_size = 16
validation_batch_size = 400
iterations = 50
seed = 10
directory = "./temp/cnn_scratch"
def scratch_network():
# Creating a random network
scratch = SequenceNetwork(default_feature_layer="fc1")
scratch.add(Conv2D(name="conv1", kernel_size=3,
filters=10,
activation=tf.nn.tanh,
batch_norm=False))
scratch.add(FullyConnected(name="fc1", output_dim=10,
activation=None,
batch_norm=False
))
return scratch
def validate_network(validation_data, validation_labels, network):
# Testing
validation_data_shuffler = Memory(validation_data, validation_labels,
input_shape=[28, 28, 1],
batch_size=validation_batch_size)
[data, labels] = validation_data_shuffler.get_batch()
predictions = network.predict(data)
accuracy = 100. * numpy.sum(predictions == labels) / predictions.shape[0]
return accuracy
......@@ -58,6 +58,7 @@ def test_dnn_trainer():
analizer=None,
prefetch=False,
learning_rate=constant(0.05, name="dnn_lr"),
optimizer=tf.train.AdamOptimizer(name="adam_dnn"),
temp_dir=directory)
trainer.train(train_data_shuffler)
......
......@@ -7,22 +7,18 @@ import logging
logger = logging.getLogger("bob.learn.tensorflow")
import tensorflow as tf
from tensorflow.core.framework import summary_pb2
import threading
from ..analyzers import ExperimentAnalizer, SoftmaxAnalizer
from ..network import SequenceNetwork
import bob.io.base
from .Trainer import Trainer
import os
import sys
from .learning_rate import constant
class SiameseTrainer(Trainer):
"""
Trainer for Siamese networks.
**Parameters**
architecture:
The architecture that you want to run. Should be a :py:class`bob.learn.tensorflow.network.SequenceNetwork`
......@@ -38,7 +34,7 @@ class SiameseTrainer(Trainer):
temp_dir: str
The output directory
learning_rate: :py:class:`bob.learn.tensorflow.trainers.learningrate`
learning_rate: `bob.learn.tensorflow.trainers.learning_rate`
Initial learning rate
convergence_threshold:
......@@ -70,7 +66,7 @@ class SiameseTrainer(Trainer):
temp_dir="cnn",
# Learning rate
learning_rate=constant(),
learning_rate=None,
###### training options ##########
convergence_threshold=0.01,
......
......@@ -12,7 +12,7 @@ import bob.core
from ..analyzers import SoftmaxAnalizer
from tensorflow.core.framework import summary_pb2
import time
from bob.learn.tensorflow.datashuffler.OnlineSampling import OnLineSampling
from bob.learn.tensorflow.datashuffler import OnlineSampling
from bob.learn.tensorflow.utils.session import Session
from .learning_rate import constant
......@@ -25,6 +25,7 @@ class Trainer(object):
Use this trainer when your CNN is composed by one graph
**Parameters**
architecture:
The architecture that you want to run. Should be a :py:class`bob.learn.tensorflow.network.SequenceNetwork`
......@@ -40,7 +41,7 @@ class Trainer(object):
temp_dir: str
The output directory
learning_rate: :py:class:`bob.learn.tensorflow.trainers.learningrate`
learning_rate: `bob.learn.tensorflow.trainers.learning_rate`
Initial learning rate
convergence_threshold:
......@@ -72,7 +73,7 @@ class Trainer(object):
temp_dir="cnn",
# Learning rate
learning_rate=constant(),
learning_rate=None,
###### training options ##########
convergence_threshold=0.01,
......@@ -98,7 +99,10 @@ class Trainer(object):
self.loss = loss
self.temp_dir = temp_dir
self.learning_rate = learning_rate
if learning_rate is None and model_from_file == "":
self.learning_rate = constant()
else:
self.learning_rate = learning_rate
self.iterations = iterations
self.snapshot = snapshot
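The hunk above changes the constructor default from `learning_rate=constant()` to `learning_rate=None` and resolves it inside `__init__`, so a model restored from file is not handed a fresh `constant()` schedule. A minimal sketch of that pattern, with a hypothetical stand-in for bob's `constant()` learning-rate object:

```python
# Sketch of the learning-rate defaulting logic the commit adds to Trainer.
# ConstantRate is an illustrative stand-in for
# bob.learn.tensorflow.trainers.constant, not the real class.
class ConstantRate:
    def __init__(self, value=0.001):
        self.value = value

class Trainer:
    def __init__(self, learning_rate=None, model_from_file=""):
        # Only synthesize a default schedule when training from scratch;
        # a pretrained model keeps whatever the caller passed (possibly None).
        if learning_rate is None and model_from_file == "":
            self.learning_rate = ConstantRate()
        else:
            self.learning_rate = learning_rate

print(type(Trainer().learning_rate).__name__)              # ConstantRate
print(Trainer(model_from_file="model.ckp").learning_rate)  # None
```

This matches the second `test_cnn_pretrained` call site in the diff, which now passes `learning_rate=None` together with `model_from_file`.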
......@@ -383,8 +387,7 @@ class Trainer(object):
# Pickle the architecture to save
self.architecture.pickle_net(train_data_shuffler.deployment_shape)
Session.create()
self.session = Session.instance().session
self.session = Session.instance(new=True).session
# Loading a pretrained model
if self.model_from_file != "":
......@@ -419,7 +422,7 @@ class Trainer(object):
# Original tensorflow saver object
saver = tf.train.Saver(var_list=tf.all_variables())
if isinstance(train_data_shuffler, OnLineSampling):
if isinstance(train_data_shuffler, OnlineSampling):
train_data_shuffler.set_feature_extractor(self.architecture, session=self.session)
# Start a thread to enqueue data asynchronously, and hide I/O latency.
......
......@@ -23,6 +23,7 @@ class TripletTrainer(Trainer):
Trainer for Triplet networks.
**Parameters**
architecture:
The architecture that you want to run. Should be a :py:class`bob.learn.tensorflow.network.SequenceNetwork`
......@@ -38,7 +39,7 @@ class TripletTrainer(Trainer):
temp_dir: str
The output directory
learning_rate: :py:class:`bob.learn.tensorflow.trainers.learningrate`
learning_rate: `bob.learn.tensorflow.trainers.learning_rate`
Initial learning rate
convergence_threshold:
......@@ -70,7 +71,7 @@ class TripletTrainer(Trainer):
temp_dir="cnn",
# Learning rate
learning_rate=constant(),
learning_rate=None,
###### training options ##########
convergence_threshold=0.01,
......
......@@ -29,12 +29,18 @@ class Singleton(object):
def create(self, *args, **kwargs):
"""Creates the singleton instance, by passing the given parameters to the class' constructor."""
# TODO: I still having problems in killing all the elements of the current session
if self._instance is not None:
self._instance.session.close()
del self._instance
self._instance = self._decorated(*args, **kwargs)
def instance(self):
def instance(self, new=False):
"""Returns the singleton instance.
The function :py:meth:`create` must have been called before."""
if self._instance is None:
if self._instance is None or new:
self.create()
return self._instance
......
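The Singleton hunk above gives `instance()` a `new` flag so callers (here, `SequenceNetwork.load(..., session_from_scratch=True)` and the trainer) can force a clean session. A simplified stand-in sketch of the behavior, not bob's decorator-based Singleton:

```python
# Sketch of the Session singleton change: instance(new=True) tears down
# and recreates the wrapped object instead of always reusing it.
class Singleton:
    def __init__(self, decorated):
        self._decorated = decorated
        self._instance = None

    def create(self, *args, **kwargs):
        if self._instance is not None:
            del self._instance   # in bob, the old tf session is closed here
            self._instance = None
        self._instance = self._decorated(*args, **kwargs)

    def instance(self, new=False):
        # Recreate on demand so tests can start from a fresh session.
        if self._instance is None or new:
            self.create()
        return self._instance

class Counter:
    made = 0
    def __init__(self):
        Counter.made += 1

s = Singleton(Counter)
a = s.instance()
b = s.instance()           # reused
c = s.instance(new=True)   # recreated
print(a is b, a is c, Counter.made)  # True False 2
```

Without the `new` flag, a test that loads a checkpoint after another test has trained would inherit the previous graph's session, which is the sphinx/test failure mode this commit works around.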
......@@ -37,6 +37,7 @@ else:
# Be picky about warnings
nitpicky = True
keep_warnings = True
# Ignore entries we cannot easily resolve in other projects' sphinx manuals
nitpick_ignore = []
......
......@@ -71,7 +71,7 @@ Data Shufflers
bob.learn.tensorflow.datashuffler.TripletMemory
bob.learn.tensorflow.datashuffler.TripletWithFastSelectionDisk
bob.learn.tensorflow.datashuffler.TripletWithSelectionDisk
bob.learn.tensorflow.datashuffler.OnLineSampling
bob.learn.tensorflow.datashuffler.OnlineSampling
......@@ -120,7 +120,6 @@ Detailed Information
.. automodule:: bob.learn.tensorflow.trainers
.. automodule:: bob.learn.tensorflow.layers
.. automodule:: bob.learn.tensorflow.datashuffler
.. automodule:: bob.learn.tensorflow.network
.. automodule:: bob.learn.tensorflow.analyzers
.. automodule:: bob.learn.tensorflow.initialization
.. automodule:: bob.learn.tensorflow.loss
\ No newline at end of file
.. vim: set fileencoding=utf-8 :
.. date: Thu Sep 20 11:58:57 CEST 2012
.. _bob.learn.tensorflow:
===========
User guide
......@@ -138,17 +137,17 @@ Type of the trainer?
Here we have one data shuffler for each type of trainer.
You will see in the section `Trainers`_ that we have three types of trainer.
You will see in the section `Trainers <py_api.html#trainers>`_ that we have three types of trainer.
The first one is the regular trainer, which deals with one graph only (for example, if you are training a network with
a softmax loss).
The data shufflers for this type of trainer must be a direct instance of either :py:class:`bob.learn.tensorflow.datashuffler.Memory`
or :py:class:`bob.learn.tensorflow.datashuffler.Disk`.
The second one is the :py:class:`bob.learn.tensorflow.trainers.Siamese` trainer, which is designed to train Siamese networks.
The second one is the :py:class:`bob.learn.tensorflow.trainers.SiameseTrainer` trainer, which is designed to train Siamese networks.
The data shufflers for this type of trainer must be a direct instance of either :py:class:`bob.learn.tensorflow.datashuffler.SiameseDisk` or
:py:class:`bob.learn.tensorflow.datashuffler.SiameseMemory`.
The third one is the :py:class:`bob.learn.tensorflow.trainers.Triplet` trainer, which is designed to train Triplet networks.
The third one is the :py:class:`bob.learn.tensorflow.trainers.TripletTrainer` trainer, which is designed to train Triplet networks.
The data shufflers for this type of trainer must be a direct instance of either :py:class:`bob.learn.tensorflow.datashuffler.TripletDisk`,
:py:class:`bob.learn.tensorflow.datashuffler.TripletMemory`, :py:class:`bob.learn.tensorflow.datashuffler.TripletWithFastSelectionDisk`
or :py:class:`bob.learn.tensorflow.datashuffler.TripletWithSelectionDisk`.
......@@ -159,7 +158,7 @@ How the data is sampled ?
The paper [facenet_2015]_ introduced a new strategy to select triplets to train triplet networks (this is better described
here :py:class:`bob.learn.tensorflow.datashuffler.TripletWithSelectionDisk` and :py:class:`bob.learn.tensorflow.datashuffler.TripletWithFastSelectionDisk`).
This triplet selection relies in the current state of the network and are extensions of :py:class:`bob.learn.tensorflow.datashuffler.OnlineSampling`.
This triplet selection relies on the current state of the network; these shufflers are extensions of `bob.learn.tensorflow.datashuffler.OnlineSampling`.
Architecture
......@@ -213,7 +212,7 @@ Initialization
..............
We have implemented some strategies to initialize the tensorflow variables.
Check it out `Layers <py_api.html#initialization>`_.
Check it out `Initialization <py_api.html#initialization>`_.
Loss
......@@ -242,10 +241,10 @@ To be discussed.
Sandbox
-------
We have a sandbox of examples in a git repository `https://gitlab.idiap.ch/tiago.pereira/bob.learn.tensorflow_sandbox`_
We have a sandbox of examples in the git repository `https://gitlab.idiap.ch/tiago.pereira/bob.learn.tensorflow_sandbox`
The sandbox has some example of training:
- MNIST with softmax
- MNIST with Siamese Network
- MNIST with Triplet Network
- Face recognition with MOBIO database
- Face recognition with CASIA WebFace database
- MNIST with softmax
- MNIST with Siamese Network
- MNIST with Triplet Network
- Face recognition with MOBIO database
- Face recognition with CASIA WebFace database