Documenting

parent 66a416bf
......@@ -31,12 +31,9 @@ class ExperimentAnalizer:
** Parameters **
data_shuffler:
graph:
session:
convergence_threshold:
convergence_reference: References to analize the convergence. Possible values are `eer`, `far10`, `far10`
convergence_threshold:
convergence_reference:
"""
......@@ -112,8 +109,10 @@ class ExperimentAnalizer:
- RANK 10
**Parameters**
negative_scores:
positive_scores:
negative_scores:
positive_scores:
"""
summaries = []
......
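The analyzer above reports identification rates at rank 1 and rank 10 from the negative (impostor) and positive (genuine) scores. For orientation, a rank-k rate can be computed with plain numpy as below; this is only an illustrative sketch, and the helper name and the per-probe score layout are assumptions, not the analyzer's actual code.

import numpy

def rank_k_rate(negatives_per_probe, positives_per_probe, k=1):
    # A probe counts as identified at rank k when fewer than k impostor
    # scores are greater than or equal to its genuine score.
    hits = 0
    for negatives, positive in zip(negatives_per_probe, positives_per_probe):
        if numpy.sum(numpy.asarray(negatives) >= positive) < k:
            hits += 1
    return hits / float(len(positives_per_probe))

# Two probes: the first is identified at rank 1, the second is not -> 0.5
print(rank_k_rate([[0.1, 0.3], [0.7, 0.2]], [0.9, 0.5], k=1))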
......@@ -15,14 +15,30 @@ class Base(object):
This class provides the base functionality to shuffle the data used to train a neural network
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
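To make the parameter list above concrete, a minimal construction sketch follows. It uses the Memory subclass documented further down; the keyword usage, the shape layout, the seed value and the choice of ScaleFactor as normalizer are assumptions based purely on this docstring.

import numpy
from bob.learn.tensorflow.datashuffler import Memory, ScaleFactor

# Purely synthetic data: 100 grayscale 28x28 images with binary labels.
data = numpy.random.rand(100, 28, 28, 1).astype("float32")
labels = numpy.random.randint(0, 2, size=100)

train_data_shuffler = Memory(data, labels,
                             input_shape=[28, 28, 1],
                             batch_size=16,
                             seed=10,
                             normalizer=ScaleFactor())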
......@@ -21,14 +21,29 @@ class Disk(Base):
The data is loaded on the fly.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......@@ -88,8 +103,11 @@ class Disk(Base):
** Returns **
data: Selected samples
labels: Correspondent labels
data:
Selected samples
labels:
Corresponding labels
"""
# Shuffling samples
......
......@@ -15,14 +15,29 @@ class Memory(Base):
This data shuffler deals with in-memory databases stored in a :py:class:`numpy.ndarray`
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......@@ -53,8 +68,11 @@ class Memory(Base):
** Returns **
data: Selected samples
labels: Correspondent labels
data:
Selected samples
labels:
Corresponding labels
"""
# Shuffling samples
indexes = numpy.array(range(self.data.shape[0]))
......
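The hunk above only shows the first lines of the batch selection ("# Shuffling samples"). A simplified numpy sketch of that logic is given below for orientation; it ignores the normalizer and data augmentation steps and is not the method's actual body.

import numpy

def get_batch_sketch(data, labels, batch_size, rng):
    # Shuffle all sample indexes and keep the first batch_size of them.
    indexes = numpy.arange(data.shape[0])
    rng.shuffle(indexes)
    selected = indexes[:batch_size]
    return data[selected], labels[selected]

rng = numpy.random.RandomState(10)  # the documented `seed` parameter
batch_data, batch_labels = get_batch_sketch(
    numpy.random.rand(100, 28, 28, 1), numpy.random.randint(0, 10, 100), 16, rng)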
......@@ -19,14 +19,31 @@ class SiameseDisk(Siamese, Disk):
The data is loaded on the fly.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
input_shape,
......
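Siamese shufflers feed pairs rather than single samples. The toy sampler below illustrates how a genuine or impostor pair and its label can be put together; the label convention (1 for genuine, 0 for impostor) is chosen to match the contrastive loss formula documented later in this commit, and the function itself is not part of the class above.

import numpy

def make_pair(data, labels, genuine, rng):
    anchor = rng.randint(len(labels))
    same_class = labels == labels[anchor]
    candidates = numpy.where(same_class if genuine else ~same_class)[0]
    other = rng.choice(candidates)
    # Label 1 for a genuine pair, 0 for an impostor pair (see ContrastiveLoss below).
    return data[anchor], data[other], 1 if genuine else 0

rng = numpy.random.RandomState(10)
left, right, label = make_pair(numpy.random.rand(20, 28, 28, 1),
                               numpy.random.randint(0, 4, 20), genuine=True, rng=rng)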
......@@ -17,14 +17,31 @@ class SiameseMemory(Siamese, Memory):
The data is loaded on the fly.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
......@@ -23,14 +23,31 @@ class TripletDisk(Triplet, Disk):
The data is loaded on the fly.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
......@@ -17,14 +17,31 @@ class TripletMemory(Triplet, Memory):
The data is loaded on the fly.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
......@@ -35,14 +35,30 @@ class TripletWithFastSelectionDisk(Triplet, Disk, OnLineSampling):
argmin(||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2)
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
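The selection criterion quoted above compares the anchor-positive and anchor-negative squared distances in embedding space. The numpy sketch below picks, for one anchor, the negative most likely to violate that criterion (the closest sample from another class); it is only an illustration, and the 128-dimensional embedding size is an assumption, not something fixed by this class.

import numpy

def hardest_negative(anchor_embedding, embeddings, labels, anchor_label):
    # Squared L2 distance from the anchor to every embedding ...
    distances = numpy.sum((embeddings - anchor_embedding) ** 2, axis=1)
    # ... ignoring samples of the anchor's own class.
    distances[labels == anchor_label] = numpy.inf
    return int(numpy.argmin(distances))

embeddings = numpy.random.rand(50, 128)          # f(x) for 50 samples
labels = numpy.random.randint(0, 5, 50)
negative_index = hardest_negative(embeddings[0], embeddings, labels, labels[0])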
......@@ -23,14 +23,30 @@ class TripletWithSelectionDisk(Triplet, Disk, OnLineSampling):
The selection of the triplets is random.
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
......@@ -32,14 +32,31 @@ class TripletWithSelectionMemory(Triplet, Memory, OnLineSampling):
argmin(||f(x_a) - f(x_p)||^2 < ||f(x_a) - f(x_n)||^2)
**Parameters**
data: Input data to be trainer
labels: Labels. These labels should be set from 0..1
input_shape: The shape of the inputs
input_dtype: The type of the data,
batch_size: Batch size
seed: The seed of the random number generator
data_augmentation: The algorithm used for data augmentation. Look :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer: The algorithm used for feature scaling. Look :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
data:
Input data to be used for training
labels:
Labels. These labels should be set from 0..1
input_shape:
The shape of the inputs
input_dtype:
The type of the data
batch_size:
Batch size
seed:
The seed of the random number generator
data_augmentation:
The algorithm used for data augmentation. See :py:class:`bob.learn.tensorflow.datashuffler.DataAugmentation`
normalizer:
The algorithm used for feature scaling. See :py:class:`bob.learn.tensorflow.datashuffler.ScaleFactor`, :py:class:`bob.learn.tensorflow.datashuffler.Linear` and :py:class:`bob.learn.tensorflow.datashuffler.MeanOffset`
"""
def __init__(self, data, labels,
......
......@@ -13,11 +13,22 @@ class AveragePooling(MaxPooling):
Wraps the tensorflow average pooling
**Parameters**
name: The name of the layer
shape: Shape of the pooling kernel
stride: Shape of the stride
batch_norm: Do batch norm?
activation: Tensor Flow activation
name: str
The name of the layer
shape:
Shape of the pooling kernel
stride:
Shape of the stride
batch_norm: bool
Whether to apply batch normalization
activation:
TensorFlow activation function
"""
def __init__(self, name, shape=[1, 2, 2, 1],
......
......@@ -15,15 +15,34 @@ class Conv2D(Layer):
2D Convolution
**Parameters**
name: The name of the layer
activation: Tensor Flow activation
kernel_size: Size of the convolutional kernel
filters: Number of filters
stride: Shape of the stride
weights_initialization: Initialization type for the weights
bias_initialization: Initialization type for the weights
batch_norm: Do batch norm?
use_gpu: Store data in the GPU
name: str
The name of the layer
activation:
TensorFlow activation function
kernel_size: int
Size of the convolutional kernel
filters: int
Number of filters
stride:
Shape of the stride
weights_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the weights
bias_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the biases
batch_norm: bool
Whether to apply batch normalization
use_gpu: bool
Store the data on the GPU
"""
def __init__(self, name, activation=None,
......
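For orientation, a construction sketch using the documented parameters follows. The import path bob.learn.tensorflow.layers and the use of tf.nn.relu as the activation are assumptions; the initializers are left at their defaults.

import tensorflow as tf
from bob.learn.tensorflow.layers import Conv2D, MaxPooling

# A 3x3 convolution with 16 filters followed by 2x2 max pooling.
conv1 = Conv2D(name="conv1", activation=tf.nn.relu, kernel_size=3, filters=16)
pool1 = MaxPooling(name="pool1", shape=[1, 2, 2, 1])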
......@@ -14,8 +14,12 @@ class Dropout(Layer):
Dropout
**Parameters**
name: The name of the layer
keep_prob: With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0.
name: str
The name of the layer
keep_prob: float
With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0.
"""
def __init__(self, name,
......
......@@ -16,14 +16,29 @@ class FullyConnected(Layer):
Fully Connected layer
**Parameters**
name: The name of the layer
output_dim: Size of the output
activation: Tensor Flow activation
weights_initialization: Initialization type for the weights
bias_initialization: Initialization type for the weights
batch_norm: Do batch norm?
use_gpu: Store data in the GPU
"""
name: str
The name of the layer
output_dim: int
Size of the output
activation:
TensorFlow activation function
weights_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the weights
bias_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the biases
batch_norm: bool
Whether to apply batch normalization
use_gpu: bool
Store the data on the GPU
"""
def __init__(self, name,
output_dim,
......
......@@ -14,12 +14,23 @@ class Layer(object):
Layer base class
**Parameters**
name: The name of the layer
activation: Tensor Flow activation
weights_initialization: Initialization type for the weights
bias_initialization: Initialization type for the weights
batch_norm: Do batch norm?
use_gpu: Store data in the GPU
name: str
The name of the layer
activation:
TensorFlow activation function
weights_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the weights
bias_initialization: :py:class:`bob.learn.tensorflow.initialization.Initialization`
Initialization type for the biases
batch_norm: bool
Whether to apply batch normalization
use_gpu: bool
Store the data on the GPU
"""
......
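Layer subclasses such as Conv2D, MaxPooling and FullyConnected are meant to be stacked into the SequenceNetwork that appears later in this commit. The sketch below shows one plausible way to do that; the add() method and the import paths are assumptions about the SequenceNetwork API, not something documented in these hunks.

import tensorflow as tf
from bob.learn.tensorflow.layers import Conv2D, MaxPooling, FullyConnected
from bob.learn.tensorflow.network import SequenceNetwork

net = SequenceNetwork()
net.add(Conv2D(name="conv1", activation=tf.nn.relu, kernel_size=3, filters=16))
net.add(MaxPooling(name="pool1"))
net.add(FullyConnected(name="fc1", output_dim=10, activation=None))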
......@@ -12,11 +12,22 @@ class MaxPooling(Layer):
Wraps the tensorflow max pooling
**Parameters**
name: The name of the layer
shape: Shape of the pooling kernel
stride: Shape of the stride
batch_norm: Do batch norm?
activation: Tensor Flow activation
name: str
The name of the layer
shape:
Shape of the pooling kernel
stride:
Shape of the stride
batch_norm: bool
Whether to apply batch normalization
activation:
TensorFlow activation function
"""
def __init__(self, name, shape=[1, 2, 2, 1],
......
......@@ -20,10 +20,17 @@ class ContrastiveLoss(BaseLoss):
L = 0.5 * (Y) * D^2 + 0.5 * (1-Y) * {max(0, margin - D)}^2
**Parameters**
left_feature: First element of the pair
right_feature: Second element of the pair
label: Label of the pair (0 or 1)
margin: Contrastive margin
left_feature:
First element of the pair
right_feature:
Second element of the pair
label:
Label of the pair (0 or 1)
margin:
Contrastive margin
"""
......
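The formula above can be transcribed directly into numpy to check a single pair by hand. The sketch below is only that transcription (the class builds the equivalent TensorFlow graph); D is taken as the Euclidean distance between the two features, and the margin of 1.0 is an arbitrary choice for the sketch.

import numpy

def contrastive_loss_sketch(left_feature, right_feature, label, margin=1.0):
    # L = 0.5 * Y * D^2 + 0.5 * (1 - Y) * max(0, margin - D)^2
    D = numpy.linalg.norm(left_feature - right_feature)
    return 0.5 * label * D ** 2 + 0.5 * (1 - label) * max(0.0, margin - D) ** 2

# A genuine pair (label 1) with nearby features gives a small loss.
print(contrastive_loss_sketch(numpy.array([0.1, 0.2]), numpy.array([0.1, 0.25]), label=1))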
......@@ -22,10 +22,18 @@ class TripletLoss(BaseLoss):
L = sum( |f_a - f_p|^2 - |f_a - f_n|^2 + \lambda)
**Parameters**
left_feature: First element of the pair
right_feature: Second element of the pair
label: Label of the pair (0 or 1)
margin: Contrastive margin
left_feature:
First element of the pair
right_feature:
Second element of the pair
label:
Label of the pair (0 or 1)
margin:
Triplet margin
"""
def __init__(self, margin=5.0):
......
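As with the contrastive loss, the triplet formula can be checked with a direct numpy transcription. The sketch below sums the per-triplet terms over a batch exactly as written above; whether the class also clamps each term at zero is not visible in this hunk, so no clamping is applied here. The default margin of 5.0 mirrors the constructor shown above; the 128-dimensional embeddings are an assumption.

import numpy

def triplet_loss_sketch(anchors, positives, negatives, margin=5.0):
    # L = sum( |f_a - f_p|^2 - |f_a - f_n|^2 + margin )
    d_pos = numpy.sum((anchors - positives) ** 2, axis=1)
    d_neg = numpy.sum((anchors - negatives) ** 2, axis=1)
    return numpy.sum(d_pos - d_neg + margin)

anchors = numpy.random.rand(8, 128)
positives = numpy.random.rand(8, 128)
negatives = numpy.random.rand(8, 128)
print(triplet_loss_sketch(anchors, positives, negatives))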
......@@ -322,8 +322,8 @@ class SequenceNetwork(six.with_metaclass(abc.ABCMeta, object)):
session = Session.instance().session
self.sequence_net = pickle.loads(open(path+"_sequence_net.pickle").read())
#saver = tf.train.import_meta_graph(path + ".meta", clear_devices=clear_devices)
saver = tf.train.import_meta_graph(path + ".meta")
saver = tf.train.import_meta_graph(path + ".meta", clear_devices=clear_devices)
#saver = tf.train.import_meta_graph(path + ".meta")
saver.restore(session, path)
self.inference_graph = tf.get_collection("inference_graph")[0]
self.inference_placeholder = tf.get_collection("inference_placeholder")[0]
......
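For context on the clear_devices change above: in TensorFlow 1.x, tf.train.import_meta_graph(..., clear_devices=True) drops the device placements recorded in the meta-graph, so a model checkpointed on a machine with GPUs can be restored on a host without them. A minimal usage sketch (the checkpoint prefix is hypothetical):

import tensorflow as tf

path = "/tmp/model.ckp"  # hypothetical checkpoint prefix
session = tf.Session()
saver = tf.train.import_meta_graph(path + ".meta", clear_devices=True)
saver.restore(session, path)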
......@@ -103,6 +103,7 @@ def test_cnn_pretrained():
temp_dir=directory2,
model_from_file=os.path.join(directory, "model.ckp"))
#import ipdb; ipdb.set_trace();
trainer.train(train_data_shuffler)
accuracy = validate_network(validation_data, validation_labels, scratch)
......
......@@ -23,21 +23,42 @@ class SiameseTrainer(Trainer):
Trainer for siamese networks.
**Parameters**
architecture: The architecture that you want to run. Should be a :py:class:`bob.learn.tensorflow.network.SequenceNetwork`
optimizer: One of the tensorflow optimizers https://www.tensorflow.org/versions/r0.10/api_docs/python/train.html
use_gpu: Use GPUs in the training
loss: Loss
temp_dir: The output directory
base_learning_rate: Initial learning rate