
MLPAlgorithm: PAD algorithm, version 1

Merged Olegs NIKISINS requested to merge mlp_algorithm into master
Files: 5
@@ -6,8 +6,6 @@
 #==============================================================================
 # Import here:
-import numpy as np
-
 import torch
@@ -16,17 +14,14 @@ import torch
"""
Transformations to be applied to the input 1D numpy arrays (feature vectors).
Here, for demonstrative purposes, the transformation is mean std-normalization,
where mean and std values are just numpy generated vectors. In real applications,
normalizers must be computed in the meaningfull way. This config is just for
test purposes.
Only conversion to Tensor and unsqueezing is needed to match the input of
TwoLayerMLP network
"""
def transform(x):
"""
Transformation function applying dummy mean-std normalization and converting
input numpy feature vectors to PyTorch tensors, making them compatible with
MLP.
Convert input to Tensor and unsqueeze to match the input of
TwoLayerMLP network.
Arguments
---------
@@ -39,19 +34,7 @@ def transform(x):
     Torch tensor, transformed ``x`` to be used as MLP input.
     """
-    features_mean = np.zeros(x.shape)
-    features_std = np.ones(x.shape)
-    row_norm_list = []
-    for row in x:  # row is a sample
-        row_norm = (row - features_mean) / features_std
-        row_norm_list.append(row_norm)
-    x_norm = np.vstack(row_norm_list)
-    x_norm.squeeze()
-    return torch.Tensor(x_norm).unsqueeze(0)
+    return torch.Tensor(x).unsqueeze(0)

 """
@@ -66,7 +49,7 @@ from bob.learn.pytorch.architectures import TwoLayerMLP as Network
 kwargs to be used for ``Network`` initialization. The name must be ``network_kwargs``.
 """
 network_kwargs = {}
-network_kwargs['in_features'] = 100
+network_kwargs['in_features'] = 1296
 network_kwargs['n_hidden_relu'] = 10
 network_kwargs['apply_sigmoid'] = False  # don't use sigmoid to make the scores more even
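For orientation, a hand-rolled sketch of a network matching these kwargs (this is an illustrative stand-in, not the actual ``bob.learn.pytorch`` ``TwoLayerMLP`` implementation): 1296 input features, 10 hidden ReLU units, one output score, with the sigmoid optionally disabled as in the config.

```python
import torch
import torch.nn as nn

class TwoLayerMLPSketch(nn.Module):
    """Illustrative two-layer MLP mirroring the config's network_kwargs."""

    def __init__(self, in_features=1296, n_hidden_relu=10, apply_sigmoid=False):
        super().__init__()
        self.fc1 = nn.Linear(in_features, n_hidden_relu)
        self.fc2 = nn.Linear(n_hidden_relu, 1)
        self.apply_sigmoid = apply_sigmoid

    def forward(self, x):
        h = torch.relu(self.fc1(x))   # hidden ReLU layer
        out = self.fc2(h)             # one raw score per sample
        # With apply_sigmoid=False the raw score is returned, which keeps
        # the score distribution unsquashed.
        return torch.sigmoid(out) if self.apply_sigmoid else out

network_kwargs = {'in_features': 1296, 'n_hidden_relu': 10, 'apply_sigmoid': False}
net = TwoLayerMLPSketch(**network_kwargs)
score = net(torch.ones(1, 1296))  # input shaped like transform()'s output
print(tuple(score.shape))  # (1, 1)
```

The input shape ``(1, 1296)`` is exactly what the ``transform`` function in this file produces, which is why ``in_features`` was updated to 1296.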