"""An MLP is a representation of a Multi-Layer Perceptron.
This implementation is feed-forward and fully-connected.
The implementation allows setting a global and the output activation functions.
References to fully-connected feed-forward networks: Bishop's Pattern Recognition and Machine Learning, Chapter 5. Figure 5.1 shows what is programmed.
MLPs normally are multi-layered systems, with 1 or more hidden layers.
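As a rough sketch of the computation, for a network with a single hidden layer of :math:`M` units and input dimension :math:`D` (notation follows Bishop's Chapter 5 and is assumed here, not taken from this module; :math:`h` is the hidden activation and :math:`\sigma` the output activation), the k-th output is:

.. math::

   y_k(\mathbf{x}, \mathbf{w}) = \sigma\left( \sum_{j=1}^{M} w_{kj}^{(2)} \, h\left( \sum_{i=1}^{D} w_{ji}^{(1)} x_i + w_{j0}^{(1)} \right) + w_{k0}^{(2)} \right)

With more hidden layers, the inner sum is repeated once per layer, each time applying the hidden activation to the previous layer's outputs.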
**Parameters**
output_shape: number of neurons in the output layer.
hidden_layers: :py:class:`list` whose length is the number of hidden layers and whose elements give the number of neurons in each layer, e.g. ``[64, 32]`` for two hidden layers of 64 and 32 neurons.
hidden_activation: Activation function of the hidden layers. Possible values can be seen