User guide
This section includes the machine/trainer guides for learning techniques available in this package.
Machines
Machines are one of the core components of |project|. They represent statistical models or other functions defined by parameters that can be learnt or set by using Trainers.
K-means machines
k-means is a clustering method which aims to partition a set of observations into k clusters. The training procedure is described further below. Here, we explain only how to use the resulting machine. For the sake of example, we create a new :py:class:`bob.learn.misc.KMeansMachine` as follows:
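A minimal sketch follows; the constructor arguments (the number of clusters followed by the feature dimensionality) and the writable ``means`` attribute are assumptions about the API, mirroring the other machines in this guide:

>>> import numpy
>>> import bob.learn.misc
>>> machine = bob.learn.misc.KMeansMachine(2, 3)   # 2 clusters with a feature dimensionality of 3
>>> machine.means = numpy.array([[1, 0, 0], [0, 0, 1]], 'float64')   # defines the two cluster centres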
Then, given some input data, it is possible to determine the cluster to which the data is closest, as well as the corresponding minimum distance.
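For example (the ``get_closest_mean`` and ``get_min_distance`` query methods are assumed names and may differ in your version of the package):

>>> sample = numpy.array([2, 1, -2], 'float64')
>>> index, sq_distance = machine.get_closest_mean(sample)   # index of the closest cluster and the squared distance to it
>>> min_sq_distance = machine.get_min_distance(sample)      # squared distance to the closest cluster only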
Gaussian machines
The :py:class:`bob.learn.misc.Gaussian` represents a multivariate diagonal Gaussian (or normal) distribution. In this context, a diagonal Gaussian refers to the covariance matrix of the distribution being diagonal. When the covariance matrix is diagonal, each variable in the distribution is independent of the others.
Objects of this class are normally used as building blocks for more complex :py:class:`bob.learn.misc.GMMMachine` or GMM objects, but can also be used individually. Here is how to create one multivariate diagonal Gaussian distribution:
>>> g = bob.learn.misc.Gaussian(2) #bi-variate diagonal normal distribution
>>> g.mean = numpy.array([0.3, 0.7], 'float64')
>>> g.mean
array([ 0.3, 0.7])
>>> g.variance = numpy.array([0.2, 0.1], 'float64')
>>> g.variance
array([ 0.2, 0.1])
Once the :py:class:`bob.learn.misc.Gaussian` has been set, you can use it to estimate the log-likelihood of an input feature vector with a matching number of dimensions:
>>> log_likelihood = g(numpy.array([0.4, 0.4], 'float64'))
As with other machines you can save and re-load machines of this type using :py:meth:`bob.learn.misc.Gaussian.save` and the class constructor respectively.
Gaussian mixture models
The :py:class:`bob.learn.misc.GMMMachine` represents a Gaussian mixture model (GMM), which is a weighted mixture of several :py:class:`bob.learn.misc.Gaussian` components.
>>> gmm = bob.learn.misc.GMMMachine(2,3) # Mixture of two diagonal Gaussians of dimension 3
By default, the diagonal Gaussian distributions of the GMM are initialized with zero mean and unit variance, and the weights are identical. These can be updated using the :py:attr:`bob.learn.misc.GMMMachine.means`, :py:attr:`bob.learn.misc.GMMMachine.variances` and :py:attr:`bob.learn.misc.GMMMachine.weights` attributes.
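For instance, for the mixture of two 3-dimensional Gaussians created above (the values are arbitrary, and the layout of one row per Gaussian is assumed):

>>> gmm.weights = numpy.array([0.4, 0.6], 'float64')                 # one weight per Gaussian
>>> gmm.means = numpy.array([[1, 6, 2], [4, 3, 2]], 'float64')       # one mean vector per Gaussian
>>> gmm.variances = numpy.array([[1, 2, 1], [2, 1, 2]], 'float64')   # one diagonal variance vector per Gaussian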
Once the :py:class:`bob.learn.misc.GMMMachine` has been set, you can use it to estimate the log-likelihood of an input feature vector with a matching number of dimensions:
>>> log_likelihood = gmm(numpy.array([5.1, 4.7, -4.9], 'float64'))
As with other machines you can save and re-load machines of this type using :py:meth:`bob.learn.misc.GMMMachine.save` and the class constructor respectively.
Gaussian mixture model statistics
The :py:class:`bob.learn.misc.GMMStats` is a container for the sufficient statistics of a GMM distribution.
Given a GMM, the sufficient statistics of a sample can be computed as follows:
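A minimal sketch of this step follows; the :py:class:`bob.learn.misc.GMMStats` constructor arguments (number of Gaussians, feature dimensionality) and the ``acc_statistics`` accumulation method are assumptions about the API:

>>> sample = numpy.array([0.5, 4.5, 1.5], 'float64')
>>> gs = bob.learn.misc.GMMStats(2, 3)   # container matching a mixture of 2 Gaussians of dimension 3
>>> gmm.acc_statistics(sample, gs)       # accumulate the sufficient statistics of the sample into gs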
The sufficient statistics can then be accessed (or set, as shown below) through the following attributes.
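The attribute names used below (``t``, ``log_likelihood``, ``n``, ``sum_px`` and ``sum_pxx``) are assumptions about the container, holding the number of samples, the accumulated log-likelihood and the zeroth-, first- and second-order statistics, respectively:

>>> gs = bob.learn.misc.GMMStats(2, 3)
>>> gs.t = 1                                    # number of accumulated samples
>>> gs.log_likelihood = -3.                     # log-likelihood of the accumulated samples
>>> gs.n = numpy.array([0.4, 0.6], 'float64')   # zeroth-order statistics
>>> gs.sum_px = numpy.array([[1., 2., 3.], [4., 5., 6.]], 'float64')          # first-order statistics
>>> gs.sum_pxx = numpy.array([[10., 20., 30.], [40., 50., 60.]], 'float64')   # second-order statistics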
Joint Factor Analysis
Joint Factor Analysis (JFA) [1] [2] is a session variability modelling technique built on top of the Gaussian mixture modelling approach. It utilises a within-class subspace U, a between-class subspace V, and a subspace for the residuals D to capture and suppress a significant portion of within-class (inter-session) variation.
An instance of :py:class:`bob.learn.misc.JFABase` carries information about the matrices U, V and D, which can be shared between several classes. In contrast, after the enrolment phase, an instance of :py:class:`bob.learn.misc.JFAMachine` carries class-specific information about the latent variables y and z.
An instance of :py:class:`bob.learn.misc.JFABase` can be initialized as follows, given an existing GMM:
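A sketch, assuming the constructor takes the prior GMM followed by the ranks of U and V, and that the matrices are exposed as the writable attributes ``u``, ``v`` and ``d``:

>>> jfa_base = bob.learn.misc.JFABase(gmm, 2, 2)   # ranks of U and V both set to 2
>>> jfa_base.u = numpy.ones((6, 2), 'float64')     # within-class subspace U (supervector dimension 6 x rank 2)
>>> jfa_base.v = numpy.ones((6, 2), 'float64')     # between-class subspace V
>>> jfa_base.d = numpy.ones(6, 'float64')          # diagonal of the residual matrix D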
Next, this :py:class:`bob.learn.misc.JFABase` can be shared by several instances of :py:class:`bob.learn.misc.JFAMachine`, the initialization being as follows:
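For example (the writable ``y`` and ``z`` attributes are assumed names for the class-specific latent variables):

>>> m = bob.learn.misc.JFAMachine(jfa_base)            # class-specific machine sharing jfa_base
>>> m.y = numpy.array([1, 2], 'float64')               # latent variable y (dimension = rank of V)
>>> m.z = numpy.array([3, 4, 1, 2, 0, 1], 'float64')   # latent variable z (dimension = supervector dimension)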
Once the :py:class:`bob.learn.misc.JFAMachine` has been configured for a specific class, the log-likelihood (score) that an input sample belongs to the enrolled class can be estimated by first computing the GMM sufficient statistics of this input sample and then calling :py:meth:`bob.learn.misc.JFAMachine.forward` on them.
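For example (``acc_statistics`` is again an assumed name for the accumulation method):

>>> gs = bob.learn.misc.GMMStats(2, 3)
>>> gmm.acc_statistics(numpy.array([0.5, 4.5, 1.5], 'float64'), gs)   # sufficient statistics of the input sample
>>> score = m.forward(gs)                                             # log-likelihood (score) for the enrolled class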
As with other machines you can save and re-load machines of this type using :py:meth:`bob.learn.misc.JFAMachine.save` and the class constructor respectively.
Inter-Session Variability
Similarly to Joint Factor Analysis, Inter-Session Variability (ISV) modelling [3] [2] is a session variability modelling technique built on top of the Gaussian mixture modelling approach. It utilises a within-class subspace U and a subspace for the residuals D to capture and suppress a significant portion of within-class (inter-session) variation. The main difference compared to JFA is the absence of the between-class subspace V.
Similarly to JFA, an instance of :py:class:`bob.learn.misc.ISVBase` carries information about the matrices U and D, which can be shared between several classes, whereas an instance of :py:class:`bob.learn.misc.ISVMachine` carries class-specific information about the latent variable z.
An instance of :py:class:`bob.learn.misc.ISVBase` can be initialized as follows, given an existing GMM:
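A sketch, assuming the constructor takes the prior GMM followed by the rank of U, with the matrices exposed as the writable attributes ``u`` and ``d``:

>>> isv_base = bob.learn.misc.ISVBase(gmm, 2)    # rank of U set to 2
>>> isv_base.u = numpy.ones((6, 2), 'float64')   # within-class subspace U
>>> isv_base.d = numpy.ones(6, 'float64')        # diagonal of the residual matrix D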
Next, this :py:class:`bob.learn.misc.ISVBase` can be shared by several instances of :py:class:`bob.learn.misc.ISVMachine`, the initialization being as follows:
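For example (the writable ``z`` attribute is an assumed name for the class-specific latent variable):

>>> m = bob.learn.misc.ISVMachine(isv_base)            # class-specific machine sharing isv_base
>>> m.z = numpy.array([3, 4, 1, 2, 0, 1], 'float64')   # latent variable z (dimension = supervector dimension)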
Once the :py:class:`bob.learn.misc.ISVMachine` has been configured for a specific class, the log-likelihood (score) that an input sample belongs to the enrolled class can be estimated by first computing the GMM sufficient statistics of this input sample and then calling :py:meth:`bob.learn.misc.ISVMachine.forward` on them.
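For example (``acc_statistics`` is again an assumed name for the accumulation method):

>>> gs = bob.learn.misc.GMMStats(2, 3)
>>> gmm.acc_statistics(numpy.array([0.5, 4.5, 1.5], 'float64'), gs)   # sufficient statistics of the input sample
>>> score = m.forward(gs)                                             # log-likelihood (score) for the enrolled class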
As with other machines you can save and re-load machines of this type using :py:meth:`bob.learn.misc.ISVMachine.save` and the class constructor respectively.
Total Variability (i-vectors)
Total Variability (TV) modelling [4] is a front-end initially introduced for speaker recognition, which aims at describing samples by vectors of low dimensionality called i-vectors. The model consists of a subspace T and a residual diagonal covariance matrix \Sigma, which are then used to extract i-vectors, and is built upon the GMM approach.
An instance of the class :py:class:`bob.learn.misc.IVectorMachine` carries information about these two matrices. This can be initialized as follows:
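A sketch, assuming the constructor takes the prior GMM followed by the rank of T, with the matrices exposed as the writable attributes ``t`` and ``sigma``:

>>> ivec_machine = bob.learn.misc.IVectorMachine(gmm, 2)   # subspace T of rank 2
>>> ivec_machine.t = numpy.ones((6, 2), 'float64')         # total variability subspace T
>>> ivec_machine.sigma = numpy.ones(6, 'float64')          # diagonal of the residual covariance matrix \Sigma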
Once the :py:class:`bob.learn.misc.IVectorMachine` has been set, the extraction of an i-vector w_{ij} is performed in two steps: the GMM sufficient statistics of the sample are first computed, and the i-vector is then estimated from them:
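For example (``acc_statistics`` and ``forward`` are assumed names for the two steps):

>>> gs = bob.learn.misc.GMMStats(2, 3)
>>> gmm.acc_statistics(numpy.array([0.5, 4.5, 1.5], 'float64'), gs)   # step 1: GMM sufficient statistics
>>> w_ij = ivec_machine.forward(gs)                                   # step 2: i-vector of the sample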
As with other machines you can save and re-load machines of this type using :py:meth:`bob.learn.misc.IVectorMachine.save` and the class constructor respectively.
Probabilistic Linear Discriminant Analysis (PLDA)
Probabilistic Linear Discriminant Analysis [5] [6] is a probabilistic model that incorporates components describing both between-class and within-class variations. Given a mean \mu, between-class and within-class subspaces F and G and residual noise \epsilon with zero mean and diagonal covariance matrix \Sigma, the model assumes that a sample x_{i,j} is generated by the following process:
x_{i,j} = \mu + F h_{i} + G w_{i,j} + \epsilon_{i,j}
Information about a PLDA model (\mu, F, G and \Sigma) is carried by an instance of the class :py:class:`bob.learn.misc.PLDABase`.
>>> ### This creates a PLDABase container for input features of dimensionality 3,
>>> ### and with subspaces F and G of rank 1 and 2 respectively.
>>> pldabase = bob.learn.misc.PLDABase(3,1,2)
Class-specific information (usually from enrollment samples) is contained in an instance of :py:class:`bob.learn.misc.PLDAMachine`, which must be attached to a given :py:class:`bob.learn.misc.PLDABase`. Once done, log-likelihood computations can be performed.
>>> plda = bob.learn.misc.PLDAMachine(pldabase)
>>> samples = numpy.array([[3.5,-3.4,102], [4.5,-4.3,56]], dtype=numpy.float64)
>>> loglike = plda.compute_log_likelihood(samples)
Trainers
In the previous section, the concept of a machine was introduced. A machine is fed some input data, processes it and returns an output. In |project|, machines can be trained using trainers.
K-means
k-means [7] is a clustering method, which aims to partition a set of observations into k clusters. This is an unsupervised technique. As for PCA [1], which is implemented in the :py:class:`bob.learn.linear.PCATrainer` class, the training data is passed in a 2D :py:class:`numpy.ndarray` container.
The training procedure will learn the means for the :py:class:`bob.learn.misc.KMeansMachine`. The number k of means is given when creating the machine, as well as the dimensionality of the features.
The training procedure for k-means is an Expectation-Maximization-based [8] algorithm. There are several options that can be set, such as the maximum number of iterations and the criterion used to determine whether convergence has occurred. After setting these options, the training procedure can be called.
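A sketch of the whole procedure follows; the :py:class:`bob.learn.misc.KMeansTrainer` class name, its ``max_iterations`` and ``convergence_threshold`` attributes and its ``train`` method are assumptions about the API:

>>> data = numpy.array([[3, -3, 100], [4, -4, 98], [3.5, -3.5, 99], [-7, 7, -100], [-5, 5, -101]], 'float64')
>>> kmeans = bob.learn.misc.KMeansMachine(2, 3)   # 2 means with a feature dimensionality of 3
>>> kmeansTrainer = bob.learn.misc.KMeansTrainer()
>>> kmeansTrainer.max_iterations = 200            # maximum number of EM iterations
>>> kmeansTrainer.convergence_threshold = 1e-5    # stop once the improvement falls below this threshold
>>> kmeansTrainer.train(kmeans, data)             # learns kmeans.means from the data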
Maximum likelihood for Gaussian mixture model
A Gaussian mixture model (GMM) [9] is a common probabilistic model. In order to train the parameters of such a model, it is common to use a maximum-likelihood (ML) approach [10]. To do this, we use an Expectation-Maximization (EM) algorithm [8]. Let's first start by creating a :py:class:`bob.learn.misc.GMMMachine`. By default, all of the Gaussians have zero mean and unit variance, and all the weights are equal. As a starting point, we could set the means to the ones obtained with k-means [7].
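For instance (copying the k-means solution relies on both machines exposing a ``means`` attribute, which is assumed here):

>>> gmm = bob.learn.misc.GMMMachine(2, 3)   # mixture of 2 diagonal Gaussians of dimension 3
>>> gmm.means = kmeans.means                # initialize the means with the k-means solution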
The |project| class to learn the parameters of a GMM [9] using ML [10] is :py:class:`bob.learn.misc.ML_GMMTrainer`. It uses an EM-based [8] algorithm and requires the user to specify which parameters of the GMM are updated at each iteration (means, variances and/or weights). In addition, and as for k-means [7], it has parameters such as the maximum number of iterations and the criterion used to determine if the parameters have converged.
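A sketch, assuming the constructor flags select which of the means, variances and weights are updated, and that the trainer exposes the same ``max_iterations``, ``convergence_threshold`` and ``train`` members as the k-means trainer above:

>>> trainer = bob.learn.misc.ML_GMMTrainer(True, True, True)   # update means, variances and weights
>>> trainer.convergence_threshold = 1e-5
>>> trainer.max_iterations = 200
>>> trainer.train(gmm, data)   # refines the GMM parameters on the training data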
MAP-adaptation for Gaussian mixture model
|project| also supports the training of GMMs [9] using a maximum a posteriori (MAP) approach [11]. MAP is closely related to the ML [10] technique but it incorporates a prior on the quantity that we want to estimate. In our case, this prior is a GMM [9]. Based on this prior model and some training data, a new model, the MAP estimate, will be adapted.
Let's consider that the previously trained GMM [9] is our prior model.
The training data used to compute the MAP estimate [11] is again stored in a 2D :py:class:`numpy.ndarray` container.
The |project| class used to perform MAP adaptation training [11] is :py:class:`bob.learn.misc.MAP_GMMTrainer`. As with the ML estimate [10], it uses an EM-based [8] algorithm and requires the user to specify which parts of the GMM are adapted at each iteration (means, variances and/or weights). It also has parameters such as the maximum number of iterations and the criterion used to determine if the parameters have converged, as well as a relevance factor which indicates the importance we give to the prior. Once the trainer has been created, a prior GMM [9] needs to be set.
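A sketch of the adaptation; the constructor arguments (the relevance factor followed by the update flags) and the ``set_prior_gmm`` method are assumptions about the API, and the training values are arbitrary:

>>> dataMAP = numpy.array([[7, -7, 102], [6, -6, 103], [-3.5, 3.5, -97]], 'float64')
>>> relevance_factor = 4.
>>> map_trainer = bob.learn.misc.MAP_GMMTrainer(relevance_factor, True, False, False)   # adapt the means only
>>> map_trainer.max_iterations = 200
>>> map_trainer.set_prior_gmm(gmm)                 # the previously trained GMM acts as the prior
>>> gmmAdapted = bob.learn.misc.GMMMachine(2, 3)   # the MAP estimate is written into this machine
>>> map_trainer.train(gmmAdapted, dataMAP)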
Joint Factor Analysis
The training of the subspaces U, V and D of a Joint Factor Analysis model is performed in two steps. First, GMM sufficient statistics of the training samples should be computed against the UBM GMM. Once done, we get a training set of GMM statistics:
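The layout sketched below, a list of per-class lists of :py:class:`bob.learn.misc.GMMStats` (with ``acc_statistics`` again an assumed name), is an assumption about what the trainer expects:

>>> stats1 = bob.learn.misc.GMMStats(2, 3)   # statistics of a sample of the first class
>>> stats2 = bob.learn.misc.GMMStats(2, 3)   # statistics of another sample of the first class
>>> stats3 = bob.learn.misc.GMMStats(2, 3)   # statistics of a sample of the second class
>>> gmm.acc_statistics(numpy.array([0.5, 4.5, 1.5], 'float64'), stats1)
>>> gmm.acc_statistics(numpy.array([1.5, 5.5, 2.5], 'float64'), stats2)
>>> gmm.acc_statistics(numpy.array([-0.5, -4.5, -1.5], 'float64'), stats3)
>>> TRAINING_STATS = [[stats1, stats2], [stats3]]   # one inner list of GMMStats per class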
In the following, we will allocate a :py:class:`bob.learn.misc.JFABase` machine, which will then be trained.
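Again assuming the constructor takes the prior GMM followed by the ranks of U and V:

>>> jfa_base = bob.learn.misc.JFABase(gmm, 2, 2)   # ranks of U and V both set to 2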
Next, we initialize a trainer, which is an instance of :py:class:`bob.learn.misc.JFATrainer`, as follows:
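The constructor argument below (the number of EM iterations) is an assumption:

>>> jfa_trainer = bob.learn.misc.JFATrainer(10)   # 10 EM iterations per subspace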
The training process is started by calling :py:meth:`bob.learn.misc.JFATrainer.train`.
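The argument order (the base machine followed by the nested list of statistics) is assumed here:

>>> jfa_trainer.train(jfa_base, TRAINING_STATS)   # estimates U, V and D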
Once the training is finished (i.e. the subspaces U, V and D are estimated), the JFA model can be shared and used by several class-specific models. As for the training samples, we first need to extract GMM statistics from the samples. These GMM statistics are manually defined in the following.
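A sketch, reusing the (assumed) :py:class:`bob.learn.misc.GMMStats` attribute names introduced earlier, with arbitrary values:

>>> gse1 = bob.learn.misc.GMMStats(2, 3)   # statistics of a first enrollment sample
>>> gse1.n = numpy.array([0.1442, 1.9276], 'float64')
>>> gse1.sum_px = numpy.array([[55., 1214., 0.], [6., 1.8, 6.]], 'float64')
>>> gse2 = bob.learn.misc.GMMStats(2, 3)   # statistics of a second enrollment sample
>>> gse2.n = numpy.array([0.3998, 1.6002], 'float64')
>>> gse2.sum_px = numpy.array([[2., 4.8, 2.], [2., 18., 0.]], 'float64')
>>> gse = [gse1, gse2]                     # enrollment samples of one class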
Class-specific enrollment can then be performed as follows. This will estimate the class-specific latent variables y and z:
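The ``enrol`` method and its last argument (the number of enrollment iterations) are assumptions about the API:

>>> m = bob.learn.misc.JFAMachine(jfa_base)
>>> jfa_trainer.enrol(m, gse, 5)   # estimates the latent variables y and z for this class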
More information about the training process can be found in [12] and [13].
Inter-Session Variability
The training of the subspaces U and D of an Inter-Session Variability model is performed in two steps. As for JFA, GMM sufficient statistics of the training samples should be computed against the UBM GMM. Once done, we get a training set of GMM statistics. Next, we will allocate an :py:class:`bob.learn.misc.ISVBase` machine, which will then be trained.
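Reusing the training statistics from the JFA example above, and again assuming the constructor takes the prior GMM followed by the rank of U:

>>> isv_base = bob.learn.misc.ISVBase(gmm, 2)   # rank of U set to 2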
Next, we initialize a trainer, which is an instance of :py:class:`bob.learn.misc.ISVTrainer`, as follows:
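The constructor arguments below (the number of EM iterations followed by the relevance factor) are assumptions:

>>> isv_trainer = bob.learn.misc.ISVTrainer(10, 4.)   # 10 EM iterations, relevance factor of 4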
The training process is started by calling :py:meth:`bob.learn.misc.ISVTrainer.train`.
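As for JFA, the argument order is assumed here:

>>> isv_trainer.train(isv_base, TRAINING_STATS)   # estimates U and D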
Once the training is finished (i.e. the subspaces U and D are estimated), the ISV model can be shared and used by several class-specific models. As for the training samples, we first need to extract GMM statistics from the samples. Class-specific enrollment can then be performed, which will estimate the class-specific latent variable z:
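A sketch, again assuming an ``enrol`` method taking the machine, the enrollment statistics and a number of iterations:

>>> isv_machine = bob.learn.misc.ISVMachine(isv_base)
>>> isv_trainer.enrol(isv_machine, gse, 5)   # estimates the latent variable z for this class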
More information about the training process can be found in [14] and [13].
Total Variability (i-vectors)
The training of the subspace T and the residual covariance matrix \Sigma of a Total Variability model is performed in two steps. As for JFA and ISV, GMM sufficient statistics of the training samples should be computed against the UBM GMM. Once done, we get a training set of GMM statistics. Next, we will allocate an instance of :py:class:`bob.learn.misc.IVectorMachine`, which will then be trained.
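Again assuming the constructor takes the prior GMM followed by the rank of T:

>>> ivec_machine = bob.learn.misc.IVectorMachine(gmm, 2)   # subspace T of rank 2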
Next, we initialize a trainer, which is an instance of :py:class:`bob.learn.misc.IVectorTrainer`, as follows:
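The keyword arguments below are assumptions about the trainer's options:

>>> ivec_trainer = bob.learn.misc.IVectorTrainer(update_sigma=True, max_iterations=10)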
The training process is started by calling :py:meth:`bob.learn.misc.IVectorTrainer.train`.
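Since total variability training is unsupervised, the statistics are assumed to be passed as a flat list rather than grouped by class:

>>> ivec_trainer.train(ivec_machine, [stats1, stats2, stats3])   # estimates T and \Sigma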
More information about the training process can be found in [15].