Commit 5895859e authored by Tiago de Freitas Pereira's avatar Tiago de Freitas Pereira

Documenting

parent c3975a2c
@@ -359,6 +359,45 @@ In the previous section, the concept of a `machine` was introduced. A `machine`
is fed by some input data, processes it and returns an output. Machines can be
learnt using trainers in |project|.

Expectation Maximization
========================

Each of the following trainers has its own `initialize`, `eStep` and `mStep` methods, used to train the respective machines.
For example, to train a k-means machine with 10 iterations, you can use the following steps.

.. doctest::
   :options: +NORMALIZE_WHITESPACE

   >>> data = numpy.array([[3,-3,100], [4,-4,98], [3.5,-3.5,99], [-7,7,-100], [-5,5,-101]], dtype='float64')  # Data
   >>> kmeans_machine = bob.learn.em.KMeansMachine(2, 3)  # Create a machine with k=2 clusters of dimensionality 3
   >>> kmeans_trainer = bob.learn.em.KMeansTrainer()  # Create the k-means trainer
   >>> max_iterations = 10
   >>> kmeans_trainer.initialize(kmeans_machine, data)  # Initialize the means with random values
   >>> for i in range(max_iterations):
   ...    kmeans_trainer.eStep(kmeans_machine, data)
   ...    kmeans_trainer.mStep(kmeans_machine, data)
   >>> print(kmeans_machine.means)
   [[  -6.     6.  -100.5]
    [   3.5   -3.5   99. ]]

With that granularity you can train your k-means machine (or any trainer procedure) with your own convergence criteria.
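For instance, a stopping rule based on how much the means move between iterations can be written directly around the E/M loop. The sketch below is plain NumPy, not the bob.learn.em API; the function name, the first-k-points initialization and the tolerance are illustrative assumptions:

```python
import numpy as np

def kmeans_em(data, k, max_iterations=10, tol=1e-5):
    """Plain-NumPy sketch of a k-means E/M loop with a convergence check."""
    means = data[:k].copy()  # naive illustrative initialization: first k points
    for _ in range(max_iterations):
        # E-step: assign each point to its closest mean
        dists = np.linalg.norm(data[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-step: recompute each mean from its assigned points
        new_means = np.array([data[labels == j].mean(axis=0) for j in range(k)])
        if np.linalg.norm(new_means - means) < tol:  # custom convergence criterion
            means = new_means
            break
        means = new_means
    return means

data = np.array([[3, -3, 100], [4, -4, 98], [3.5, -3.5, 99],
                 [-7, 7, -100], [-5, 5, -101]], dtype='float64')
means = kmeans_em(data, k=2)
```

On this toy data the loop recovers the same two cluster centers as the doctest above, stopping as soon as the means stop moving instead of always spending the full iteration budget.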
Furthermore, to make things even simpler, it is possible to train the k-means machine (reproducing the same example as above) using the wrapper :py:func:`bob.learn.em.train`, as in the example below:

.. doctest::
   :options: +NORMALIZE_WHITESPACE

   >>> data = numpy.array([[3,-3,100], [4,-4,98], [3.5,-3.5,99], [-7,7,-100], [-5,5,-101]], dtype='float64')  # Data
   >>> kmeans_machine = bob.learn.em.KMeansMachine(2, 3)  # Create a machine with k=2 clusters of dimensionality 3
   >>> kmeans_trainer = bob.learn.em.KMeansTrainer()  # Create the k-means trainer
   >>> max_iterations = 10
   >>> bob.learn.em.train(kmeans_trainer, kmeans_machine, data, max_iterations=max_iterations)  # Wrapper for the EM trainer
   >>> print(kmeans_machine.means)
   [[  -6.     6.  -100.5]
    [   3.5   -3.5   99. ]]

K-means
=======
.. vim: set fileencoding=utf-8 :
.. Andre Anjos <andre.anjos@idiap.ch>
.. Fri 13 Dec 2013 12:50:06 CET
.. Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
.. Tue 17 Feb 2015 13:50:06 CET
..
.. Copyright (C) 2011-2014 Idiap Research Institute, Martigny, Switzerland
.. _bob.learn.em:

================================================
Expectation Maximization Machine Learning Tools
================================================

.. todolist::

The EM algorithm is an iterative method for estimating the parameters of a statistical model that depends on unobserved latent variables. Each EM iteration alternates between an expectation (E) step, which builds the expectation of the log-likelihood evaluated using the current parameter estimates, and a maximization (M) step, which computes the parameters that maximize the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step [WikiEM]_.
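To make the E/M alternation concrete, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture in plain NumPy. Everything in it (the function name, the extremes-based initialization, the variance floor) is an illustrative assumption, not part of this package's API:

```python
import numpy as np

def gmm_em_1d(x, max_iterations=50):
    """Illustrative EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])             # means: start at the extremes
    sigma2 = np.array([x.var(), x.var()]) + 1e-6  # variances (floored)
    pi = np.array([0.5, 0.5])                     # mixing weights
    for _ in range(max_iterations):
        # E-step: responsibilities = posterior probability of each component
        dens = (pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma2))
                / np.sqrt(2 * np.pi * sigma2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        n_k = resp.sum(axis=0)
        pi = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k + 1e-6
    return pi, mu, sigma2

# Two well-separated blobs of 50 points each, around 0 and around 10
x = np.concatenate([np.linspace(-0.5, 0.5, 50), np.linspace(9.5, 10.5, 50)])
pi, mu, sigma2 = gmm_em_1d(x)
```

Each pass first computes expectations under the current parameters and then updates the parameters in closed form, which is exactly the alternation described above.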
This package contains a set of Pythonic bindings for Bob's Machines and Trainers.

Documentation
-------------
@@ -34,6 +33,7 @@ References
.. [ElShafey2014] *Laurent El Shafey, Chris McCool, Roy Wallace, Sebastien Marcel*. **'A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition'**, TPAMI'2014
.. [PrinceElder2007] *Prince and Elder*. **'Probabilistic Linear Discriminant Analysis for Inference About Identity'**, ICCV'2007
.. [LiFu2012] *Li, Fu, Mohammed, Elder and Prince*. **'Probabilistic Models for Inference about Identity'**, TPAMI'2012
.. [WikiEM] `Expectation Maximization <http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm>`_