From 5895859e19c89285767f952d8ad05975301a2f98 Mon Sep 17 00:00:00 2001
From: Tiago Freitas Pereira <tiagofrepereira@gmail.com>
Date: Tue, 17 Feb 2015 18:28:49 +0100
Subject: [PATCH] Documenting

---
 doc/guide.rst | 39 +++++++++++++++++++++++++++++++++++++++
 doc/index.rst | 16 ++++++++--------
 2 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/doc/guide.rst b/doc/guide.rst
index 0c11a33..cfb2d76 100644
--- a/doc/guide.rst
+++ b/doc/guide.rst
@@ -359,6 +359,45 @@ In the previous section, the concept of a `machine` was introduced. A `machine`
 is fed by some input data, processes it and returns an output. Machines can be
 learnt using trainers in |project|.
 
+Expectation Maximization
+========================
+
+Each of the following trainers has its own `initialize`, `eStep` and `mStep` methods used to train the respective machine.
+For example, to train a k-means machine for 10 iterations you can use the following steps.
+
+.. doctest::
+   :options: +NORMALIZE_WHITESPACE
+
+   >>> data           = numpy.array([[3,-3,100], [4,-4,98], [3.5,-3.5,99], [-7,7,-100], [-5,5,-101]], dtype='float64') # Training data
+   >>> kmeans_machine = bob.learn.em.KMeansMachine(2, 3) # Create a machine with k=2 clusters and feature dimension 3
+   >>> kmeans_trainer = bob.learn.em.KMeansTrainer() # Create the k-means trainer
+   >>> max_iterations = 10
+   >>> kmeans_trainer.initialize(kmeans_machine, data) # Initialize the means with random values
+   >>> for i in range(max_iterations):
+   ...   kmeans_trainer.eStep(kmeans_machine, data)
+   ...   kmeans_trainer.mStep(kmeans_machine, data)
+   >>> print(kmeans_machine.means)
+   [[  -6.     6.  -100.5]
+   [   3.5   -3.5   99. ]]
+
+
+With this granularity you can train your k-means machine (or run any other training procedure) using your own convergence criterion; a convergence-based variant is sketched after the next example.
+Furthermore, to make things even simpler, it is possible to train the k-means machine (reproducing the example above) using the wrapper :py:func:`bob.learn.em.train` as in the example below:
+
+.. doctest::
+   :options: +NORMALIZE_WHITESPACE
+
+   >>> data           = numpy.array([[3,-3,100], [4,-4,98], [3.5,-3.5,99], [-7,7,-100], [-5,5,-101]], dtype='float64') # Training data
+   >>> kmeans_machine = bob.learn.em.KMeansMachine(2, 3) # Create a machine with k=2 clusters and feature dimension 3
+   >>> kmeans_trainer = bob.learn.em.KMeansTrainer() # Create the k-means trainer
+   >>> max_iterations = 10
+   >>> bob.learn.em.train(kmeans_trainer, kmeans_machine, data, max_iterations=max_iterations) # Wrapper that runs the full EM loop
+   >>> print(kmeans_machine.means)
+   [[  -6.     6.  -100.5]
+   [   3.5   -3.5   99. ]]
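+
+With the explicit loop you can also stop on convergence instead of after a fixed number of iterations.
+The sketch below is illustrative rather than part of the API: it relies only on the `means` attribute used above, and the tolerance value is an arbitrary choice.
+
+.. code-block:: python
+
+   import numpy
+
+   kmeans_trainer.initialize(kmeans_machine, data)
+   previous_means = numpy.copy(kmeans_machine.means)
+   for i in range(max_iterations):
+       kmeans_trainer.eStep(kmeans_machine, data)
+       kmeans_trainer.mStep(kmeans_machine, data)
+       # Stop once the cluster means barely move between iterations
+       # (1e-8 is an arbitrary, illustrative tolerance)
+       shift = numpy.linalg.norm(kmeans_machine.means - previous_means)
+       if shift < 1e-8:
+           break
+       previous_means = numpy.copy(kmeans_machine.means)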
+
+
+
 K-means
 =======
 
diff --git a/doc/index.rst b/doc/index.rst
index a160f0a..5592eba 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,19 +1,18 @@
 .. vim: set fileencoding=utf-8 :
-.. Andre Anjos <andre.anjos@idiap.ch>
-.. Fri 13 Dec 2013 12:50:06 CET
+.. Tiago de Freitas Pereira <tiago.pereira@idiap.ch>
+.. Tue 17 Feb 2015 13:50:06 CET
 ..
 .. Copyright (C) 2011-2014 Idiap Research Institute, Martigny, Switzerland
 
 .. _bob.learn.em:
 
-======================================
- Miscellaneous Machine Learning Tools
-======================================
+================================================
+ Expectation Maximization Machine Learning Tools
+================================================
 
-.. todolist::
+The EM algorithm is an iterative method that estimates the parameters of a statistical model which depends on unobserved latent variables. Each EM iteration alternates between an expectation (E) step, which builds the expectation of the log-likelihood evaluated using the current parameter estimates, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step [WikiEM]_.
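+
+As a minimal sketch (the ``em`` helper below is hypothetical, not part of the package), every trainer provided here alternates these two steps; see the user guide for concrete, runnable examples:
+
+.. code-block:: python
+
+   def em(trainer, machine, data, max_iterations=10):
+       trainer.initialize(machine, data)  # initial guess for the parameters
+       for i in range(max_iterations):
+           trainer.eStep(machine, data)   # E-step: expectation of the log-likelihood
+           trainer.mStep(machine, data)   # M-step: maximize it w.r.t. the parameters
+       return machine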
 
-This package includes various machine learning utitilities which have not yet
-been ported into the new framework.
+This package contains a set of Pythonic bindings for Bob's Machines and Trainers.
 
 Documentation
 -------------
@@ -34,6 +33,7 @@ References
 ..   [ElShafey2014] *Laurent El Shafey, Chris McCool, Roy Wallace, Sebastien Marcel*. **'A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition'**, TPAMI'2014
 ..   [PrinceElder2007] *Prince and Elder*. **'Probabilistic Linear Discriminant Analysis for Inference About Identity'**, ICCV'2007
 ..   [LiFu2012] *Li, Fu, Mohammed, Elder and Prince*. **'Probabilistic Models for Inference about Identity'**,  TPAMI'2012
+..   [WikiEM] `Expectation Maximization <http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm>`_
 
 
 
-- 
GitLab