diff --git a/README.rst b/README.rst
index 2572f073b6a99d60d264fbc92ff32632de743d0d..aa7433dd180465a0eb8f64f6f48b67ab47e5f6b3 100644
--- a/README.rst
+++ b/README.rst
@@ -21,25 +21,22 @@
 ===========================
 
 This package provides pythonic bindings for Kaldi_ functionality so it can be
-seemlessly integrated with Python-based workflows. It is a part fo the signal-processing and machine learning toolbox
-Bob_.
+seamlessly integrated with Python-based workflows. It is part of the
+signal-processing and machine learning toolbox Bob_.
 
 
 Installation
 ------------
 
-To install the package, install firt bob, and then install the bob.kaldi package:
+This package depends on both Bob_ and Kaldi_. To install Bob_, follow our
+installation_ instructions. Kaldi_ is also available in our conda channels,
+so it can be installed with conda as well. Once Bob_ is installed, install
+Kaldi_ and this package as follows:
 
-  $ conda install bob kaldi
+  # BOB_ENVIRONMENT is the name of your conda environment.
+  $ source activate BOB_ENVIRONMENT
+  $ conda install kaldi
   $ pip install bob.kaldi
-  
-To be able to work properly, some dependent packages are required to be installed.
-Please make sure that you have read the `Dependencies
-<https://github.com/idiap/bob/wiki/Dependencies>`_ for your operating system.
-
-This package also requires that Kaldi_ is properly installed alongside the
-Python interpreter you're using, under the directory ``<PREFIX>/lib/kaldi``,
-along with all necessary scripts and compiled binaries.
 
 
 Documentation
@@ -48,7 +45,7 @@ Documentation
 For further documentation on this package, please read the `Stable Version
 <http://pythonhosted.org/bob.kaldi/index.html>`_ or the `Latest Version
 <https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.kaldi/master/index.html>`_
-of the documentation.  For a list of tutorials on this or the other packages ob
+of the documentation.  For a list of tutorials on this or the other packages of
 Bob_, or information on submitting issues, asking questions and starting
 discussions, please visit its website.
 
diff --git a/doc/index.rst b/doc/index.rst
index 8003dc4f963e8060dda8c1510b21512d4c530dba..7ca83e4f3c59dc18940ce663c223d02c884b0372 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -14,7 +14,7 @@
    import bob.io.audio
    import tempfile
    import os
-   
+
 .. _bob.kaldi:
 
 
@@ -48,14 +48,14 @@ MFCC Extraction
 ---------------
 
 Two functions are implemented to extract MFCC features
-`bob.kaldi.mfcc` and `bob.kaldi.mfcc_from_path`. The former function
-accepts the speech samples as `numpy.ndarray`, whereas the latter the
-filename as `str`, returning the features as `numpy.ndarray`:
+:any:`bob.kaldi.mfcc` and :any:`bob.kaldi.mfcc_from_path`. The former accepts
+the speech samples as a :any:`numpy.ndarray`, whereas the latter accepts the
+filename as a :any:`str`; both return the features as a :any:`numpy.ndarray`:
 
 1. `bob.kaldi.mfcc`
-   
+
    .. doctest::
-      
+
       >>> sample = pkg_resources.resource_filename('bob.kaldi', 'test/data/sample16k.wav')
       >>> data = bob.io.audio.reader(sample)
       >>> feat = bob.kaldi.mfcc(data.load()[0], data.rate, normalization=False)
@@ -63,20 +63,20 @@ filename as `str`, returning the features as `numpy.ndarray`:
       (317, 39)
 
 2. `bob.kaldi.mfcc_from_path`
-   
+
    .. doctest::
-      
+
       >>> sample = pkg_resources.resource_filename('bob.kaldi', 'test/data/sample16k.wav')
       >>> feat = bob.kaldi.mfcc_from_path(sample)
       >>> print (feat.shape)
       (317, 39)
 
-   
+
 ====================
  Speaker recognition
 ====================
-	   
-		   
+
+
 UBM training and evaluation
 ---------------------------
 
@@ -105,13 +105,13 @@ Following guide describes how to run whole speaker recognition experiments:
 1. To run the UBM-GMM with MAP adaptation speaker recognition experiment, run:
 
 .. code-block:: sh
-		
+
 	verify.py -d 'mobio-audio-male' -p 'energy-2gauss' -e 'mfcc-kaldi' -a 'gmm-kaldi' -s exp-gmm-kaldi --groups {dev,eval} -R '/your/work/directory/' -T '/your/temp/directory' -vv
 
 2. To run the ivector+plda speaker recognition experiment, run:
 
 .. code-block:: sh
-		
+
 	verify.py -d 'mobio-audio-male' -p 'energy-2gauss' -e 'mfcc-kaldi' -a 'ivector-plda-kaldi' -s exp-ivector-plda-kaldi --groups {dev,eval} -R '/your/work/directory/' -T '/your/temp/directory' -vv
 
 3. Results: