diff --git a/doc/running_mccn.md b/doc/running_mccn.md
index 87d2202b6a1229a18a4f3ed7f6def33f190dd769..1ab0d51f9774aefc1807571f4b8f3dd2ed30be1e 100644
--- a/doc/running_mccn.md
+++ b/doc/running_mccn.md
@@ -21,9 +21,10 @@ image with dimensions ``NUM_CHANNELSxHxW``.
 
 
 .. code-block:: sh
+
 	./bin/spoof.py \
-	<PATH_TO_CONFIG>/wmca_grandtest_dbconfig.py \
-	<PATH_TO_CONFIG>/wmca_config_pytorch_extractor.py \
+	wmca-all \
+	mccnn \
 	--execute-only preprocessing \
 	--sub-directory <PIPELINE_FOLDER> \
 	--grid idiap
@@ -35,8 +36,8 @@ which is notated from here onwards as  ``<PREPROCESSED_FOLDER>``.
 
 Training MCCNN
 --------------
-Once the preprocessing is done, the next step is to train the MCCNN architecture. All the parameters required to train MCCNN are defined in the configuration file ``wmca_mccn.py`` file. 
-The ``wmca_mccn.py`` file should contain atleast the network definition and the dataset class to be used for training. 
+Once the preprocessing is done, the next step is to train the MCCNN architecture. All the parameters required to train MCCNN are defined in the configuration file ``wmca_mccnn.py``.
+The ``wmca_mccnn.py`` file should contain at least the network definition and the dataset class to be used for training.
 It can also define the transforms, the number of channels in the MCCNN, and training parameters such as the number of epochs and the learning rate.
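A configuration file of this shape can be sketched as below. This is only an illustrative outline; every name in it is an assumption for the sketch, not the real API shipped with the package, so consult the actual ``wmca_mccnn.py`` for the authoritative definitions.

```python
# Illustrative sketch of a wmca_mccnn.py-style configuration file.
# All names below are assumptions for illustration, not the package's real API.
from torch import nn
from torchvision import transforms

# Number of input channels fed to MCCNN (e.g. grayscale, depth, infrared, thermal)
NUM_CHANNELS = 4

# Network definition: the MCCNN architecture to train (placeholder network here)
network = nn.Sequential(
    nn.Conv2d(NUM_CHANNELS, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),  # binary output: bona fide vs. attack
)

# Transforms applied to each preprocessed sample
transform = transforms.Compose([transforms.ToTensor()])

# dataset = ...  # the dataset class providing the preprocessed samples

# Training parameters
epochs = 25
learning_rate = 1e-4
batch_size = 32
```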
 
 
@@ -45,7 +46,7 @@ Once the config file is defined, training the network can be done with the follo
 .. code-block:: sh
 
     ./bin/train_mccnn.py \                   # script used for MCCNN training
-    <PATH_TO_TRAINER_CONFIG>/wmca_mccn.py \  # configuration file defining the MCCNN network, database, and training parameters
+    <PATH_TO_TRAINER_CONFIG>/wmca_mccnn.py \ # configuration file defining the MCCNN network, database, and training parameters
     -vv                                      # set verbosity level
 
 People at Idiap can benefit from the GPU cluster by running the training as follows:
@@ -57,7 +58,7 @@ People in Idiap can benefit from GPU cluster, running the training as follows:
     --log-dir <FOLDER_TO_SAVE_THE_RESULTS>/logs/ \ # substitute the path to save the logs to (Idiap only)
     --environment="PYTHONUNBUFFERED=1" -- \        #
     ./bin/train_mccnn.py \                         # script used for MCCNN training
-    <PATH_TO_TRAINER_CONFIG>/wmca_mccn.py \        # configuration file defining the MCCNN network, database, and training parameters
+    <PATH_TO_TRAINER_CONFIG>/wmca_mccnn.py \       # configuration file defining the MCCNN network, database, and training parameters
     --use-gpu \                                    # enable the GPU mode
     -vv                                            # set verbosity level
 
@@ -68,7 +69,7 @@ For a more detailed documentation of functionality available in the training scr
 
     ./bin/train_mccnn.py --help   # note: remove ./bin/ if buildout is not used
 
-Please inspect the corresponding configuration file, ``wmca_mccn.py`` for example, for more details on how to define the database, network architecture and training parameters.
+Please inspect the corresponding configuration file, ``wmca_mccnn.py`` for example, for more details on how to define the database, network architecture and training parameters.
 
 The protocols and channels used in the experiments can be easily configured in the configuration file.
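For instance, the protocol and channel selection inside the configuration file might look like the fragment below. The variable names are illustrative assumptions, not necessarily the package's actual configuration keys.

```python
# Illustrative fragment: selecting the protocol and the channels used for training.
# Variable names are assumptions for illustration only.
PROTOCOL = "grandtest"

# WMCA channels available after preprocessing
CHANNELS = ["color", "depth", "infrared", "thermal"]

# MCCNN expects one adapted input branch per selected channel
NUM_CHANNELS = len(CHANNELS)
```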
 
@@ -84,8 +85,8 @@ For **grandtest** protocol this can be done as follows.
 .. code-block:: sh
 
 	./bin/spoof.py \
-	<PATH_TO_DATABASE_CONFIG>/wmca_grandtest_dbconfig.py \
-	<PATH_TO_EXTRACTORS>/wmca_config_pytorch_extractor.py \
+	wmca-all \
+	mccnn \
 	--protocol grandtest \
 	--sub-directory <FOLDER_TO_SAVE_MCCNN_RESULTS>  -vv